Dataset columns:
- text: string (lengths 454 to 608k)
- url: string (lengths 17 to 896)
- dump: string (lengths 9 to 15)
- source: string (1 class)
- word_count: int64 (101 to 114k)
- flesch_reading_ease: float64 (50 to 104)
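A minimal sketch of how rows with this schema could be filtered programmatically. It assumes the records have been exported to a hypothetical rows.jsonl file (one JSON object per line, using the column names above); the file name and the thresholds are illustrative, not part of the dataset description.

import json

# Load the exported rows (hypothetical file; columns: text, url, dump,
# source, word_count, flesch_reading_ease).
with open("rows.jsonl", encoding="utf-8") as fh:
    rows = [json.loads(line) for line in fh]

# Keep shorter, easier-to-read documents.
selected = [r for r in rows
            if r["word_count"] < 1000 and r["flesch_reading_ease"] > 55.0]

for r in selected:
    print(r["dump"], r["word_count"], r["url"])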
Indentation is broken Bug Description For 17.10, please treat this as a 0-day SRU bug report. [Impact] Without this fix, in any text editor that uses this framework (Kate and KDevelop are two of them), the text editor unexpectedly and automatically indents when the user types any of the following characters directly after a function (which is indented): u, n, d, e, f, i and n. [Test Case] 1. Open Kate. 2. In the lower right corner, set the syntax highlighting to Python (the default for .py files). 3. Type the following code, go to a new line, and backspace so your cursor is not indented like the print function: def func(): print("foo") 4. Type "abcd". The cursor should stay where it is, but instead it autoindents after "d" is pressed. [Regression Potential] There is little to no regression potential, unless somehow the trigger character is undefined but needs to automatically indent for some reason (in which case it should be defined in the template anyway; it shouldn't need hardcoding in this spot). Hello Simon, or anyone else affected, Accepted ktexteditor into artful-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/ktexteditor/5.38.0-0; details of your testing will help us make a better decision. Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!
https://bugs.launchpad.net/ubuntu/+source/ktexteditor/+bug/1724709
CC-MAIN-2022-33
refinedweb
235
55.13
Set movement/walking animations? Hello, does anyone know how to set movement / walking animations? @LeeC2202 @stillhere @aimless @Frazzlee @NotCrunchyTaco And to reset: Function.Call(Hash.RESET_PED_MOVEMENT_CLIPSET, Game.Player.Character, 0.0f); @aimless said in Set movement/walking animations?: How would I set this as the movement clipset? anim@sports@ballgame@handball@ ball_sprint @NotCrunchyTaco I think you have to use the ones with move in them. @aimless It kind of does. Other than this animation I don't know how I will get dribbling for BasketballV? @NotCrunchyTaco Have you ever looked into the IK (Inverse Kinematics) functions on a Ped? There's a native called SET_PED_CAN_ARM_IK. I have never actually tried but I always wondered if that would let you override the arm position by moving the hand bones. If that was the case, you could probably use normal walk/run animations and use IK to match the hand to the ball. The way IK usually works is you move the end of the bone chain and the model moves to maintain set limits with all the bones attached to it. Each IK node has a weight and that controls how much a node can be pulled by the IK chain. You see it in use when you walk over different height terrain and the foot changes position based on the height of the ground beneath it. @LeeC2202 Good suggestion! I am familiar with IKs and IK chains from when I used to do 3D renders with Cinema 4D. I could try to mess around with those somehow..... - sollaholla @NotCrunchyTaco Why don't you just loop the animation on the upper body? You can actually have animations play while the player is walking. Here's an example using your anim: PlayerPed.Task.PlayAnimation("anim@sports@ballgame@handball@", "ball_idle", 8.0f, -8.0f, -1, (AnimationFlags) 49, 0.0f); (AnimationFlags)49 is telling the script "We want to play this on the upper body, and loop the animation infinitely." Here's more info on animation flags, ripped straight from the NativeDB Odd number : loop infinitely Even number : Freeze at last frame Multiple of 4: Freeze at last frame but controllable 01 to 15 > Full body 10 to 31 > Upper body 32 to 47 > Full body > Controllable 48 to 63 > Upper body > Controllable Controllable means that you can play the animation and it will blend with the player's movement. Usually the animation will discontinue playing after moving too much, but using 49 tells the game you want the anim to loop no matter what (since it's an odd number, and it's within the controllable range). Hope this helps, bud. It's what I'm using for the Spider-Man mod. - sollaholla @NotCrunchyTaco Also for ball movement you can try what I used for my batman bats script that allows you to bounce a coordinate up and down (in your case it would be on the Z axis): -- Put this in a utility file or some static class -- public static float PingPong(float t, float length) { t = Repeat(t, length * 2f); return length - Math.Abs(t - length); } public static float Repeat(float t, float length) { return t - Floor(t / length) * length; } public static float Floor(float f) { return (float)Math.Floor(f); } -- And use this to bounce the ball -- var position = ball.Position; position.Z = Utils.PingPong(Game.GameTime / 100, bounceSpeed) + Game.Player.Character.Position.Z - someOffset; ball.Position = Vector3.Lerp(ball.Position, position, Game.LastFrameTime * smoothSpeed); @sollaholla Thanks man. I was thinking of using the upper-body-only tag. Thank you for the code and the examples
https://forums.gta5-mods.com/topic/6605/set-movement-walking-animations/4
CC-MAIN-2018-39
refinedweb
593
65.22
#include <cstdio> #include <cstdlib> #include <iostream> using namespace std; char* saying; int n; int x; int main (/*int nArg, char* pszArgs[]*/) { //attempt to read a string of variable length cout << "here goes nothing\n"; cin>>n; cin.width(++n); saying = new char[n]; cout << endl; cin >> *saying; cout << endl; x = n; n = 0; while (n <= x) { cout<< *(saying + n); n++; } delete [] saying; return 0; } I am new to C++ and am attempting to teach myself how to use pointers to create variable length arrays. When I run this program it successfully stores the data from cin to *saying (or at least I think it does), but when I go to display the content of the array I get the first char returned and then multiple garbage letters, for example "====2z". I tried changing line 23 to: cout<< *saying; and deleting the while loop, but then I only got the first letter out. What could I do to make the program display all of what the user inputs? And I don't want the answer "use #include <string>". Another thing I would like to know is how do I read how long the data is that the user inputs, without storing it in a string and without asking? Any help is appreciated, thanks The Blue Phoenix
http://www.dreamincode.net/forums/topic/193274-dynamic-memory-allocation/
CC-MAIN-2017-22
refinedweb
216
70.16
#include <mlRuntimeType.h> RuntimeType represents the TYPE of an associated class. Thread-safety: This class is reentrant. An instance of RuntimeType is associated with a class (e.g., an image processing module like AddModule) and contains, among other things, the class name, the name of the source dll, the parent type, and a callback used to create instances of the class. This class has static access to a dictionary in which entries for all created instances of RuntimeType can be found. Note that this class does not have a copy constructor or an assignment operator since a RuntimeType needs to be unique in the dictionary. Create a new type with another name if needed. Definition at line 49 of file mlRuntimeType.h. Constructor. Passes the string name of the class type className and the name of the dll in dllName. If no dllName is passed, the name of the most recently loaded dll is used. If className is passed as NULL, an invalid default bad type is constructed. Destructor. Cleans up everything. Returns true if this (runtime)type knows how to create an instance of the class. Definition at line 90 of file mlRuntimeType.h. Referenced by ml::CopyBase< BASE_DERIVED_CLASS >::checkObjectType(), and ml::ListTemplate< T >::clone(). Creates and returns an instance of the class (NULL if such an instance is unavailable). Definition at line 94 of file mlRuntimeType.h. Referenced by ml::ListTemplate< T >::clone(). Returns the callback used to create an instance of the object or NULL on abstract classes. Definition at line 97 of file mlRuntimeType.h. Returns the null terminated string name of the source dll or "" if it is unknown. Definition at line 83 of file mlRuntimeType.h. Returns the id which describes at which place the type was registered in the Runtime Type System. Definition at line 100 of file mlRuntimeType.h. Returns the null terminated string name of the class. Returns "BadType" on error. Definition at line 80 of file mlRuntimeType.h. Referenced by ml::CopyBase< BASE_DERIVED_CLASS >::checkObjectType(), and ml::ListContainerTemplate< T >::ListContainerTemplate(). Returns the parent (runtime)type of the type, i.e., the type it is derived from, or NULL if not derived from any other type. Definition at line 87 of file mlRuntimeType.h. Returns true if this (runtime)type is derived from the argument (runtime)type runtimeType. Note that a type is always considered as derived from itself, i.e., a type A is derived from a type A. If a NULL pointer is passed, false is returned. Referenced by ml::CopyBase< BASE_DERIVED_CLASS >::checkObjectType().
http://www.mevislab.de/fileadmin/docs/current/MeVisLab/Resources/Documentation/Publish/SDK/ToolBoxReference/classml_1_1RuntimeType.html
crawl-003
refinedweb
402
60.51
4.8. Processing large NumPy arrays with memory mapping Sometimes, we need to deal with NumPy arrays that are too big to fit in the system memory. A common solution is to use memory mapping and implement out-of-core computations. The array is stored in a file on the hard drive, and we create a memory-mapped object to this file that can be used as a regular NumPy array. Accessing a portion of the array results in the corresponding data being automatically fetched from the hard drive. Therefore, we only consume what we use. How to do it... 1. Let's create a memory-mapped array in write mode: import numpy as np nrows, ncols = 1000000, 100 f = np.memmap('memmapped.dat', dtype=np.float32, mode='w+', shape=(nrows, ncols)) 2. Let's feed the array with random values, one column at a time because our system's memory is limited! for i in range(ncols): f[:, i] = np.random.rand(nrows) We save the last column of the array: x = f[:, -1] 3. Now, we flush memory changes to disk by deleting the object: del f 4. Reading a memory-mapped array from disk involves the same memmap() function. The data type and the shape need to be specified again, as this information is not stored in the file: f = np.memmap('memmapped.dat', dtype=np.float32, shape=(nrows, ncols)) np.array_equal(f[:, -1], x) True del f This method is not well suited to long-term storage of data or to data sharing. The following recipe in this chapter will show a better way based on the HDF5 file format. How it works... Memory mapping lets you work with huge arrays almost as if they were regular arrays. Python code that accepts a NumPy array as input will also accept a memmap array. However, we need to ensure that the array is used efficiently. That is, the array is never loaded as a whole (otherwise, it would waste system memory and would negate any advantage of the technique). Memory mapping is also useful when you have a huge file containing raw data in a homogeneous binary format with a known data type and shape. In this case, an alternative solution is to use NumPy's fromfile() function with a file handle created with Python's native open() function. Using f.seek() lets you position the cursor at any location and load a given number of bytes into a NumPy array (a short sketch of this approach follows the 'See also' list below). There's more... Another way of dealing with huge NumPy matrices is to use sparse matrices through SciPy's sparse subpackage. It is appropriate when matrices contain mostly zeros, as is often the case with simulations of partial differential equations, graph algorithms, or specific machine learning applications. Representing matrices as dense structures can be a waste of memory, and sparse matrices offer a more efficient compressed representation. Using sparse matrices in SciPy is not straightforward as multiple implementations exist. Each implementation is best for a particular kind of application. Here are a few references: - the SciPy lecture notes about sparse matrices - the reference documentation on sparse matrices - the documentation of memmap See also - Manipulating large arrays with HDF5 - Performing out-of-core computations on large arrays with Dask
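As a concrete illustration of the fromfile()/seek() alternative described in the "How it works" section, here is a minimal sketch. It reuses the memmapped.dat file and the float32, 100-column layout written in the recipe; the row index chosen is arbitrary.

import numpy as np

nrows, ncols = 1000000, 100   # shape used when memmapped.dat was written
row = 42                      # arbitrary row to load

with open('memmapped.dat', 'rb') as fh:
    # Each float32 value takes 4 bytes, so this seeks to the start of the row.
    fh.seek(row * ncols * 4)
    # Read exactly one row into a regular in-memory NumPy array.
    data = np.fromfile(fh, dtype=np.float32, count=ncols)

print(data.shape)   # (100,)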
https://ipython-books.github.io/48-processing-large-numpy-arrays-with-memory-mapping/
CC-MAIN-2019-09
refinedweb
541
54.93
Advancing Open Domain Question Answering with RocketQA Overview We discussed building an Open Domain Question Answering (ODQA) system with Jina in our previous post. The two-stage pipeline consisting of a retriever and a reader is widely used in practice. As the reader part is relatively mature, most of the recent research focuses on the retriever part, which is still under active development. RocketQA is one of the successful attempts in this direction. Until July 2021, it was the top algorithm on the MS MARCO leaderboard. Recently, RocketQA released its code and models. We are proud to partner with RocketQA, which will allow you to use the RocketQA pre-trained models directly from Jina Hub. In this post, we'll introduce the idea of RocketQA and show you how to use it to build an ODQA system using Jina. Where does DPR Fail? As one of the dense-vector methods, DPR (Dense Passage Retrieval) was the first attempt to show that dense vector retrieval can outperform the term-based methods with a simple training procedure. Negative sampling is one of the most important methods used in DPR. DPR proposed using gold passages (correct question-answer pairs) from a mini-batch as positive samples and BM25 to generate the negative samples. To be more specific, given a positive sample, DPR uses BM25 to retrieve the most matched passages that do not contain the right answers as negative samples. By feeding both the positive and negative samples to the training procedure, the dual encoder model learns to create a vector space. In this vector space, relevant questions and answers will have smaller distances than the irrelevant pairs. This method works well in general. But after a close look, we will notice that some of the negative samples are false negatives because of the noisy training data. For example, "DNA is made up of molecules called nucleotides" is a correct answer, but it is considered a negative sample. Another issue with negative sampling in DPR is that the negative samples are generated from the same batch as the positive samples. This setting is very different from the actual use cases. In practice, both the matched and the mismatched passages are retrieved from the whole corpus instead of a set of selected passages. This mismatch leads to the situation where the model was trained with some simple negative samples but is asked to distinguish hard negative samples during inference. How does RocketQA work? RocketQA introduced a cross-attention encoder to rerank the retrieved results and a four-step pipeline to improve the training procedure. Let's understand both of them in detail: Cross Encoder Besides the dual encoders for independently encoding the questions and passages, RocketQA uses another transformer-based model to learn the cross-correlation between the questions and the passages. This model is called a cross encoder, which is more precise due to the cross attention between the question and the passages. However, it requires more computation and can only be applied to a limited number of candidates. Four-step Training Procedure RocketQA uses a four-step procedure to train the dual encoder and the cross encoder in an end-to-end pipeline. - RocketQA uses cross-batch sampling to generate hard negative samples. This solves the issue of in-batch sampling in DPR, so that the model becomes aware of negative samples that are hard to distinguish. - In this step, RocketQA trains the cross encoder. Instead of using BM25 to generate negative samples as in step 1,
RocketQA uses the dual encoder trained in step 1 to filter the false positives out of the hard negative samples. Another benefit of this is that the cross encoder is finetuned with the data distribution learned by the dual encoder. - In this step, RocketQA retrains the dual encoder. To filter out the false-positive samples, RocketQA uses both the dual encoder from step 1 and the cross encoder from step 2 to further remove the data noise. - In this step, RocketQA further filters the noise in the data using both the cross encoder from step 2 and the dual encoder from step 3. As both the cross encoder and the dual encoder have been trained, RocketQA lets you use them to generate training data from the unlabeled dataset. This augmented data is combined with the labelled data to finetune the dual encoder. After training, the dual encoder and cross encoder are used to retrieve the passages. The cross encoder will return a confidence score for each question-answer pair. Comparison between Conventional DPR and RocketQA Using an example, the following table will illustrate the advantages of RocketQA over the conventional DPR model: Using RocketQA with Jina RocketQA is available at Jina Hub, and it integrates seamlessly with Jina. We create a Flow for indexing the Document in the code below. The Document passages are stored in the .tags['para'] field. You can pass the .tags['title'] to improve accuracy. In the index Flow, we use RocketQADualEncoder to encode the passages into vectors and store them with SimpleIndexer. from jina import Document, Flow # Creating a Document object doc = Document(tags={'title': title, 'para': para}) # Creating the indexing flow with RocketQADualEncoder and SimpleIndexer flow = (Flow() .add(uses='jinahub+docker://RocketQADualEncoder', uses_with={'use_cuda': False}) .add(uses='jinahub://SimpleIndexer', uses_metas={'workspace': 'workspace_rocketqa'})) # Indexing the Documents using the flow with flow: flow.post(on='/index', inputs=[doc,]) For querying, we will create a query Flow as shown below. Besides the RocketQADualEncoder, we also use RocketQAReranker, which implements the cross encoder part of RocketQA, to rerank the results. from jina import Flow # Creating the Query flow flow = (Flow(use_cors=True, protocol='http', port_expose=45678) .add(uses='jinahub+docker://RocketQADualEncoder', uses_with={'use_cuda': False}) .add(uses='jinahub://SimpleIndexer', uses_metas={'workspace': 'workspace_rocketqa'}, uses_with={'match_args': {'limits': 10}}) .add(uses='jinahub+docker://RocketQAReranker', uses_with={'model': 'v1_marco_ce', 'use_cuda': False})) # Opening the query flow for incoming queries with flow: while True: question = input('Question?: ') if not question: break flow.post(on='/search', inputs=Document(text=question), on_done=print_answers) You can find the complete source code here. A sketch of the print_answers callback used above is given after the Summary. Summary In this post, we have given a short introduction to RocketQA and shown you how to use it with Jina. The ODQA field is an active topic attracting more and more researchers. At Jina AI, we are committed to reducing the friction between academic research and its real-world applications by making state-of-the-art frameworks accessible to all. Stay tuned for more updates and Happy searching!
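The query Flow above hands its results to a print_answers callback that the post never defines. Here is a minimal sketch of what such a callback could look like; it is an assumption based on the Flow above (passage text stored under tags['para']), not code from the original post.

def print_answers(resp):
    # Callback for the query Flow: each Document in the response is a question,
    # and its matches are the passages reranked by RocketQAReranker.
    for doc in resp.docs:
        print('Question:', doc.text)
        for match in doc.matches:
            # The passage text was stored under tags['para'] at index time.
            print('  ->', match.tags.get('para'))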
https://jina.ai/blog/2022-01-03_RocketQA/
CC-MAIN-2022-05
refinedweb
1,083
54.83
Logistic regression explained Logistic Regression is one of the first models newcomers to Deep Learning implement. The focus of this tutorial is to show how to do logistic regression using the Gluon API. Before anything else, let's import the required packages for this tutorial. import numpy as np import mxnet as mx from mxnet import nd, autograd, gluon from mxnet.gluon import nn, Trainer from mxnet.gluon.data import DataLoader, ArrayDataset mx.random.seed(12345) # Added for reproducibility In this tutorial we will use a fake dataset, which contains 10 features drawn from a normal distribution with mean equal to 0 and standard deviation equal to 1, and a class label, which can be either 0 or 1. The size of the dataset is an arbitrary value. The function below helps us generate the dataset. The class label y is generated via non-random logic, so the network has a pattern to look for. A boundary of 3 is selected to make sure that the number of positive examples is smaller than the number of negative ones, but not too small. def get_random_data(size, ctx): x = nd.normal(0, 1, shape=(size, 10), ctx=ctx) y = x.sum(axis=1) > 3 return x, y Also, let's define a set of hyperparameters that we are going to use later. Since our model is simple and the dataset is small, we are going to use CPU for calculations. Feel free to change it to GPU for a more advanced scenario. ctx = mx.cpu() train_data_size = 1000 val_data_size = 100 batch_size = 10 Working with data To work with data, Apache MXNet provides Dataset and DataLoader classes. The former is used to provide indexed access to the data, the latter is used to shuffle and batchify the data. To learn more about working with data in Gluon, please refer to the Gluon Datasets and DataLoaders tutorial. Below we define the training and validation datasets, which we are going to use in the tutorial. train_x, train_ground_truth_class = get_random_data(train_data_size, ctx) train_dataset = ArrayDataset(train_x, train_ground_truth_class) train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) val_x, val_ground_truth_class = get_random_data(val_data_size, ctx) val_dataset = ArrayDataset(val_x, val_ground_truth_class) val_dataloader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True) Defining and training the model The only requirement for logistic regression is that the last layer of the network must be a single neuron. Apache MXNet allows us to do so by using a Dense layer and setting the number of units to 1. The rest of the network can be arbitrarily complex. Below, we define a model which has an input layer of 10 neurons, a couple of inner layers of 10 neurons each, and an output layer of 1 neuron. We stack the layers using a HybridSequential block and initialize the parameters of the network using Xavier initialization. net = nn.HybridSequential() with net.name_scope(): net.add(nn.Dense(units=10, activation='relu')) # input layer net.add(nn.Dense(units=10, activation='relu')) # inner layer 1 net.add(nn.Dense(units=10, activation='relu')) # inner layer 2 net.add(nn.Dense(units=1)) # output layer: notice, it must have only 1 neuron net.initialize(mx.init.Xavier()) After defining the model, we need to define a few more things: our loss, our trainer and our metric. The loss function is used to calculate how the output of the network differs from the ground truth. Because the classes of the logistic regression are either 0 or 1, we are using SigmoidBinaryCrossEntropyLoss.
Notice that we do not specify the from_sigmoid attribute in the code, which means that the output of the neuron doesn't need to go through sigmoid, but at inference we'd have to pass it through sigmoid. You can learn more about cross entropy on Wikipedia. The Trainer object allows us to specify the training method to be used. For our tutorial we use Stochastic Gradient Descent (SGD). For more information on SGD refer to the following tutorial. We also need to parametrize it with a learning rate value, which scales the weight updates, and weight decay, which is used for regularization. A metric helps us estimate how good our model is in terms of the problem we are trying to solve. While the loss function matters more for the training process, a metric is usually the thing we are trying to improve and push to its maximum value. We can also use more than one metric to measure various aspects of our model. In our example, we are using Accuracy and F1 score as measurements of success of our model. Below we define these objects. loss = gluon.loss.SigmoidBinaryCrossEntropyLoss() trainer = Trainer(params=net.collect_params(), optimizer='sgd', optimizer_params={'learning_rate': 0.1}) accuracy = mx.metric.Accuracy() f1 = mx.metric.F1() The next step is to define the training function in which we iterate over all batches of training data, execute the forward pass on each batch and calculate the training loss. On line 19, we sum the losses of every batch per epoch into a single variable, because we calculate the loss per single batch but want to display it per epoch. def train_model(): cumulative_train_loss = 0 for i, (data, label) in enumerate(train_dataloader): with autograd.record(): # Do forward pass on a batch of training data output = net(data) # Calculate loss for the training data batch loss_result = loss(output, label) # Calculate gradients loss_result.backward() # Update parameters of the network trainer.step(batch_size) # sum losses of every batch cumulative_train_loss += nd.sum(loss_result).asscalar() return cumulative_train_loss Validating the model Our validation function is very similar to the training one. The main difference is that we want to calculate the accuracy of the model. We use the Accuracy metric to do so. The Accuracy metric requires 2 arguments: 1) a vector of ground-truth classes and 2) a vector or matrix of predictions. When predictions are of the same shape as the vector of ground-truth classes, the Accuracy class assumes that the prediction vector contains predicted classes. So, it converts the vector to Int32 and compares each item of the ground-truth classes to the prediction vector. Because of the behaviour above, you will get an unexpected result if you just apply the Sigmoid function to the network result and pass it to the Accuracy metric. As mentioned before, we need to apply the Sigmoid function to the output of the neuron to get a probability of belonging to class 1. But the Sigmoid function produces output in the range [0, 1], and all numbers in that range are going to be cast to 0, even if they are as high as 0.99. To avoid this we write a custom bit of code on line 12 that: Calculates the sigmoid using the Sigmoid function. Subtracts a threshold from the original sigmoid output. Usually the threshold is equal to 0.5, but it can be higher if you want to increase the certainty of an item belonging to class 1. Uses the mx.nd.ceil function, which converts all negative values to 0 and all positive values to 1. After these transformations we can pass the result to the Accuracy.update() method and expect it to behave properly.
For the F1 metric to work, instead of one number per class, we must pass probabilities of belonging to both classes. Because of that, on lines 21-22 we: Reshape the predictions to a single vector. Stack together two vectors: the probabilities of belonging to class 0 (1 - prediction) and the probabilities of belonging to class 1. Then we pass this stacked matrix to the F1 score. def validate_model(threshold): cumulative_val_loss = 0 for i, (val_data, val_ground_truth_class) in enumerate(val_dataloader): # Do forward pass on a batch of validation data output = net(val_data) # Similar to cumulative training loss, calculate cumulative validation loss cumulative_val_loss += nd.sum(loss(output, val_ground_truth_class)).asscalar() # getting prediction as a sigmoid prediction = net(val_data).sigmoid() # Converting neuron outputs to classes predicted_classes = mx.nd.ceil(prediction - threshold) # Update validation accuracy accuracy.update(val_ground_truth_class, predicted_classes.reshape(-1)) # calculate probabilities of belonging to different classes. F1 metric works only with this notation prediction = prediction.reshape(-1) probabilities = mx.nd.stack(1 - prediction, prediction, axis=1) f1.update(val_ground_truth_class, probabilities) return cumulative_val_loss Putting it all together By using the functions defined above, we can finally write our main training loop. epochs = 10 threshold = 0.5 for e in range(epochs): avg_train_loss = train_model() / train_data_size avg_val_loss = validate_model(threshold) / val_data_size print("Epoch: %s, Training loss: %.2f, Validation loss: %.2f, Validation accuracy: %.2f, F1 score: %.2f" % (e, avg_train_loss, avg_val_loss, accuracy.get()[1], f1.get()[1])) # we reset accuracy, so the new epoch's accuracy would be calculated from the blank state accuracy.reset() Output: Epoch: 0, Training loss: 0.43, Validation loss: 0.36, Validation accuracy: 0.85, F1 score: 0.00 Epoch: 1, Training loss: 0.22, Validation loss: 0.14, Validation accuracy: 0.96, F1 score: 0.35 Epoch: 2, Training loss: 0.09, Validation loss: 0.11, Validation accuracy: 0.97, F1 score: 0.48 Epoch: 3, Training loss: 0.07, Validation loss: 0.09, Validation accuracy: 0.96, F1 score: 0.53 Epoch: 4, Training loss: 0.06, Validation loss: 0.09, Validation accuracy: 0.97, F1 score: 0.58 Epoch: 5, Training loss: 0.04, Validation loss: 0.12, Validation accuracy: 0.97, F1 score: 0.59 Epoch: 6, Training loss: 0.05, Validation loss: 0.09, Validation accuracy: 0.99, F1 score: 0.62 Epoch: 7, Training loss: 0.05, Validation loss: 0.10, Validation accuracy: 0.97, F1 score: 0.62 Epoch: 8, Training loss: 0.05, Validation loss: 0.12, Validation accuracy: 0.95, F1 score: 0.63 Epoch: 9, Training loss: 0.04, Validation loss: 0.09, Validation accuracy: 0.98, F1 score: 0.65 In our case we hit an accuracy of 0.98 and an F1 score of 0.65. Tip 1: Use only one neuron in the output layer Although there are 2 classes, there should be only one output neuron, because SigmoidBinaryCrossEntropyLoss accepts only one feature as an input. Tip 2: Encode classes as 0 and 1 For SigmoidBinaryCrossEntropyLoss to work it is required that classes are encoded as 0 and 1. In some datasets the class encoding might be different, like -1 and 1 or 1 and 2.
If this is how your dataset looks, then you need to re-encode the data before using SigmoidBinaryCrossEntropyLoss. Tip 3: Use SigmoidBinaryCrossEntropyLoss instead of LogisticRegressionOutput The NDArray API has two options to calculate the logistic regression loss: SigmoidBinaryCrossEntropyLoss and LogisticRegressionOutput. LogisticRegressionOutput is designed to be an output layer when using the Module API, and is not supposed to be used when using the Gluon API. Conclusion In this tutorial I explained some potential pitfalls to be aware of. When doing logistic regression using the Gluon API remember to: 1. Use only one neuron in the output layer 2. Encode class labels as 0 or 1 3. Use SigmoidBinaryCrossEntropyLoss 4. Convert probabilities to classes before calculating Accuracy
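To close the loop on the inference note earlier in the tutorial (at prediction time the network's raw output has to be passed through sigmoid and thresholded), here is a minimal prediction sketch. It reuses the net, ctx, threshold and get_random_data objects defined above and is not part of the original tutorial text.

# Generate a few unseen examples and predict their classes with the trained network.
new_x, _ = get_random_data(5, ctx)
probabilities = net(new_x).sigmoid()                       # probabilities of class 1
predicted_classes = mx.nd.ceil(probabilities - threshold)  # 0/1 class labels
print(predicted_classes.reshape(-1).asnumpy())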
https://mxnet.apache.org/versions/1.6/api/python/docs/tutorials/getting-started/logistic_regression_explained.html
CC-MAIN-2020-34
refinedweb
1,806
51.24
HOWTO: MFC user defined message maps IN THE VIEW CLASS HEADER FILE Add the prototype for the function to which the message is to be mapped. This function will be named "On" + (Second half of message name), here: "OnFind()". Further, the function declaration will be preceded by the term "afx_msg". We will define this function to be of type LRESULT (32 bit identifier), and take the parameters (WPARAM wParam, LPARAM lParam), the standard window messaging parameters. The reason for this lies in the macro expansion of the AFX message map. If your function is not declared in compliance with the AFX message map macro, you will get error C2642. Look for a section of code like this: // Generated message map functions protected: //{{AFX_MSG(CMyView) afx_msg void OnFileConfig(); //}}AFX_MSG DECLARE_MESSAGE_MAP() afx_msg LRESULT OnFind(WPARAM wParam, LPARAM lParam); IN THE VIEW CLASS BODY FILE Register the new window message. The string literal will later be used to retrieve the number allocated to WM_FIND. Add the following line somewhere at the top of the file: int WM_FIND = RegisterWindowMessage ("MYMESSAGE"); Map the message onto a member function. The AFX message map entry must use the key word ON_REGISTERED_MESSAGE, and associate the message number with the function name. Look for a section of code like this: BEGIN_MESSAGE_MAP(CMyView, CListView) //{{AFX_MSG_MAP(CMyView) ON_COMMAND(ID_FILE_CONFIG, OnFileConfig) //}}AFX_MSG_MAP END_MESSAGE_MAP() Add the following line to the above list of message maps: ON_REGISTERED_MESSAGE(WM_FIND, OnFind) Add a message handler function to the class. Replace "CMyView" with the class name of the view class in your application: LRESULT CMyView::OnFind(WPARAM wParam, LPARAM lParam) { // do something useful return (LRESULT) MyReturnValue; }; IN THE DOCUMENT CLASS HEADER FILE Provide the document class that is to post the message with scope to the view class window handle. Add a member to hold a window handle. HWND PostToView; IN THE VIEW CLASS BODY FILE Obtain a reference to the document associated with the view and initialize the document's member variable (above) with the handle to the view's window. Add the lines shown below to a member function that will be executed before the user message is posted to the view. Replace "CMyDoc" with the class name of your document class. This assignment cannot take place in the view's constructor. The OnCreate() member, however, would be a great place. "CMyDoc"* DocPtr = GetDocument(); DocPtr->PostToView = this->m_hWnd; IN THE DOCUMENT CLASS BODY FILE Where you want to post the message, retrieve the reference to WM_FIND, the user defined message, and then post the message... A second call to RegisterWindowMessage() with the same string literal as an argument will retrieve the previously assigned ID for WM_FIND. Add the code shown below in a function member of the document class from which you wish to post a message. Substitute the question marks for your actual parameters. int WM_FIND = RegisterWindowMessage ("MYMESSAGE"); PostMessage(PostToView, // handle to view window WM_FIND, // message to post (WPARAM) "?", // window message parameter (LPARAM) "?"); // window message parameter Alternatively, you can use SendMessage(). In contrast to PostMessage, which sends the message `spiraling' through the Windows messaging system and returns control to the calling thread before the message is actually processed, SendMessage will invoke the target message handler before returning execution control to the caller of SendMessage.
The first is appropriate for asynchronous, de-coupled processing, the latter for synchronous processing.
Problem Posting RegisteredMessages Posted by Legacy on 11/29/2003 12:00am Originally posted by: David I have a CDialog based application, where I use a RegisteredMessage to communicate with a thread. If I post the message from the thread using one XPropertyPage library, nothing is sent but I get no error from PostMessage; if I post the message without using the XPropertyPage library, everything works fine. Any idea of what is happening? Thanks in advance. Use ::PostMessage for PocketPC Posted by Legacy on 10/26/2001 12:00am Originally posted by: John Jorsett Windows CE 3.0 has two forms of PostMessage. To get the one that accepts a window handle argument, use ::PostMessage. Thanks, this stopped me tearing my own hair out Posted by Legacy on 02/10/2001 12:00am Originally posted by: Simon Aldrich Just to let you know, this worked a treat for me, sending messages between MFC dialogs in the main app and ancillary dlls. Thanks so much. Simon dest. windows handle Posted by Legacy on 02/15/2000 12:00am Originally posted by: chengchun you can SendMessage((HWND)-1, WM_find, wparam, lparam) Is there any difference between non-MFC and MFC messaging? Posted by Legacy on 09/20/1999 12:00am Originally posted by: odessa My way of using messages in the MFC framework is different in some points. My method is based on classical user-defined message passing, but yours is something structural, I think. I wonder if there is any difference between non-MFC messages and MFC messaging. My question is why should we use the ON_REGISTERED_MESSAGE macro? I use ON_MESSAGE(WM_MY_MESSAGE, OnMyHandler) instead of it. Is there any reason for that?
http://www.codeguru.com/cpp/misc/misc/article.php/c301/HOWTO-MFC-user-defined-message-maps.htm
CC-MAIN-2013-48
refinedweb
1,667
51.07
The Business Data Catalog PDC 2005 Session OFF321 This could be the best thing I've seen at PDC -- a powerful way to manage business objects in disparate systems and provide access to them over SharePoint. I think it's a no-brainer to say that of all the features in SharePoint 2006, this will have the most transforming effect on the enterprise, mainly because it does more for managing enterprise knowledge than any other advance. The Business Data Catalog allows you to centrally manage data models and data. You describe the metadata, upload it to the catalog, and the features "just light up." Data sources could include SAP, Siebel, legacy systems, etc. Metadata Model System - data source Entity - A real-world thing in a System Method - An operation on an Entity Association - A relationship between Entities Metadata has two purposes: to describe a System's API, and to give meaning to the API and make it easily usable. Finder Methods This is a standard way to find instances of an entity. One method of an Entity is registered as the Finder, and another the SpecificFinder. For example, you could have two WebMethods in a class to retrieve instances of the class (e.g. GetItemCollection and GetItem). In the metadata you include the object schema and the Methods available (like a WSDL). Then you use a MethodInstance section to associate the methods with the types, e.g. Finder or SpecificFinder. Adding a Source to the BDC The Business Data Catalog is a Shared Service for all Office apps. You get to it through Site Settings, Add Application, and browse to the XML metadata file. Then you click Import, and after the Import is complete you get a View page which allows you to manage or display the contained objects. In an SPSite, you can now add Web Parts: Business Data List, Business Data Item. Then you'd configure the BDList to use your new Application, display certain fields, and boom it works. From there you can create a connection to provide an object to the BDItem web part. Once again, the BDItem WP allows you to choose which fields to display. In an SPList, there is a new column type called Business Data, which lets you include your business objects in lists. For example, you could bind a Customer database to a help-desk support list. You can also connect to a Database as a source. The metadata looks a little different only because of the connection type, and now the Methods are queries rather than web service methods. Back in the UI after an Import, the only difference is that the Type is Database rather than Web Service. Filters Filters let you control the data returned by data finders. These are composed of annotations on a Method's input parameters. You might use a filter to create a dynamic picker so that in a custom list with a Business Data column, you can now have a rich browser with a search/filter feature (similar to Outlook Web Access). Properties Every MetaData object has a Properties collection. Business Data features make use of certain annotations, and you can extend the metadata for your own applications. Once again, Properties are created in the metadata, either on their own or inside Filters. So now when adding the customer to a custom list, the user can type a part of a name and there will be an automatic lookup which presents a list of choices -- the user doesn't even need to use Browse to locate a customer, it can be done based on a part of the Customer Name.
IDEnumerators and Search You can full-text search any Application by exposing an IDEnumerator and SpecificFinder for your data. For example, you can have a WebMethod GetCustomerIDs and define another MethodInstance to bind this method to the Application. Once published, the Search service can now iterate through the CustomerIDs and retrieve data for each customer (using the SpecificFinder). Potentially, you could create several IDEnumerators with different names and get the same sort of behaviour as Audiences gives you in slicing Active Directory results. Actions and WriteBack Actions are links that travel everywhere with an entity. This allows users to take action in context. Business Data Catalog API There are two halves: runtime and administration. The Runtime API lets you browse metadata, execute methods, retrieve instances, and traverse relationships. This is great for custom application builders. The Administration API lets you create, read, update, and delete metadata and manage permissions. It's designed for administrators. Namespace: Microsoft.Office.Server.ApplicationRegistry (ApplicationRegistry = BDC). Note that this namespace may change. AuthN, AuthZ, and Auditing You can set authentication per application. There are two patterns: trusted subsystem or delegation. There's one place to set permissions, you can control who accesses which entities, and there is support for LDAP. There is also one place to audit and log who accesses which data when. Call to Action Expose your line-of-business data with web services or databases. Write BDC-friendly methods or stored procedures.
http://weblogs.asp.net/erobillard/425364
CC-MAIN-2015-27
refinedweb
849
54.32
A Simple GUI Using PyQt's Data-Aware Objects If you are looking for the easiest way to create a database application, you might want to take a look at PyQt. PyQt, the Python bindings to the cross-platform GUI toolkit Qt, supports the designer files created by Qt Designer. Using Qt Designer, you can easily create forms that connect to any database. From these forms, you can generate either Python or C++ code. I have written a book on working with PyQt and Designer (GUI Programming with Python: Qt Edition) that deals with this toolkit. Here, I just want to show you the steps you need to create data-aware forms. When would it be appropriate to use PyQt's database widgets? The following points need consideration: You are prototyping and might want to migrate your design to C++ at a certain point in time. A two-tier architecture suffices. You don't think you will need to provide alternative interfaces, such as a Web interface or a report interface, to your database (since those would require coding the application logic again). You do not use complex queries that link tables. You want to deploy on UNIX/X11, Windows, or OS X. You are using PyQt anyway. Using PyQt's SQL classes enables you to eliminate the need for extra database-related modules, making distribution of your application easier. When you have decided to use Qt's data-aware controls, your life suddenly becomes very simple. In Qt Designer, you first create a new database connection. See Figure 1. Figure 1 Creating a database connection. When that is done, you can start designing a form. In this case, we have a simple window, of the type QDialog, and one of three data-aware controls will be placed in it. You have a choice between QDataTable, which presents the data in a tabular form; QDataBrowser, which presents one record in a simple form; and QDataView, a read-only form. Creating the form is made easy by the Data Table Wizard (see Figure 2). First you select the database connection and the table you want to select from. Figure 2 Selecting the database connection and the table. Then you select the fields that you want to include in the table (see Figure 3). Figure 3 Selecting the fields that should be displayed. In the next step, you select the way the form will interact with the user when confirmations are needed and whether the user can sort on columns (see Figure 4). Figure 4 User interaction options. It is possible to create a specific SQL WHERE clause, and to select the columns to use in the ORDER BY clause (see Figure 5). Figure 5 Adding SQL code. Finally, you can set autoediting on or off. Autoediting kicks in when changes are made to the contents of the QDataTable; if it's true, changes are automatically committed when the user navigates to the next record. (See Figure 6.) Figure 6 Setting autoediting. The result is a database-aware table that runs even inside Qt Designer (see Figure 7). Figure 7 Running the form inside Qt Designer. You can save your design to a .ui designer file and then create Python source code using a simple command-line command: pyuic -x form1.ui > form1.py However, the generated form isn't immediately useful.
You have to embed it in a script where you create a database connection using QSqlDatabase: import form1, sys from qt import * from qtsql import * if __name__ == "__main__": a = QApplication(sys.argv) QObject.connect(a,SIGNAL("lastWindowClosed()"), a,SLOT("quit()")) db = QSqlDatabase.addDatabase("QMYSQL3") if db: db.setHostName("hostname") db.setDatabaseName("database") db.setUserName("username") db.setPassword("xxx") if not db.open(): print "Could not open testdb database" print db.lastError().driverText() print db.lastError().databaseText() sys.exit(1) w = form1.Form1() a.setMainWidget(w) w.show() a.exec_loop() When you run this script and the database exists and is running, and when your Qt is compiled with the right drivers, you will be greeted once again by your form filled with data. Of course, Qt wouldn't be what it is -- as complete as Java, yet simple and compact -- if it didn't also provide classes that you can use for more complex database handling. These are the QSql module classes, such as QSqlDatabase, QSqlQuery, and QSqlCursor (a short sketch follows below). Using those, however, means that you are leaving the balmy country of data-aware widgets and that you have to start rolling your own solution again. Nowadays Python can hold its own in the wonderful world of databases and GUI applications. For simple tasks, you can select PyQt and its data-aware widgets, and be done quickly with good, efficient results. More complex tasks demand a more complex approach, but there is no dearth of powerful tools and libraries to choose from.
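A minimal sketch of the kind of direct query the QSql classes allow, for contrast with the data-aware widgets above. It assumes the same PyQt 3-era API used in the article and the database connection opened in the script above; the table and column names are made up for illustration.

from qt import *
from qtsql import *

# Assumes QSqlDatabase.addDatabase(...) has already been called and opened,
# as in the script above; the default connection is then used implicitly.
query = QSqlQuery("SELECT name, city FROM customer")   # hypothetical table/columns
while query.next():
    # Each value comes back as a QVariant; convert it for printing.
    print query.value(0).toString(), query.value(1).toString()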
https://www.informit.com/articles/article.aspx?p=30649&seqNum=7
CC-MAIN-2020-29
refinedweb
826
65.22
Package Details: sabnzbd 2.3.0-1 Dependencies (12) - curl (curl-git, curl-http2-git) - par2cmdline (par2cmdline-git, par2cmdline-tbb) - python2 (placeholder, pypy19, python26, stackless-python2) - python2-cheetah - python2-sabyenc - sqlite - unrar - xdg-utils (xdg-utils-patched, xdg-utils-slock) (optional) – registration of .nzb files Required by (13) - headphones (optional) - headphones-git (optional) - lidarr (optional) - lottanzb (optional) - lottanzb-bzr (optional) - radarr (optional) - sabnzbd-knockstrap-git - sickbeard (optional) - sickbeard-mcmic (optional) - sickgear-git (optional) - sickrage (optional) - sonarr (optional) - sonarr-develop (optional) Sources (9) Latest Comments storrgie commented on 2017-09-01 18:52 Revelation60 commented on 2017-09-01 17:59 Sorry, I forgot to push... cryzed commented on 2017-09-01 16:15 @Revelation60 This has been outdated for a bit now -- are you planning on updating to 2.2.1 anytime soon? Revelation60 commented on 2017-04-09 18:42 PSA for update to 2.0.0: The organization of the download queue is different from 0.7.x releases. So 2.x.x will not see the existing queue, but you can go to Status->QueueRepair and "Repair" the old queue. storrgie commented on 2016-10-04 20:34 Is anyone else seeing excessive redirects when trying to download the package? Sorry, using pacaur. francoism90 commented on 2016-03-20 17:35 @Revelation60: could you update the systemd file? It should point to the config file: ExecStart=/opt/sabnzbd/SABnzbd.py -l0 -f /opt/sabnzbd/sabnzbd.ini Thanks. :) PS. I have updated the SABnzbd Wiki, would be great if you guys could take a look and fix/improve when needed. :) Revelation60 commented on 2016-03-16 10:17 Sabnzbd is updated to version 1.0.0. This is a major update, so some reconfiguration using the web wizard is required. Remember to update the API key in /etc/conf.d/sabnzbd if you want to open .nzb files with sabnzbd. Revelation60 commented on 2016-02-01 15:23 Done. Hopefully it is ok now. Spyhawk commented on 2016-01-30 09:18 Please update the .SRCINFO file too, as the information provided by the RPC interface isn't correct and is needed by some helpers. Paviluf commented on 2015-11-30 18:00 I think it's better to start sabnzbd only when you want to use it instead of having it always running, for the obvious reason that you don't always download something. Thanks for taking a look at it! Revelation60 commented on 2015-11-30 16:55 Ah, I was looking at a wrong version it seems. Anyway, I will see if I can make a hybrid that also works to start sabnzbd. That being said, justin8 is right, the default procedure would be to start it as a service. justin8 commented on 2015-11-30 16:52 Looks like he's right.. Either way, sab on linux is designed to be run as a service, which is included. I believe the .desktop is only there for file-association, so you can double click an nzb in nautilus/thunar/etc and have it add to sab. I'm not sure if there is a way to mark an application for that and not show up in applications menus otherwise though. Paviluf commented on 2015-11-30 16:42 I don't know what happened, but this is the content of sabnzbd.desktop and as you can see it calls addnzb.sh: [Desktop Entry] Type=Application Version=1.0 Name=SABnzbd+ GenericName=Binary Newsreader Icon=/opt/sabnzbd/sab2_64.png Exec=sh /opt/sabnzbd/addnzb.sh %u Terminal=false Categories=Network MimeType=application/x-nzb I'm on Manjaro KDE and I can't launch sabnzbd from the menu, only from the command line "sabnzbd".
Revelation60 commented on 2015-11-30 15:47 I don't understand what you mean. sabnzbd.desktop doesn't call addnzb.sh at all. addnzb.sh is used to register the .nzb extension so that an nzb can be loaded. For this, sabnzbd needs to be running. You can use `systemctl start sabnzbd` to start sabnzbd. Paviluf commented on 2015-11-29 11:07 "sabnzbd.desktop" launch "addnzb.sh" but that doesn't work (Manjaro). sabnzbd doesn't start. If I simply start sabnzbd from a terminal with "sabnzbd", sabnzbd start and works. Why are you using this to launch sabnzbd ? #!/bin/sh . /etc/conf.d/sabnzbd curl -s -F apikey="$API_KEY" -F mode="addfile" -F name=@"$1" $URL/sabnzbd/api &> /dev/null Thanks carlwgeorge commented on 2015-11-14 16:31 Your PKGBUILD is at 0.7.20-2, but your .SRCINFO is at 0.7.20-1. Please fix it. Paviluf commented on 2015-07-18 09:19 Thank you, that work now ;) Revelation60 commented on 2015-07-17 12:23 I heard some stories that the people of Sourceforge are injecting spyware into their packages. I don't have the old version so I cannot verify if this is the case here. So yeah, let's switch to github. Paviluf commented on 2015-07-17 12:19 Maybe we can use the archive from github ? Paviluf commented on 2015-07-17 12:17 The sourceforge link don't seem to work ==> Validating source files with md5sums... SABnzbd-0.7.20-src.tar.gz ... FAILED sabnzbd ... Passed sabnzbd.desktop ... Passed addnzb.sh ... Passed nzb-2.png ... Passed sab2_64.png ... Passed x-nzb.xml ... Passed sabnzbd.service ... Passed sabnzbd.confd ... Passed ==> ERROR: One or more files did not pass the validity check! ==> ERROR: Makepkg was unable to build . crookedasterisk commented on 2015-04-12 06:12 I like the idea of SABnzbd running as its own user for security purposes, but I just see one thing that would allow the sabnzbd user to gain access to another user: addnzb.sh is in sabnzbd's home directory where it as full permissions to it. addnzb.sh is invoked by sabnzbd.desktop which is run by other users. If SABnzbd was ever exploited, it could change the contents of addnzb.sh to something malicious and it would be run under other users. Perhaps addnzb.sh should be in /usr/lib. There's also a typo in sabnzbd.install. ==> If you want to associate .nzb-files with SABnzbd, run 'xdg-mime default sabnzbd.desktop applications/x-nzb' should be: ==> If you want to associate .nzb-files with SABnzbd, run 'xdg-mime default sabnzbd.desktop application/x-nzb' (applications should be without an s) tixetsal commented on 2015-04-03 19:09 Is anyone else experiencing an issue where the ETAs are no longer listed? I just get "checking" instead of the time to download. tbgconno commented on 2014-12-07 13:30 Sourceforge link isn't working for some reason. I used this instead: Bitl0rd commented on 2014-11-26 15:55 sourceforge is offline!!! mirror? Revelation60 commented on 2014-09-23 10:14 Done. brando56894 commented on 2014-09-23 01:51 Can you add this in as a(n optional) dependency? par2cmdline-tbb It enables par2 multi-threading. justin8 commented on 2014-04-23 14:05 The install script tries to run things that are not dependencies: /tmp/alpm_jkHuMg/.INSTALL: line 12: xdg-mime: command not found /tmp/alpm_jkHuMg/.INSTALL: line 13: xdg-icon-resource: command not found Maybe add in an 'if [[ -f /usr/bin/xdg-mime ]]' around the command before running it and add it as an opt depend? justin8 commented on 2014-04-02 00:10 That is what the sed commands in my first comment did. I use that to set it to a common use on my CI server before building it. 
You also need to change the user in the service file if you want to run it as a regular service. bowhuntr commented on 2014-04-01 22:42 I changed the pkgdir to install it to my home dir but that gave an error when I tried to run it. The simple solution was to go into the .install and change the user from sabnzbd to my user and the group I wanted. I now have access to the files it downloads. bowhuntr commented on 2014-04-01 17:44 If I do that will it give my user ownership of the files? donvla commented on 2014-04-01 17:08 @bowhuntr: Sabnzbd tar.gz is selfcontained. That means you can simply extract its content to which directory you like and start sab by running "/usr/bin/python2 ./SABnzbd.py" from inside the sabnzbd directory. No real need for installation. bowhuntr commented on 2014-04-01 16:47 If I do that will it give my user ownership of the files? justin8 commented on 2014-04-01 09:10 That's not really how a service normally works. But it's up to you. change everything that has $pkgdir/opt to be $pkgdir/home/foo/bar/wherever/you/want. bowhuntr commented on 2014-04-01 08:52 Could the install location be changed in the pkgbuild to make it install to my home directory? justin8 commented on 2014-03-31 23:36 I do this before building it; just replace 'downloads' with whatever user you want it to install/run as (I have sickbeard/couchpotato/sab/transmission etc running as downloads user, and have my user accounts in the downloads group). sed -i 's/="sabnzbd"/="downloads"/g' sabnzbd.install sed -i 's/=sabnzbd/=downloads/g' sabnzbd.service updpkgsums makepkg -rcfs bowhuntr commented on 2014-03-31 22:50 Since this app installs to /opt that means root owns it and my user can't access the files that are downloaded. Is there a work around for this? Pietro_Pizzi commented on 2014-03-09 21:02 There are an 0.7.17RC2 with RAR 5.01 support. Is it possible to get that package? Eg. as a second beta AUR package if u think it isn't stable enough? Pietro_Pizzi commented on 2014-03-09 20:57 There are an 0.7.17RC1 with RAR 5.01 support. Is it possible to get that package? Eg. as a second beta AUR package if u think it isn't stable enough? LeoKesler commented on 2014-02-22 14:51 error: File "SABnzbd.py", line 20 print "Sorry, requires Python 2.5, 2.6 or 2.7." ^ SyntaxError: invalid syntax emphire commented on 2013-12-15 11:14 Here is a pkgbuild with using /var/lib and /usr/lib rather than /opt. It also removes the call that stops the service on upgrade as that is not considered a good practice in PKGBUILDS. ice9 commented on 2013-11-19 00:30 /var/lib/sabnzbd/ sounds like an excellent suggestion for the default installation location of sabnzbd.ini. And with the /etc/sabnzbd.conf file, the user could always change it to another location if desired. emphire commented on 2013-11-18 23:48 I second that suggestion. I've been running it that way for some time (but I used /opt/sabnzbd/config/sabnzbd.ini). It would be nice not to have to change the permissions back on each upgrade. I'd suggest putting the config in /var/lib (ie: CONFIG_FILE="/var/lib/sabnzbd/sabnzbd.ini") and then putting everything else under /usr/lib/sabnzbd. You could add some symlinks to /usr/bin and /etc/sabnzbd.ini. ice9 commented on 2013-11-18 23:09 By the way, if you set the location of the sabnzbd.ini file, then sabnzbd will also create the admin/ and logs/ directories in the same location, and those are the only files and directories that need to be writeable. 
Which means that you can install all the other sabnzbd program files in a more standardized location than /opt/sabnzbd/ , like /usr/bin/ and /usr/lib/sabnzbd/ ice9 commented on 2013-11-18 22:54 How about changing the systemd unit file to include a configurable location for the sabnzbd.ini file? Keeping it in /opt/sabnzbd/ is not the best idea. With the unit file linked above, the user would create an /etc/sabnzbd.conf file with contents such as: CONFIG_FILE="/home/sabnzbd/sabnzbd.ini" or perhaps something in their own user directory if they have overridden the User=sabnzbd with another user by creating the path/file: /etc/systemd/system/sabnzbd.service.d/user.conf [Service] User=anotherusername cdemoulins commented on 2013-08-11 08:54 It's already an optional dependency as it should be. carlwgeorge commented on 2013-08-11 00:51 My downloads recently stopped working. Turns out it was because I only had ssl sources configured, and you now need python2-pyopenssl installed for that to work. Can that be added as a dependency? mrhanman commented on 2013-08-10 22:25 This doesn't work for me, either. It starts up, but then does nothing with no indication of what is wrong. Anonymous comment on 2013-07-28 00:52 This doesn't work at all anymore. Tried from scratch brand new machine, multiple machines. It starts up ok, but hangs about 3-5 seconds in and just doesn't do anything. Searched the logs extensively and nothing to indicate any errors. When I stop the service, it hangs for ~2 minutes then says the process entered a failed state and kills it. Don't know what's wrong, but wiped my machine twice and tried installing this both on this machine and others. zeppelinlg commented on 2013-07-11 21:26 Hi, I have an error when the source are downloaded. ==> Récupération des sources... -> Téléchargement de SABnzbd-0.7.14-src.tar.gz... ** Resuming transfer from byte position 40960 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 2221k 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume. Thanks for your PKGBUILD :) Revelation60 commented on 2013-05-19 09:24 I was going to update the ExecStart when a new version of sabnzbd is released. The reason for updating the log level is that SABnzbd will otherwise spam the journal with all sorts of info messages. This log level variable has no effect on SABnzbds log level of the web interface. ajs124 commented on 2013-05-19 01:08 Is there any reason for the systemd service to be like this? Why not simply write "ExecStart=/opt/sabnzbd/SABnzbd.py -l0" instead of the current ExecStart? And while I'm at it, what's the reason for overriding the default log level? Revelation60 commented on 2013-04-09 20:46 No problem ;) That would be a nice feature indeed! Romesnil commented on 2013-04-09 15:01 Oh sorry! I never noticed that optional dependencies are not shown in AUR web-pages. That should be added! Revelation60 commented on 2013-04-09 13:21 It is already. Romesnil commented on 2013-04-09 12:41 Hi! Can you add python2-pyopenssl [Extra] as optional dependency (required for ssl support)? Thanks ! lonaowna commented on 2013-04-06 18:30 Hey Hutchism, you need to change complete_dir in sabnzbd.ini. You can also do this through the web-interface. Anonymous comment on 2013-03-21 01:49 Hey all. Trying to figure how to get sab to download to my home folder instead of /opt/sabnzbd. 
The sabnzbd_systemd file in the wiki is apparently obsolete, so how do I change the default folder location? In the sab configuration I've tried putting "/home/USER/Downloads" and "~/Downloads". I know this is probably really simple, but I'm relatively new to linux. ibexmonj commented on 2013-03-13 17:38 Sorry, my bad, these were from yenc I installed yesterday. Fixed it: just removed those files and installed sabnzbd using packer. Thank you so much. Now moving to couch and sickbeard ibexmonj commented on 2013-03-13 17:20 I tried to check what package owns those files using pacman -Qo but it says "No package owns...." Revelation60 commented on 2013-03-13 16:43 Certainly, because installing packages yourself is not recommended. Download packer from here: and follow the "Installing packages" instruction from here: Once packer is installed you can use it the same way as pacman, only for the AUR. So packer -S sabnzbd will install sabnzbd properly. ibexmonj commented on 2013-03-13 16:33 I do not have AUR setup in my repository and I don't know how to. I will have to look that up. Hence I downloaded the above tar.gz and all the dependencies and it works fine; the only thing is I have to manually run the SABnzbd.py file all the time to make it run. Do you still suggest I use packer? Please advise any links for this Revelation60 commented on 2013-03-13 16:20 By "from source" do you mean that you have not used this package? If so, then you have to write your own configuration files (or copy them from this package). Anyway, I recommend you use this package instead of installing sabnzbd yourself. To install packages from the AUR you can use packer for example. You can read more about this in the wiki. If you are using this package however, there should be a service file in: /usr/lib/systemd/system/sabnzbd.service ibexmonj commented on 2013-03-13 15:28 @Revelation60 Thank you for your response. Actually, I installed sabnzbd from source and the only place I have sabnzbd files is in my /home directory; even the SABnzbd.py and sabnzbd.ini files are there. There are no sabnzbd files in either /usr/lib/systemd/system or /etc/systemd/system. So do I have to make those files manually? Revelation60 commented on 2013-03-13 07:53 Normally, you'd just do: systemctl enable sabnzbd.service to start on boot. If you've moved your sabnzbd files, you have to copy /usr/lib/systemd/system/sabnzbd.service to /etc/systemd/system/sabnzbd.service and edit the path to SABnzbd.py. ibexmonj commented on 2013-03-13 03:22 I am really sorry about this question, pardon my noobness. I have tried to search a lot about starting SABnzbd on startup automatically. The archwiki is listing files in /opt and /etc that don't even exist; the only sabnzbd files I have are in my home directory and I use the following command to start sabnzbd always ~> python2 ./SABnzbd.py How can I automate this ?
Please advise gtmanfred commented on 2013-03-04 02:19 Scratch that, i missed the find line, anyway, you don't need Type=simple and ExecStart should be ExecStart=/opt/sabnzbd/SABnzbd.py -l0 gtmanfred commented on 2013-03-04 02:05 please don't use sh -c in ExecStart or the python2 in it either, you should be replacing /usr/bin/python in all of the .py files to be /usr/bin/python2 to follow the guidelines from the rebuild for python3.3 and to follow ... and once you do that, you can just launch ExecStart=/opt/sabnzbd/SABnzbd.py -l0 Revelation60 commented on 2013-03-01 09:59 It's an optional dependency, as you can see when you install the package. jswagner commented on 2013-03-01 08:19 Re: extra/python2-pyopenssl Seems sabnzbd should depend on this package, since it's necessary for SSL to work. This must be new, because I didn't need it when I set this up awhile ago.. Revelation60 commented on 2013-02-09 09:40 @farnoy: I'll check it out @rodyakj: the wiki is outdated. The only config file is /etc/conf.d/sabnzbd and it only serves to link nzbs to sabnzbd. Anonymous comment on 2013-02-08 22:58 Archwiki SABnzbd page gives instructions on editing /etc/conf.d/sabnzbd_systemd as if it already exists, and all its example scripts source variables from it. But the file doesn't exist and I had to add all the variables manually. Should this file be included in the AUR package, or does the wiki just need updating? farnoy commented on 2013-02-08 22:27 Could we maybe include conf.d/sabnzbd.conf as EnvironmentFile for systemd unit file and provide an EXEC_OPTS option for users to specify additional options to pass to sab? I find myself modifying the unit file to specify my options. Anonymous comment on 2013-01-28 00:24 For anyone who was having problems with nothing downloading (and using SSL connection with your provider), check to make sure extra/python2-pyopenssl is still installed. Was stuck with this issue for a while before I read through the PKGBUILD again and saw it in optdepends. Sure enough, it somehow got uninstalled (perhaps through a recent cleaning out of old packages on my system) and thus couldn't make a proper connection to my provider (ie. no downloads). n17ikh commented on 2013-01-20 08:55 For anyone (like me) who is still an initscripts holdout and had the somewhat unpleasant surprise that after updating the init script for sabnzbd had disappeared, I can report that the sabnzbd.init file included in the sabnzbd-git package on the AUR works fine, if you edit the path to SABnzbd.py. gtmanfred commented on 2013-01-20 06:37 Why is your service unit so complicated... it could be better see if this works Anonymous comment on 2013-01-17 19:04 works very good. Used it on windows before, but decided to go linux, since windows 8. Evilandi666 commented on 2013-01-15 00:26 It works now, don't know what the problem was. Sorry for the confusing comments.. Revelation60 commented on 2013-01-08 08:57 Is it not downloading or does it not add files to the queue anymore? Try adding nzbs through the web interface. If this is your issue, fill in the API key in /etc/conf.d/sabnzbd. Evilandi666 commented on 2013-01-07 23:05 Downgrade to 0.7.7 does NOT fix this. ... :( Evilandi666 commented on 2013-01-07 23:02 0.7.9 breaks download for me. Nothing works anymore. See (also 0.7.8 seemed to be affected). nhasian commented on 2013-01-07 16:59 @Gilrain: You are absolutely right. After I removed /etc/systemd/system/sabnzbd.service it loaded successfully. Thanks for your help! 
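As a rough sketch of the shebang rewrite gtmanfred describes above (the /opt/sabnzbd path matches this package; doing it in a PKGBUILD's package() step with $pkgdir is an assumption, not necessarily how the maintainer implemented it):

find "$pkgdir/opt/sabnzbd" -name '*.py' -exec sed -i '1s|^#!.*python$|#!/usr/bin/python2|' {} +

This rewrites only the first line of each .py file, so occurrences of "python" elsewhere in the code are left untouched.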
Gilrain commented on 2013-01-07 15:55 Or because you're using an override (/etc/systemd/system/sabnzbd.service) based on the previous file... Gilrain commented on 2013-01-07 15:54 @nhasian: The problem comes from systemd which tries to load the old service file. Issue "systemctl daemon-reload" as root to allow proper loading. nhasian commented on 2013-01-07 15:51 updated from 0.7.7 to 0.7.9 and now SABnzbd no longer starts. -- Unit sabnzbd.service has begun starting up. Jan 07 08:42:46 archtop systemd[1]: Failed to load environment files: No such file or directory Jan 07 08:42:46 archtop systemd[1]: sabnzbd.service failed to run 'start' task: No such file or directory Jan 07 08:42:46 archtop systemd[1]: Failed to start SABnzbd binary newsreader. -- Subject: Unit sabnzbd.service has failed --------------------------------------- $ systemctl status sabnzbd.service sabnzbd.service - SABnzbd binary newsreader Loaded: loaded (/etc/systemd/system/sabnzbd.service; enabled) Active: failed (Result: resources) CGroup: name=systemd:/system/sabnzbd.service Revelation60 commented on 2013-01-07 06:55 Updated the package to 0.7.9. It is safe to update again. Lucky commented on 2013-01-07 02:03 @ALL DON'T UPDATE!!! @Revelation60 please go back to 0.7.7 und wait for 0.7.9, because: bug reporters have verified the solution to the problem. Revelation60 commented on 2013-01-06 10:31 I did see, but I haven't had time to properly test. I'll update today :) Lucky commented on 2013-01-06 01:06 @Revelation60 maybe you didn't see it, there is 0.7.8 out right now. :) wanna update? Anonymous comment on 2013-01-05 13:02 I'm verry sorry, i just forgotten to set the user in installation process, now it works. Thanks for the fast reply. Revelation60 commented on 2013-01-05 12:43 And is this a clean install? Are you sure that the directory of the logfile is owned by sabnzbd? Anonymous comment on 2013-01-05 12:35 Yes, i do. Revelation60 commented on 2013-01-05 12:24 "Can't write to logfile", are you using sabnzbd as user and group? Anonymous comment on 2013-01-05 12:21 I also can't start the service, but I have another problem: $ sudo systemctl status sabnzbd sabnzbd.service - SABnzbd binary newsreader Loaded: loaded (/etc/systemd/system/sabnzbd.service; enabled) Active: failed (Result: exit-code) since Sa, 2013-01-05 13:12:12 CET; 47s ago Process: 5205 ExecStart=/bin/sh -c python2 /opt/sabnzbd/SABnzbd.py (code=exited, status=2) CGroup: name=systemd:/system/sabnzbd.service Jan 05 13:12:10 asus systemd[1]: Starting SABnzbd binary newsreader... Jan 05 13:12:10 asus systemd[1]: Started SABnzbd binary newsreader. Jan 05 13:12:12 asus sh[5205]: Error: Jan 05 13:12:12 asus sh[5205]: Can't write to logfile Jan 05 13:12:12 asus systemd[1]: sabnzbd.service: main process exited, code=exited, status=2/INVALIDARGUMENT Jan 05 13:12:12 asus systemd[1]: Unit sabnzbd.service entered failed state Revelation60 commented on 2013-01-04 11:01 jrussell: looks like you're not the only one: Or is this your post as well? Maybe this issue is fixed in the next release. 
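The "Can't write to logfile" failures above are usually ownership problems. A hedged sketch of the typical fix, assuming the default sabnzbd user/group quoted in the thread and files under /opt/sabnzbd (adjust if you run the daemon as another user; run as root):

chown -R sabnzbd:sabnzbd /opt/sabnzbd
systemctl daemon-reload
systemctl restart sabnzbd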
jrussell commented on 2013-01-04 01:00 All from a fresh install of sabnzbd, clean config file and clean /etc/conf.d files jrussell commented on 2013-01-04 00:56 cant run sabnzbd: admin@russell-server ~ % sudo systemctl status sabnzbd.service sabnzbd.service - SABnzbd binary newsreader Loaded: loaded (/usr/lib/systemd/system/sabnzbd.service; enabled) Active: failed (Result: exit-code) since Fri, 2013-01-04 02:53:11 SAST; 8s ago Process: 6687 ExecStart=/bin/sh -c python2 /opt/sabnzbd/SABnzbd.py -l0 (code=exited, status=1/FAILURE) CGroup: name=systemd:/system/sabnzbd.service Jan 04 02:53:11 russell-server sh[6687]: import sabnzbd.downloader Jan 04 02:53:11 russell-server sh[6687]: File "/opt/sabnzbd/sabnzbd/downloader.py", line 33, in <module> Jan 04 02:53:11 russell-server sh[6687]: from sabnzbd.newswrapper import NewsWrapper, request_server_info Jan 04 02:53:11 russell-server sh[6687]: File "/opt/sabnzbd/sabnzbd/newswrapper.py", line 449, in <module> Jan 04 02:53:11 russell-server sh[6687]: _EXTERNAL_IPV6 = test_ipv6() Jan 04 02:53:11 russell-server sh[6687]: File "/opt/sabnzbd/sabnzbd/newswrapper.py", line 435, in test_ipv6 Jan 04 02:53:11 russell-server sh[6687]: socket.IPPROTO_IP, socket.AI_CANONNAME) Jan 04 02:53:11 russell-server sh[6687]: socket.error: [Errno 2] No such file or directory Jan 04 02:53:11 russell-server systemd[1]: sabnzbd.service: main process exited, code=exited, status=1/FAILURE Jan 04 02:53:11 russell-server systemd[1]: Unit sabnzbd.service entered failed state Revelation60 commented on 2013-01-03 11:25 I've changed the file. Great work! benjarobin commented on 2013-01-03 00:14 Please fix addnzb.sh and replace it by this much simpler and portable script (Do not need to create temporary file, ...) #!/bin/sh . /etc/conf.d/sabnzbd curl -s -F apikey="$API_KEY" -F mode="addfile" -F name=@"$1" $URL/sabnzbd/tapi &> /dev/null benjarobin commented on 2013-01-02 19:41 Please fix addnzb.sh and replace the 2 lines (copy and delete) by these (If the input file has the mode 400, the copy keep the 400 mode and sabnzb cannot read it) : cp --no-preserve=mode,ownership,timestamps "$1" "$TEMP_NZB" ... rm -f "$TEMP_NZB" Revelation60 commented on 2012-12-30 09:15 @azeotrope: unfortunately the user and group cannot be set from an environment file. What you can do is copy the service file to /etc/systemd/system/sabnzbd.service. There you can make your changes. This file will remain untouched after an upgrade. You still probably have to change the first variable in the install file or chmod the sabnzbd directory after an upgrade. azeotrope commented on 2012-12-29 19:17 Would there be any chance of you adding the user and group that sabnzbd runs as into the conf.d file and loading this via systemd? It's not a big deal, would just be a nice feature so that service file won't need to be edited every time for those of us who use a different user/group. Thanks for the great work on this package. Revelation60 commented on 2012-12-29 18:33 Ok guys, here's a new release with the following changes: * Reintroduced /etc/conf.d/sabnzbd. This file has been made much easier and only serves for nzb association * Cleaned up addnzb.sh * Added parameter -l0 to sabnzbd to prevent spamming the system log with info messages azeotrope commented on 2012-12-25 15:51 Revelation60 commented on 2012-12-25 12:29 The conf.d file has become obsolete, barring perhaps addnzb.sh. There is nothing in the service file anymore that requires parameters. 
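To make the per-user setup Revelation60 describes concrete, here is a sketch; the "downloads" user and group are just placeholders (the example justin8 uses elsewhere in this thread), and all commands are run as root:

cp /usr/lib/systemd/system/sabnzbd.service /etc/systemd/system/sabnzbd.service
sed -i 's/^User=.*/User=downloads/; s/^Group=.*/Group=downloads/' /etc/systemd/system/sabnzbd.service
chown -R downloads:downloads /opt/sabnzbd
systemctl daemon-reload
systemctl enable sabnzbd

The copy in /etc/systemd/system takes precedence over the packaged unit and survives upgrades, but the chown has to be repeated after each package update, as noted above.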
azeotrope commented on 2012-12-25 12:11 Revelation60, I don't really understand why the conf.d file was gotten rid of, since you can use it in systemd. systemd is able to read shell configuration files using the EnvironmentFile variable in a systemd service file. Have a look at how the NFS servive files are set up through systemd if you're not sure. This should get rid of the problem where the service file has to be edited EVERY time. I'm guessing this would also fix the addnzb.sh problem since the conf.d file doesn't have to be removed. This seems like a much cleaner solution than scripting around the problem. Revelation60 commented on 2012-12-24 10:26 I've tried the script, but unfortunately sabnzbd.ini only has read permission for the sabnzbd user. Since the addnzb.sh script is executed as the current user, it can't read the data. So in order for this to work, I have to give read permission to everyone. This file contains the usenet server login information and the sabnzbd password, so I guess people who use password protection on their sabnzbd web interface are not going to be happy about this. Lucky commented on 2012-12-23 22:10 No problem Revelation60. :) SABNZBD_CONFIG = ini file, please report back if it works for you. I have backup of 0.7.6/0.7.7-1 and -2 so i don't need the old scripts, but thanks.). Lucky commented on 2012-12-23 22:09 No problem Revelation60. :) SABNZBD_CONFIG = ini file, please report pack if it works for you. I have backup of 0.7.6/0.7.7-1 and -2 so i don't need the old scripts.). Revelation60 commented on 2012-12-23 21:09 Thanks, Lucky! I'll test your script tomorrow :) If you still want the old scripts for reference (maybe you don't have them anymore), I can send them to you. The choice for the /opt/ path was made by the previous mainainer, and I can't remember why this was chosen. Anyway, if you guys want a different path, I'll just change it. Merry Christmas! Lucky commented on 2012-12-23 20:15 PARSE_CONFIG(){ sed -n "/^\[misc\]$/ ba; d; :a p; n; /^\[/ d; ba" < "${SABNZBD_CONFIG}" | \ grep "${1}" | awk '{print $3}' } USERNAME="$(PARSE_CONFIG "^username =")" PASSWORD="$(PARSE_CONFIG "^password =")" API_KEY="$(PARSE_CONFIG "^api_key =")" NZB_KEY="$(PARSE_CONFIG "^nzb_key =")" PROTOCOL="$([[ "$(PARSE_CONFIG "^enable_https =")" -eq "0" ]] && echo "http" || echo "https")" HOST="$(PARSE_CONFIG "^host =")" PORT="$([[ "$(PARSE_CONFIG "^enable_https =")" -eq "0" ]] && echo "$(PARSE_CONFIG "^port =")" || echo "$(PARSE_CONFIG "^https_port =")")". Lucky commented on 2012-12-23 20:11 @Revelation60 PARSE_CONFIG(){ sed -n "/^\[misc\]$/ ba; d; :a p; n; /^\[/ d; ba" < "${SABNZBD_CONFIG}" | \ grep "${1}" | awk '{print $3}' } API_KEY="$(PARSE_CONFIG "^api_key =")" NZB_KEY="$(PARSE_CONFIG "^nzb_key =")". Revelation60 commented on 2012-12-21 21:29 To load an NZB, the web interface has to be used via an URL. This URL should have the correct port, the correct protocol (http vs https) and the correct API key. Since there is no way for the script to know all this, I suggest you hardcode it in the script. So what you have to do is replace all variables by the correct values for your setup. I am still thinking about a nice solution. The only thing I could come up with is parsing sabnzbd.ini (which may be not that easy). wilberfan commented on 2012-12-21 17:19 As a noob, may I ask how addnzb.sh should be edited to make it work properly? What changes do we need to make? 
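For anyone wondering, like wilberfan above, what "hardcode it in the script" means in practice: a minimal sketch of addnzb.sh with the values filled in by hand. The host, port and API key below are placeholders for your own settings, and the curl call mirrors the packaged script (older versions posted in this thread use /sabnzbd/tapi instead of /sabnzbd/api).

#!/bin/sh
API_KEY="replace-with-your-api-key"
URL=""
curl -s -F apikey="$API_KEY" -F mode="addfile" -F name=@"$1" "$URL/sabnzbd/api" > /dev/null 2>&1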
Revelation60 commented on 2012-12-18 19:48 I see that this update breaks addnzb.sh :( I'll think about a good solution. In the mean time I guess you should edit it by hand, unfortunately. Revelation60 commented on 2012-12-18 19:29 I have updated the package with many changes to make it easier for most users, so BEWARE before updating! Read this first: 1. It turns out (from this discussion:) that sabnzbd does actually shut down properly when a shutdown event is fired. This means that the service file can be simplified tremendously and that actually robotanarchy's version is just fine! The only small down side is that sabnzbd is quite verbose in the logs. 2. This also means that the configuration files are also no longer necessary. They have been REMOVED. 3. Initscripts have been REMOVED. 4. The install script has been simplified: on the top line you specify which user and group the sabnzbd files belong to. These default to sabnzbd. If you use custom users and groups you have to change them EVERY time the package is upgraded OR chmod /opt/sabnzbd yourself. You also (still) have to change the user and group in the service file (by copying it to /etc/systemd/system/sabnzbd.service and modifying it there, these changes will be kept). Note that the install script does not modify your custom groups, so just a single chmod has to be done after an update. I hope the transition will be smooth. Anonymous comment on 2012-12-14 17:33 Forgive me if this is a repost--I was unable to find this solution earlier--but anyone struggling to change the host IP to 0.0.0.0 must change it in /etc/conf.d/sabnzbd_systemd instead of sabnzbd.ini. Anonymous comment on 2012-12-05 12:26 I'm having issues with this crashing and I'm unable to access it from another machine on the network even though I have set it to 0.0.0.0. See my pastebin here: robotanarchy commented on 2012-12-02 21:08 oh and please also test my .service file if you want to add it to the package. I currently use it and it works fine, but I am not sure if it uses the same folder for all the data for example. robotanarchy commented on 2012-12-02 17:28 (you should really get that right before removing the sysv-init style scripts) robotanarchy commented on 2012-12-02 17:27 Hello, the .service file doesn't work quite good. When you install sabnzbd for the first time, it seems like it does not create the pid file while you run the wizard, so systemd will kill it after a 2 minutes or something (-> you are in the middle of the wizard and don't know why it crashed). I suggest a .service file like this: [Unit] Description=SABnzbd binary newsreader After=network.target [Service] Type=simple ExecStart=/bin/sh -c "python2 /opt/sabnzbd/SABnzbd.py" User=sabnzbd Group=sabnzbd [Install] WantedBy=multi-user.target More advantages: - you can actually use http[s] settings in the interface - you don't need the extra config file with all its variables (user, password, api key, ...) - you don't need to edit the .service file (!!!) in case you change the port - you can see what sabnzbd is doing (because here the -d flag isn't used) with systemctl status sabnzbd - this also 'survives' restarts from the webinterface, just checked that Disadvantage: - you can't stop sabnzbd with systemd. But I'd rather have it like that than copy over all the variables! If someone really need that, we should try to the an upstream fix for an easy shutdown without all these variables that only works from the local machine. 
Shutting it down via the webinterface and curl (if you really want that) still works anyway. You could also add an example ExecStop line with example values and comment it out. Anonymous comment on 2012-11-29 15:41 This might be an ignorant question, I'm not too familiar with packaging. I was wondering why we put sabznbd in /opt instead somewhere in /usr, as seems to be the case for most Arch packages? Seems like Debian puts sabnzbd also in /usr (). kvasthval commented on 2012-11-26 02:31 Is it possible for you to make python2-pyopenssl and python2-feedparser optional dependencies? Having to delete them from depends=() every time is pretty annoying when you don't need SSL and RSS support. psychedelicious commented on 2012-11-22 02:25 Anyone know why Sabnzd doesn't actually shut the PC down any more? Revelation60 commented on 2012-11-18 16:50 @bjo, are you sure you have set the port in the appropriate places? Please check the wiki if you're unsure. @betrunkenaffe, the error message about not having found the pid yet is not really an issue, since it takes slightly more time for sabnzbd to create one. This message is unrelated to sabnzbd not starting properly. Are you sure you have set the ports and other settings correctly? bjo commented on 2012-11-18 15:30 I get this issue, trying to run sabnzbd on 192.168.200.20:8888 (eth) WARNING::[__init__:1063] Cannot access PID file /run/sabnzbd/sabnzbd-8888.pid Revelation60 commented on 2012-11-18 13:11 I've updated the package. Note the following important things: 1. This will be the last version to support initscripts. When a new update comes, I will remove the files. Additionally, the sabnzbd_systemd.conf file will be renamed to sabnzbd.conf. To make this process seamless and to prevent loss of settings, overwrite sabnzbd.conf with sabnzbd_systemd.conf if you're using systemd now. 2. Some people still have issues with the PID file or with restarts, and I am not sure what's causing this. Maybe I shouldn't use forking, but make the process simple. If I look at the opensuse script (), it is much shorter. My only worry is that sabnzbd may not shut down properly, which may cause data loss. Because I am uncertain about this, I'd like to have your input on this matter. What do you think is the best and safest approach? Anonymous comment on 2012-11-18 03:38 If that was directed at me, tried that, still doesn't work, same error message. bjo commented on 2012-11-17 15:50 sabnzbd.service has a typo. "ExecStart=/bin/sh/ -c" but it has to be /bin/sh Anonymous comment on 2012-11-17 15:20 Trying to finish off the sysvinit switchover (and updated sab) and getting this error when it starts Nov 17 09:06:46 hephaestus systemd[1]: Starting SABnzbd binary newsreader... Nov 17 09:06:46 hephaestus systemd[1]: PID file /run/sabnzbd/sabnzbd-8080.pid not readable (yet?) after start. Nov 17 09:06:47 hephaestus systemd[1]: Started SABnzbd binary newsreader. It works fine via sysvinit, the user specified in /usr/lib/systemd/system/sabnzbd.service is sabnzbd and permissions on /run/sabnzbd are correct (it creates file, just can't read it). Any suggestions? simon04 commented on 2012-11-08 23:52 Didn't try that, but I expect that to work as expected. HTTPS is neither essential for me nor did I use it ^^ … Revelation60 commented on 2012-11-08 21:33 If you want https, you can read what to do on the wiki page. Have you tried this? 
simon04 commented on 2012-11-08 18:21 Changing the following entry in /usr/lib/systemd/system/sabnzbd.service > ExecStart=/opt/sabnzbd/SABnzbd.py -f /opt/sabnzbd/sabnzbd.ini -s 127.0.0.1:8080 -d --pid /run/sabnzbd/ as well as disabling https in /opt/sabnzbd/sabnzbd.ini > enable_https = 0 (to have /run/sabnzbd/sabnzbd-8080.pid instead of /run/sabnzbd/sabnzbd-9090.pid) did the trick for me. Still not sure what causes these problems … Revelation60 commented on 2012-11-08 07:42 That is very strange, since this package only updates /usr/lib/systemd/system/sabnzbd.service and nothing in the /etc/ directory except for /etc/conf.d/sabnzbd_systemd. But this file is included in backup. Anonymous comment on 2012-11-07 23:07 @brando56894 @Revelation60 I ran into the same problem. The ,pid file reference in "/etc/systemd/system/sabnzbd.service" didn't contain the right port number because it seems to have been overwritten in a update. Revelation60 commented on 2012-11-04 16:21 Well, this is a completely different issue, because sabnzbd.service is found and executed, only execution fails. Are you using custom settings, like custom ports, etc? Have you read the wiki ()? brando56894 commented on 2012-11-04 16:08 Didn't fix anything [bran@ra ~]$ sudo systemctl --system daemon-reload [sudo] password for bran: [bran@ra ~]$ sudo systemctl start sabnzbd Job for sabnzbd.service failed. See 'systemctl status sabnzbd.service' and 'journalctl -n' for details. [bran@ra ~]$ sudo systemctl status sabnzbd.service sabnzbd.service - SABnzbd binary newsreader Loaded: loaded (/usr/lib/systemd/system/sabnzbd.service; enabled) Active: failed (Result: exit-code) since Sun, 2012-11-04 07:05:34 EST; 23s ago Process: 562 ExecStart=/bin/sh/ -c python2 ${SABNZBD_DIR}/SABnzbd.py ${SABNZBD_ARGS} --pid /run/sabnzbd (code=exited, status=2) CGroup: name=systemd:/system/sabnzbd.service Nov 04 07:05:34 ra systemd[1]: Failed to start SABnzbd binary newsreader. Nov 04 07:05:34 ra systemd[1]: Unit sabnzbd.service entered failed state [bran@ra ~]$ journalctl -n Unprivileged users can't see messages unless persistent log storage is enabled. Users in the group 'adm' can always see messages. [bran@ra ~]$ sudo journalctl -n -- Logs begin at Sat, 2012-11-03 18:44:01 EDT, end at Sun, 2012-11-04 07:06:42 EST. -- Nov 04 07:05:34 ra systemd[1]: Starting SABnzbd binary newsreader... Nov 04 07:05:34 ra systemd[1]: sabnzbd.service: control process exited, code=exited status=2 Nov 04 07:05:34 ra systemd[1]: Failed to start SABnzbd binary newsreader. Nov 04 07:05:34 ra systemd[1]: Unit sabnzbd.service entered failed state Nov 04 07:05:34 ra sudo[559]: pam_unix(sudo:session): session closed for user root Nov 04 07:05:58 ra sudo[623]: bran : TTY=pts/2 ; PWD=/home/bran ; USER=root ; COMMAND=/usr/bin/systemctl status sabnzbd.service Nov 04 07:05:58 ra sudo[623]: pam_unix(sudo:session): session opened for user root by (uid=0) Nov 04 07:05:58 ra sudo[623]: pam_unix(sudo:session): session closed for user root Nov 04 07:06:42 ra sudo[708]: bran : TTY=pts/2 ; PWD=/home/bran ; USER=root ; COMMAND=/usr/bin/journalctl -n Nov 04 07:06:42 ra sudo[708]: pam_unix(sudo:session): session opened for user root by (uid=0) Revelation60 commented on 2012-11-04 11:11 Probably systemd caches all the files. Have you tried running 'systemctl --system daemon-reload'? Revelation60 commented on 2012-11-04 11:11 It is working here. Probably systemd cashes all the files. Have you tried running 'systemctl --system daemon-reload'? 
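To summarise the port/PID-file mismatch discussed above as a sketch: when https is enabled, the PID file name follows the https port, so the unit has to match. Assuming https on the default 9090 port (adjust to whatever your sabnzbd.ini uses):

# /etc/systemd/system/sabnzbd.service (excerpt)
PIDFile=/run/sabnzbd/sabnzbd-9090.pid

followed by systemctl --system daemon-reload so systemd picks up the edited unit before the next restart.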
brando56894 commented on 2012-11-04 02:55 the SABnzbd target doesn't seem to exist in the right places or something since when I try to do "systemctl start <tab complete>" nothing shows up as shown here: [bran@ra ~]$ sudo systemctl start alsa-restore.service systemd-fsck@dev-disk-by\x2duuid-27df0f80\x2d1b79\x2d4bf1\x2dac89\x2d1dc0eb68a149.service alsa-store.service systemd-fsck@dev-disk-by\x2duuid-30cd911b\x2dfb5d\x2d4f41\x2d9067\x2d39444c4726d6.service auditd.service systemd-fsck@dev-disk-by\x2duuid-450fec66\x2dbe30\x2d4245\x2db81e\x2d857e6169a113.service emergency.service systemd-fsck@dev-disk-by\x2duuid-5fb000cd\x2dd8cb\x2d4474\x2d8457\x2de7f5ebdae05b.service emergency.target systemd-fsck@dev-disk-by\x2duuid-761c9327\x2dd247\x2d43ce\x2d8d93\x2db58e14d977d3.service nss-user-lookup.target systemd-fsck@dev-disk-by\x2duuid-b2e2cf8d\x2d1cbe\x2d4b6c\x2daa52\x2d45eb5b93cec3.service plymouth-quit-wait.service systemd-fsck-root.service plymouth-start.service systemd-initctl.service proc-sys-fs-binfmt_misc.mount systemd-journal-flush.service rc-local.service systemd-random-seed-load.service rescue.service systemd-random-seed-save.service rescue.target systemd-readahead-collect.service sshdgenkeys.service systemd-readahead-done.service sys-kernel-config.mount systemd-readahead-done.timer syslog.service systemd-readahead-replay.service syslog.socket systemd-shutdownd.service syslog.target systemd-tmpfiles-clean.service systemd-ask-password-console.service systemd-update-utmp-runlevel.service systemd-ask-password-wall.service systemd-update-utmp-shutdown.service systemd-binfmt.service but it does exist as seen here: [bran@ra ~]$ locate .service|grep -i sabnzbd /etc/systemd/system/multi-user.target.wants/sabnzbd.service /usr/lib/systemd/system/sabnzbd.service Maybe I'm doing something wrong since I just migrated from sysvinit to systemd about a week or so ago and I'm still trying to understand it fully. I copied /usr/lib/tmpfiles.d/sabnzbd.conf to /etc/tmpfiles.d/sabnzbd.conf since it wouldn't start on boot for me. disastro commented on 2012-10-31 07:58 Set my user in sabnzbd_systemd, Can't write to logfile and it fails to start. Chown /opt/sabnzbd, systemctl timeouts after starting it succesfulyl thus killing it How the hell am I supposed to get this thing running? robotanarchy commented on 2012-10-25 18:16 Could you make xdg-utils optional? It pulls xorg dependencies on headless machines. Thanks for the PKGBUILD! terminalmage commented on 2012-10-24 03:21 @splippity: Please review the systemd wiki page. /etc/systemd and /etc/tmpfiles.d take precedence over the files installed by the package. splippity commented on 2012-10-24 01:45 I too have the same issue. I also have to keep changing my /usr/lib/systemd/system/sabnzbd.service to reflect the correct user. I just took the advice about copying the conf file from /lib/tmpfiles.d to /etc/tmpfiles.d so hopefully that stabilizes that issue. thanks Evilandi666 commented on 2012-10-23 21:11 Ah sorry for bothering again, I should read better next time .. thx! Revelation60 commented on 2012-10-23 21:05 No, he is correct. This means you shouldn't have to alter /usr/lib/tmpfiles.d/sabnzbd.conf. You have to copy it to /etc/tmpfiles.d/sabnzbd.conf and make your desired changes there. Evilandi666 commented on 2012-10-23 20:55 For my own luck, I saw that you didn't add /usr/lib/tmpfiles.d/sabnzbd.conf to backup array again and added it on my own. :( Is Gilrain's comment wrong on how to handle tmpfiles? 
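A hedged sketch of the tmpfiles override terminalmage refers to above; "myuser" and "users" are placeholders, and the directory line follows the d /run/sabnzbd entry quoted later in this thread:

cp /usr/lib/tmpfiles.d/sabnzbd.conf /etc/tmpfiles.d/sabnzbd.conf
# then edit the user/group columns in the copy, e.g.:
#   d /run/sabnzbd 0755 myuser users -
systemd-tmpfiles --create sabnzbd.conf

The copy in /etc/tmpfiles.d takes precedence over the one in /usr/lib/tmpfiles.d and is not overwritten on package upgrades.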
Revelation60 commented on 2012-10-23 16:58 I have added the environment file to the backup array and included After=network.target. I think that after a few releases I will make systemd the default since Arch moved to this as well (so no more ugly _systemd) and maybe even deprecate the rc.d script. Does anyone still have the restart issue? carlwgeorge commented on 2012-10-23 09:11 This kept failing to start at boot for me. I had to add this line to [Unit] section of the file sabnzbd.service. After=network.target Please include this line in the package by default. I can't think of a situation where you would be running sabnzbd without a network connection. Thanks! roguewolf commented on 2012-10-16 14:29 @archtaku Ahh, thanks mate; It seems I was overlooking the obvious. SABNZBD_PORT was still set to 8080 in /etc/conf.d/sabnzbd_systemd. I've changed it to match the PIDFile in /etc/systemd/system/sabnzbd.service and all's well. Thanks for the assistance! terminalmage commented on 2012-10-16 14:20 roguewolf: please pastebin your /etc/conf.d/sabnzbd_systemd and /etc/systemd/system/sabnzbd.service. roguewolf commented on 2012-10-16 08:15 Could someone tell me, if I want to run sabnzbd with a custom port (e.g. 9765), what do I need to do? I've tried copying /usr/lib/systemd/system/sabnzbd.service to /etc/systemd/system/ and changing PIDFile to 9765 but when I do I get 'failed (Result: timeout)' when attempting to start the service. I'm still getting to grips with systemd so apologies if I'm overlooking anything obvious. holyArch commented on 2012-10-08 10:49 backup is missing "etc/conf.d/${pkgname}_systemd" Gilrain commented on 2012-10-08 08:18 > Could you please add /usr/lib/tmpfiles.d/sabnzbd.conf and /etc/conf.d/sabnzbd_systemd to the backup section? The first one should not have been tampered with but copied to /etc/tmpfiles.d/sabnzbd.conf, where it will not be touched. See <>. I second adding "etc/conf.d/${pkgname}_systemd" to the backup array. Evilandi666 commented on 2012-10-06 23:10 Could you please add /usr/lib/tmpfiles.d/sabnzbd.conf and /etc/conf.d/sabnzbd_systemd to the backup section? Otherwise it is a bit annoying for systemd users who don't use the standard user. The update deleted half of my config..:( Thx! stefanwilkens commented on 2012-10-03 11:45 I created a wiki article: Please add, might help prevent some of these recurring questions :) Anonymous comment on 2012-10-01 11:02 @archtaku: Ah, thank you! That was somewhat trivial... I hope this helps another poor sod overlooking the obvious :) terminalmage commented on 2012-09-28 04:37 @poltak: are you running sabnzbd as a user other than the sabnzbd user? If so, it might be permissions issues on /run/sabnzbd that are preventing the pidfile from being created. You might need to chown/chmod the /run/sabnzbd directory to make it accessible. Anonymous comment on 2012-09-28 03:20 Mine doesn't even create a PID file. I've made sure that it's specified to create it in /run/sabnzbd using the --pid argument in the .service file's ExecStart parameter. Just running "sabnzbd" as user works fine and I can access the web UI, although trying "sabnzbd --pid /run/sabnzbd" as same user does not create a PID file at given path, and also does not even start the program (because the PID file doesn't exist, it times out starting the URLGrabber). I've searched through my system for all present PID files while running sabnzdb, but none for this process. Of course, I want to be able to run it as a systemd daemon which requires the PID file. 
Anyone had similar issue with creation of PID file (from --pid arg)? Revelation60 commented on 2012-09-26 07:32 That's what I said in my reply ;) stefanwilkens commented on 2012-09-25 18:25 excuse the spam, my previous reply here has a simple answer: When starting sabnzbd with https enabled, the pidfile changes from sabnzbd-8080.pid to sabnzbd-9090.pid (or whatever you've set your normal / ssl ports to in sabnzbd.ini). The fix is to let systemd be aware of this though the service file: 1, edit /etc/systemd/system/sabnzbd.service to use sabnzbd-9090.pid 2. run "sudo systemctl --system reload" Revelation60 commented on 2012-09-25 18:21 @stefanwilkens: did you change the port in the config file and in the sabnzbd.service file as well? If you haven't done the latter, systemd will look for the wrong pid file. stefanwilkens commented on 2012-09-25 18:12 Another minor issue: When enabling https (through sabnzbd.ini), systemctl start sabnzbd.service does start sab with ssl support on port 9090, yet systemd doesn't seem to be aware that it actually started. Sab is available and working, but after some time (~30 seconds) systemd appears to run into a time-out and kills the service. This is evident in the status report: [stefan@lapsteef sabnzbd]$ sudo systemctl status sabnzbd.service sabnzbd.service - SABnzbd binary newsreader Loaded: loaded (/usr/lib/systemd/system/sabnzbd.service; disabled) Active: failed (Result: timeout) since Tue, 25 Sep 2012 20:03:02 +0200; 13s ago Process: 1800 ExecStart=/bin/sh/ -c python2 ${SABNZBD_DIR}/SABnzbd.py ${SABNZBD_ARGS} --pid /run/sabnzbd (code=exited, status=0/SUCCESS) CGroup: name=systemd:/system/sabnzbd.service Again, it does start successfully with ssl support. It's systemd that seems to be unaware of this and thus kills it. Revelation60 commented on 2012-09-24 19:08 @archtaku, in the comment I mention that you can't use environment file variables for the PIDfile, nor for User and Group. The same is true for variables in variables in the environment file. It didn't work in previous versions of systemd, perhaps it works now. I'll check if it does. Anyhow, I don't like the aggressive and disrespectful tone of your comments. @stefanwilkens, I'll see if I can find the cause. stefanwilkens commented on 2012-09-24 18:22 Has anybody been noticing that, when using systemd, sab seems unable to restart itself? Saving settings is done properly, but restarting the service is failing. terminalmage commented on 2012-09-24 16:15 Reposting previous comment, made a mistake in the pastebin. Deleted original comment.: terminalmage commented on 2012-09-24 16:13: Revelation60 commented on 2012-09-22 09:16 1) I am talking about line 2 in sabnzbd_systemd.confd. 2) Parameters in definitions of parameters are not allowed in systemd configuration files. So repetition is necessary. The port variable is used in ExecStop. terminalmage commented on 2012-09-22 04:44 1) Of course I haven't read all 216 comments. Deal with it. 2) I have no idea where you're going with this, since I made no mention of the sysvinit confd file. I was referring to the SABNZBD_ARGS parameter within sabnzbd_systemd.confd. It makes no sense to hard-code the port on this line when you have already assigned the port number to SABNZBD_PORT just a few lines earlier. Revelation60 commented on 2012-09-21 07:48 If you've read the comments, you'll see that it is impossible at the moment to use a parameter for PIDfile. This is a missing feature of systemd. 
And second of all, I don't understand why you would use a variable in the configuration file itself. sabnzbd_systemd.confd is the configuration file for systemd and has nothing to do with sabnzbd.confd, because the syntax is different. terminalmage commented on 2012-09-21 03:39 There are a couple things that should be changed to make this run more smoothly for those of us that are running sabnzbd on non-standard ports: 1) The systemd unit file (sabnzbd.service) should be modified so that the PIDfile parameter uses ${SABNZBD_PORT}, rather than 8080. 2) sabnzbd_systemd.confd should likewise be modified so that it uses ${SABNZBD_PORT} instead of 8080. Revelation60 commented on 2012-09-10 08:01 The default sabnzbd download location /opt/sabnzbd/downloads and /opt/sabnzbd/downloads/incomplete is owned by the sabnzbd user. If you want a different download folder, make sure to enter the full path, and that the folder is accessible by members of the Users group. This means that all the underlying folders need the +x permission. holyArch commented on 2012-09-09 21:46 holyArch: I cannot change Incomplete downloads folder. Revelation60: Is de folder accessible by the sabnzbd user? /etc/group sabnzbd:x:999: /etc/passwd sabnzbd:x:999:999:SABnzbd user:/opt/sabnzbd:/sbin/nologin Incomplete folder can be read by Users Revelation60 commented on 2012-09-09 08:22 You can't install straight from the AUR using pacman, did you mean packer (sudo packer -S sabnzbd)? Anyhow, this is probably just a mistake on your end. You have used an outdated package stored in your filesystem, or there is a reason why sabnzbd won't upgrade. You can see if there is an error message during the upgrade. Also, if you are upgrading, you have to restart sabnzbd. Anonymous comment on 2012-09-09 02:05 I just installed this package using pacman but the sabnzbd web interface shows version 0.7.2 (and update available) even though this page says version 0.7.3. How do I fix this? Revelation60 commented on 2012-09-06 19:42 @Vash63, I don't know yet what's causing this issue. @holyArch What do you mean? holyArch commented on 2012-09-06 15:43 @Revelation60 how can I know? sabnzbd:x:999:999:SABnzbd user:/opt/sabnzbd:/sbin/nologin Vash63 commented on 2012-09-02 22:24 I almost have it working... it keeps resetting the host field to 127.0.0.1. I want it set to my PC's IP so I can access it remotely. Whether I set it in /etc/conf.d/sabnzbd_systemd or /opt/sabnzbd/sabnzbd.ini, it resets to 127.0.0.1 and I can only access it through localhost on the host system. Pretty annoying. I'm not sure it's an arch-bug or not, it wasn't happening before I switched to systemd and I don't have that issue in Windows so I'm not sure if it's a bug with this specific package or something I should submit upstream. emphire commented on 2012-09-02 22:02 I got it working. Just had to run: systemctl --system daemon-reload emphire commented on 2012-09-02 20:29 I just switched to systemd and now I'm having issues with sabnzbd. When I try to run load it manually (systemctl start sabnzbd) Sabnzbd starts-up and works when I try to access it from my browser, but systemctl doesn't exit. After a minute, systemctl says the job failed and sabnzbd gets a SIGTERM and exits. "systemctl status" shows that sabnzbd timed-out. 
I do have sabnzbd set-up to run on an alternate port (5480) but I edited sabnzbd.service to use the changed PID file name: PIDFile=/run/sabnzbd/sabnzbd-5480.pid I can see that the PID file is created when sabnzbd starts and remains there until it gets the SIGTERM: $ cat /run/sabnzbd/sabnzbd-5480.pid 9797 Does anyone have any ideas how I can troubleshoot this and get it working? Thanks! jrussell commented on 2012-08-30 20:07 Just discovered /etc/conf.d/sabnzbd - nevermind my previous 4 comments :) jrussell commented on 2012-08-28 18:56 In my sabnzbd log Ii get "API Key missing, please enter the api key from Config->General into your 3rd party program" upon every "rc.d restart sabnzbd" attempt jrussell commented on 2012-08-28 18:51 In my sabnzbd log Ii get "API Key missing, please enter the api key from Config->General into your 3rd party program" upon every "rc.d restart sabnzbd" attempt jrussell commented on 2012-08-28 18:50 I also get "Staring SABnzbd FAIL" when trying to restart sabnzbd with "rc.d restart sabnzbd" jrussell commented on 2012-08-28 18:39 My host varibale in the sabnzbd.ini file keeps reseting to 127.0.0.1 upon every reboot. the /opt/sabnzbd folder is owned by sabnzbd, sabnzbd is running as sabnzbd...all other settings are still saved, my servers etc, I cant fiqure this out? Im running sabnzbd as a daemon from rc.conf Revelation60 commented on 2012-08-22 08:14 You have to change the owner in sabnzbd.tmp as well. This gets installed at /usr/lib/tmpfiles.d/sabnzbd.conf Anonymous comment on 2012-08-21 20:29 I've edited sabnzbd_systemd so I can run sabnzbd as myself, I've also taken ownership of /run/sabnzbd so the pid can be created, but on reboot the owner of /run/sabnzbd gets reset to sabnzbd:sabnzbd meaning my user cannot create the pid file and sabnzbd fails to start Revelation60 commented on 2012-08-17 18:42 I do mention that you have to change it in the service file as well, in the comments of sabnzdb_systemd :) Anonymous comment on 2012-08-17 17:09 I had some errors with the .pid file after moving to systemd, for some reason the .pid file was generated as sabnzbd-9090.pid. I solved it by changing the .service file in /etc/systemd/system/sabnzbd.service to use sabnzbd-9090.pid, did a systemctl --system reload and then it worked again :) Revelation60 commented on 2012-08-14 16:33 Is de folder accessible by the sabnzbd user? holyArch commented on 2012-08-14 15:41 I cannot change Incomplete downloads folder! After setting download_dir to /home/holyarch/Downloads/incomplete/ in /opt/sabnzbd/sabnzbd.ini SABnzbd keeps ignoring and resetting it to "/Downloads/incomplete". Any fix? Anonymous comment on 2012-08-11 22:56 Not sure if this was a rookie mistake, but putting in my actual local ip didn't work, I tried replacing 127.0.0.1 with 0.0.0.0 and it works like a charm! thanks Revelation60 commented on 2012-08-11 17:34 You can get more information from journalctl, but this might be a race condition error. I am just guessing, but maybe the network service has to be loaded before sabnzbd. Anonymous comment on 2012-08-11 16:26 I am having trouble changing the local ip (from 127.0.0.1 to local ip) with the systemd setup (so that I can access it from my other computers). If I change it in the sabnzbd_systemd file, the unit fails to auto-load on reboot. But, I can start it manually (systemctl start sabnzbd.service) with the updated ip after boot and it works just fine. any ideas? Revelation60 commented on 2012-08-10 08:45 Well, it should boot with the default configuration. 
I can imagine that if you only change your username in the conf file, it wouldn't work. You also have to chmod the sabnzbd folder. Anonymous comment on 2012-08-09 20:18 ive updted to the newset version, but it no longer runs at boot, even though it is enabled to do so. if i try to run it manually, with "/etc/rc.d/sabnzbd start" it then asks me for my password, but in the /etc/conf.d/sabnzbd file, i have my username set to my system user . if i type in my password, or roots password, it says its failed to start: [eric@serv ~]$ su eric -c '/etc/rc.d/sabnzbd start' :: Starting SABnzbd [BUSY] [FAIL] any ideas ? Revelation60 commented on 2012-08-06 08:31 I have the same thing, but I don't know what causes this. I think this is a non-critical bug in the package itself. nicoulaj commented on 2012-08-05 20:39 When the rc.d service starts, it prints message to stdout: $ sudo rc.d start sabznbd ** (process:16917): WARNING **: Trying to register gtype 'GMountMountFlags' as enum when in fact it is of type 'GFlags' ** (process:16917): WARNING **: Trying to register gtype 'GDriveStartFlags' as enum when in fact it is of type 'GFlags' ** (process:16917): WARNING **: Trying to register gtype 'GSocketMsgFlags' as enum when in fact it is of type 'GFlags' Revelation60 commented on 2012-08-04 09:19 Maybe Restart=on-success should be set. Though I doubt if sabnzbd can then be shutdown from the webinterface. There could also be a different reason for a failing restart, I'll look into it. zebulon commented on 2012-08-03 16:09 I have tested and it seems to work fine. The only glitch is that, after using the wizard to fill up the settings, the daemon is not restarted, but left dead. Of course using systemctl start or systemctl enable and rebooting makes it work again. It is just as if the wizard was unable to restart a daemon with systemctl. Revelation60 commented on 2012-08-03 12:43 I have added the temporary file and an entry in .install to execute systemd-tmpfiles (if it exists) as to create the directory straight after the upgrade. zebulon commented on 2012-08-03 10:51 I cracked it! See To create a temp file at boot, I created a file named /etc/tmpfiles.d/sabnzbd.conf containing: d /run/sabnzbd 0755 sabnzbd sabnzbd - Then it works! That is the "proper" method to create /run files and directories with systemd by the way. See also Now for the PKGFILE, could you please add the /etc/tmpfiles.d/sabnzbd.conf as I described? Thanks in advance. zebulon commented on 2012-08-02 15:38 :) I am trying to familiarise myself with systemd. I added: ExecStart=/bin/mkdir "/run/sabnzbd" ; \ /bin/chown sabnzbd:sabnzbd "/run/sabnzbd" ; \ /bin/sh/ -c "python2 ${SABNZBD_DIR}/SABnzbd.py ${SABNZBD_ARGS} --pid /run/sabnzbd" but I get this error: sabnzbd.service has more than one ExecStart setting, which is only allowed for Type=oneshot services. Refusing. Revelation60 commented on 2012-08-02 15:27 I am guessing he didn't reboot. :) zebulon commented on 2012-08-02 14:32 Thanks. So I guess we need to fix the service file then? Iam surprised BuissonVert reported it was working properly. Or is it specific to my installation? Note I used the latest installer and converted my system to a "pure" systemd (no initscripts, but using systemd-sysvcompat). Revelation60 commented on 2012-08-02 14:25 I uses a script to find the PID and it doesn't store it to a file. zebulon commented on 2012-08-02 14:07 OK, but why is this working with initscripts? 
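Assuming the /etc/tmpfiles.d/sabnzbd.conf file zebulon describes above, the /run directory can also be created immediately, without rebooting, before starting the service; a minimal sketch (run as root):

printf 'd /run/sabnzbd 0755 sabnzbd sabnzbd -\n' > /etc/tmpfiles.d/sabnzbd.conf
systemd-tmpfiles --create sabnzbd.conf
systemctl start sabnzbd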
Revelation60 commented on 2012-08-02 13:43 The installer creates the directory, but I guess the run folder is cleared at shutdown. The problem is that you have to be root in order to write a file in the run directory, so that's why I created the folder in the installer. The ExecStart line is run as sabnzbd, so it cannot create the folder. zebulon commented on 2012-08-02 12:57 Further to my last comment, it appears that /run/sabnzbd is deleted at each reboot. Do you know what is causing this? zebulon commented on 2012-08-02 12:54 Yes, I am using 8080, and the server runs for one minute before it shuts down, actually. /run/sabnzbd does not exist. After I created /run/sabnzbd and chowned it to sabnzbd:sabnzbd, the service works normally. Should /run/sabnzbd not be created during installation? Revelation60 commented on 2012-08-02 12:28 Are you using port 8080? Can you verify if the directory /run/sabnzbd exists and if it is owned by user sabnzbd? I get this message about the pid too, but it is created very soon after. zebulon commented on 2012-08-02 12:25 Hi, systemd sabnzbd.service fails to run here. The log says: PID file /run/sabnzbd/sabnzbd-8080.pid not readable (yet?) after start. archbox systemd[1]: sabnzbd.service operation timed out. Terminating. Indeed, the PID file is not created. Do you know what is causing this? I have Arch running in VirtualBox, but this may be unrelated. Anonymous comment on 2012-08-01 20:14 The systemd support is working here. Thank you. Revelation60 commented on 2012-08-01 10:18 I have added support for systemd. Due to limitations of the systemd unit file, you must set the user, group and port in the sabnzbd.service file as well. Also note that systemd has its own config file (in /etc/conf.d/sabnzbd_systemd). Please test if everything is working. mrohnstock commented on 2012-07-21 15:24 please add "xdg-utils" as a dependency Revelation60 commented on 2012-06-19 14:42 I'll see if I can write one in the near future. Evilandi666 commented on 2012-06-14 16:19 Could you please include your systemd unit file, since systemd is now official in the core repo? skydrome commented on 2012-04-08 06:28 sqlite3 has been renamed to sqlite Anonymous comment on 2012-02-25 02:43 POSSIBLE BUG REPORT: Is the sabnzbd.desktop correct with this 'Exec=sh /opt/sabnzbd/addnzb.sh %u'? Because this doesn't work for me... If I run that in a terminal I get this back; /opt/sabnzbd/addnzb.sh %u curl: (7) couldn't connect to host rm: cannot remove `/var/tmp/%u': No such file or directory If all we're trying to do here is just launch sabnzbd then running it from /usr/bin works just fine, so I'm not sure why the Exec= isn't this; Exec=/usr/bin/sabnzbd THANKS emphire commented on 2012-02-06 19:04 If you create a directory "/opt/sabnzbd/config" and set SABNZBD_CONF="/opt/sabnzbd/config/sabnzbd.ini" in the conf.d file, then you can set sabnzbd as the owner of "/opt/sabnzbd/config" and leave everything else under "/opt/sabnzbd/" as owned by root. I think this might be a bit more secure. holyArch commented on 2012-01-24 16:36 Great app.
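Spelling out emphire's suggestion above as a sketch (run as root; SABNZBD_CONF is the conf.d variable named in the comment, and whether the init/unit files honour it depends on the package version you have installed):

mkdir -p /opt/sabnzbd/config
chown sabnzbd:sabnzbd /opt/sabnzbd/config
# and in /etc/conf.d/sabnzbd:
#   SABNZBD_CONF="/opt/sabnzbd/config/sabnzbd.ini"

This keeps the writable state in one directory while the rest of /opt/sabnzbd stays owned by root.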
Lucky commented on 2012-01-22 01:43 removed last comment, because outdated par2cmdline is now in community par2cmdline-git is in AUR provide by BlackEagle Lucky commented on 2012-01-20 12:19 So... par2cmdline is back by BlackEagle It was at community, but now its back @ AUR. Revelation60 commented on 2012-01-20 09:45 I guess you could try par2cmdline-tbb for the time being, but I haven't tested if this works. Anonymous comment on 2012-01-20 00:33 i see...well hope it makes it soon...just came back to arch so need sabnzb now harveythedog commented on 2012-01-20 00:18 It's going to community apparently. Anonymous comment on 2012-01-20 00:08 par2cmdline is gone... francoism commented on 2012-01-12 20? /cat/passwd: sabnzbd:x:999:999:SABnzbd user:/opt/sabnzbd:/sbin/nologin Thanks! francoism commented on 2012-01-12 20? I'm I missing the sabnzbd user? Thanks! Revelation60 commented on 2012-01-05 12:16 Did you update the configuration file /etc/conf.d/sabnzbd? zatricky commented on 2012-01-05 12:15 Ah. Sorry for comment-spam. I forgot to merge /etc/conf.d/sabnzbd.pacnew All sorted now :) zatricky commented on 2012-01-05 12:11 I updated today but hadn't been doing updates for a while. Now see that sabnzbd isn't daemonising - its staying in the foreground. :-/ I'll post an update once I figure out why. Anonymous comment on 2011-12-12 16:22 thanks, now it is working again! Revelation60 commented on 2011-12-10 18:19 I've updated the package. Good work, Lucky and nicoulaj! Lucky commented on 2011-12-10 18:04 I know nicoulaj, but with this upgrade i want a package that doesn't break anything. The USE_SYSTEM_IDS var is only temporary and will removed with a 0.7.x version, then system ids is for all users default (new install and upgrade). The UID, GID, AGROUP options are on my mind ;), because i want static IDs and use the users group for dl dir access. (OpenVZ setup and sabnzbd runs in a container) So i want provide this options to other users, maybe they want this too. I will think about it to remove this options later, but for now i will provide it. I see a "bug", if you set USE_SYSTEM_IDS only and upgrade the package the user/group id is non a system id! So there are 3 options how to fix this: change UID/GID manually, change UID/GID in confd file and upgrade, remove old user/group and recreate them [manually or with upgrade] (thats what i used for 0.6.10 package). nicoulaj commented on 2011-12-10 14. nicoulaj commented on 2011-12-10 14. Lucky commented on 2011-12-10 02:59 CHANGELOG: - Note: the above are the differences from 0.6.10 Releases 0.6.11/12/13 have been cancelled. CHANGES: - update to version 0.6.14 - sabnzbd uses now a system[user,group] by default only clean installs <- thx nicoulaj - sabnzbd stops for package upgrade, you need to start it again! - rework of the installscript to support uid/gid and other stuff and make it cleaner. - rework of the initscript, now supports force-{stop,restart} (kills SABnzbd process, not recommended), status. - rework /etc/confd/sabnzbd file, please merge with new one (SABNZBD_DIR, SABNZBD_UID, SABNZBD_GID,USE_SYSTEM_IDS , ALTERNATIV_GROUPS) - If you want use SYSTEM IDS too, upgrade this package -> merge confd and enable USE_SYSTEM_IDS (if you also want your own uid/gid enable this too) -> upgrade package again and sabnzbd uses now new uid/gid. chown your incomplete/download dirs too, if required. PACKAGE: SOURCE: @nicoulaj please test this again. 
thanks @all feel free to test this, too and give feedback :) @Revelation60 I think now you can merge the AUR package with this source. Or wait 1-2 days, if someone has trouble with my package/sources. kakalaky commented on 2011-12-07 19:44 Experiencing the same problem as Ralfk. Anonymous comment on 2011-12-07 18:32 2011-12-07 19:28:33,432::ERROR::[SABnzbd:1410] Failed to start web-interface: Traceback (most recent call last): File "/opt/sabnzbd/SABnzbd.py", line 1402, in main cherrypy.engine.start() File "/opt/sabnzbd/cherrypy/process/wspbus.py", line 184, in start self.publish('start') File "/opt/sabnzbd/cherrypy/process/wspbus.py", line 147, in publish output.append(listener(*args, **kwargs)) File "/opt/sabnzbd/cherrypy/process/servers.py", line 62, in start self.wait() File "/opt/sabnzbd/cherrypy/process/servers.py", line 97, in wait raise self.interrupt error: [Errno 97] Address family not supported by protocol 2011-12-07 19:28:33,433::ERROR::[SABnzbd:300] Fehler beim Starten der Weboberfläche. : [Errno 97] Address family not supported by protocol so I think I have the mentioned problem... Revelation60 commented on 2011-12-07 06:58 No problem here, but if any of you report it, I'll revert. Lucky commented on 2011-12-07 02:25 by shypike on December 6th, 2011 3:47 pm "We're withdrawing releases 0.6.11 and 0.6.12 while preparing 0.6.13. There are too many reports of systems affected by the "will now listen on all localhost addresses" feature. We're removing the feature all together." @bszmyd I don't have a issue with the "listen on all localhost addresses" feature, but maybe other users... so feel free to wait for an 0.6.13 release. Thank you for this information. bszmyd commented on 2011-12-07 01:54 0.6.11 and 0.6.12 have been withdrawn by the developers, anyone having issues? Revelation60 commented on 2011-12-06 19:46 For now, I'll just increment the version number. Lucky commented on 2011-12-05 20:21 @Revelation60 Thats ok, please use NOT the system (UID/GID) sources yet. I need to change some stuff there before we can merge this to the AUR Package. (upgrade issues and so on) Revelation60 commented on 2011-12-05 20:05 I'll update tomorrow. You can probably update yourselves by changing the release number. francoism commented on 2011-12-05 19:45 Again new version.. 0.6.12.. Thanks for your package :) nicoulaj commented on 2011-12-01 08:04 Thanks, I tested it. I don't know if it's a packaging issue or a pure Sabnzbd one, but after the upgrade, *some* of the settings were lost, like the queue directory, so I lost the whole queue of downloads. This can be fixed manually by moving <TMP_DIR>/**/*.nzb.gz back to your watch directory though. Lucky commented on 2011-12-01 00:33 IMPORTANT You also need to chown your dirs/files again to your user/group! chown -R sabnzbd.sabnzbd /path/to/dl/dir PLEASE TEST THIS PACKAGE. 
Lucky commented on 2011-12-01 00:24 CHANGELOG: Improve detection of encrypted RAR files during download SABnzbd will now listen on all "localhost" addresses This should prevent problems on IPV6-enabled systems Remove unneeded extra temporary folder level in Generic Sort When par2 fails and SFV-check enabled, verify using SFV files Perform extra checks on job administration Fix logging of pre-queue script result Better support for Yahoo pipes Accept NZB files containing incorrect dates Make newzbin "Get bookmarks now" button independent of automatic readout OSX: Fix Growl issues OSX: Show the promised 10 queue entries in the OSX menu instead of 9 CHANGES: update to version 0.6.11 sabnzbd uses now a system[user,group] <- thx nicoulaj PACKAGE: SOURCE: PLS STOP SABNZBD BEFORE YOU UPGRADE! Anonymous comment on 2011-11-17 17:09 I'm not sure if it's the same thing, but yesterday I had to do a hard reset of my server as well. I couldn't contact sabnzbd and rebooting was stuck on shutting down sabnzbd. So possibly the same thing... Haven't had the problem since then though. Anonymous comment on 2011-11-17 16:44 Is anyone else having problems with python2-2.7.2-1 (x86_64) crashing the system? The only thing I use the python2 package for is SABnzbd+. The crashes occur pretty much every 2 or 3 hours of use. It's making my server unusable. Anonymous comment on 2011-11-07 20:36 It's okay for me, I just checked my sickbeard-git dependencies and it said python2-cheetah. Anyway, you can always forcefully remove python-cheetah using pacman -Rdd (I think, check man pacman to be sure). Then install python2-cheetah. wilberfan commented on 2011-11-07 18:41 Is it still the case that we can't upgrade to sabnzbd 0.6.10 if we have sickbeard installed, because of the python- python2- issue? Or is there a workaround for that...? nicoulaj commented on 2011-10-31 16:20 @Lucky Why don't you just let the system attribute the UID/GID ? Creating a system user will ensure a UID < 1000, isn't this sufficient in this case ? Anonymous comment on 2011-10-22 22:11 I appreciate the help Dala, I am running out of ideas. How do I run python in debug mode? The frustrating part is it is so random. I think I am going to put this on the back burner for now. Dala commented on 2011-10-22 21:36 okay, just making sure :) good luck with debugging, you could try if you can run python in debug mode to get more information about the segfault? Since segfault is not something you'd expect from a evaluated language.. Anonymous comment on 2011-10-22 20:24 @Dala Yes, I have run memtest86+ numerous passes, no failures. Dala commented on 2011-10-22 08:30 @brad0383 Have you tried running a memcheck86+ to make sure it's not your memory causing problems? Anonymous comment on 2011-10-21 18:39 @Lucky Thanks for your help. I have been using the sabnzbd forum as well. They are pointing the finger at everything but sabnzbd. The weird thing, is that sometimes it will run for a day or two, then it will crash every 5 minutes. Lucky commented on 2011-10-21 17:07 @brad0383 sry, but i can't help you because on my side sabnzbd works fine. please use the sabnzbd forum, irc channel or fill a bug report, maybe there would be help for you. Anonymous comment on 2011-10-21 11:51 I just wiped my drive and re-installed Arch thinking that there might be some strange corrupt installation of something. I am still getting sabnzbd crashes with a segmentation fault. I can't believe I am the only one getting this. 
Revelation60 commented on 2011-10-20 17:45 I've updated the package, but because of the transition to python2-cheetah, it's going to be one of those painful updates again. If you have sickbeard installed, you can't update, because the sickbeard package still depends on python-cheetah. If you don't have it installed, you can remove python-cheetah with pacman -Rdd python-cheetah. Anonymous comment on 2011-10-19 18:48 I spoke too soon! It ran all day yesterday just fine. Now it's crashing again. Anonymous comment on 2011-10-19 13:02 Update to my crashing problem: If your download queue size is more than the available space on the incomplete directory, it will crash. I had the completed directory on my /home folder but not the incomplete. I changed the incomplete directory to the /home partition and now everything works great. Lucky commented on 2011-10-17 23:38 ALL OK my fault s, h, b, create_time, original_req^headers = cache_date to s, h, b, create_time, original_reqheaders = cache_date A typo in the build package, but i don't know how i made it ;) Lucky commented on 2011-10-17 23:13 CURRENTLY NOT WORK i got some errors Traceback (most recent call last): File "/opt/sabnzbd/SABnzbd.py", line 43, in <module> import cherrypy File "/opt/sabnzbd/cherrypy/__init__.py", line 161, in <module> from cherrypy import _cptools File "/opt/sabnzbd/cherrypy/_cptools.py", line 228, in <module> from cherrypy.lib import caching as _caching, wsgiapp as _wsgiapp File "/opt/sabnzbd/cherrypy/lib/caching.py", line 130 s, h, b, create_time, original_req^headers = cache_data SyntaxError: can't assign to operator Lucky commented on 2011-10-17 22:55 @krzd thanks for info CHANGELOG: Allow saving of category paths ending in a *. This feature (*) will prevent the creation of job folders in the final folder Fix incompatibility with unrar 4.01 regarding detection of encrypted files Create .bak (backup) file for sabnzbd.ini before modifying it Convert ambiguous Windows paths like D: and D:folder to D:\ and D:\folder Fix file name encoding problems when verifying using SFV files Prevent reading newzbin bookmarks when newzbin credentials are not set OSX: Compatible with Growl 1.2.2 and 1.3 OSX: Prevent changes to SABnzbd.app folder which confused the OSX Firewall OSX: Fix access rights of SABnzbs.app so that restricted users can run SABnzbd OSX: Combined SnowLeopard/Lion DMG and separate Leopard DMG CHANGES: update the dependency "python-cheetah" to "python2-cheetah" update to version 0.6.10 PACKAGE: SOURCE: Anonymous comment on 2011-10-17 20:27 0.6.10 is out. python-cheetah is no longer in AUR, it is now available in [community] under the correct named python2-cheetah. Anonymous comment on 2011-10-17 02:39 I ran from the CLI and got this message: /usr/bin/sabnzbd: line 3: 1698 Segmentation fault python2 /opt/sabnzbd/SABnzbd.py -f ${HOME}/.sabnzbd.ini "${@}" Revelation60 commented on 2011-10-16 18:51 Do you get a crash message? Maybe there is something wrong with the rights of the log folder or something like that. Anonymous comment on 2011-10-16 16:34 sabnzbd-git does the same thing. I may just give up on this. Anonymous comment on 2011-10-15 22:10 Every time I think it's working it starts crashing again. I am going to give sabnzbd-git a try and report back. Anonymous comment on 2011-10-15 20:13 @Revelation60 Wow! How did I miss that one?! Thanks! Revelation60 commented on 2011-10-15 19:34 @brad0383: did you correctly set the permissions of your /home/brad/ folder? You must allow browsing access (+x). 
Anonymous comment on 2011-10-15 19:32 I re-installed and seems to be working now. Why can't I change the completed Download directgory to my home Videos directory? I have tried changing the permissions to 777, I also tried chown to sabnzbd:sabnzbd and it still gives me the error of: "Incorrect parameter Cannot create complete_dir folder /home/brad/Videos" Lucky commented on 2011-10-15 18:24 @nicoulaj I see, but i want a unique uid and gid for sabnzbd. id 420 maybe, is there a list which ids other services use, i found only a list for ids <100. The ID need to be <1000. @brad0383 Mhmm, sry no idea, but give a try to sabnzbd-git or sabnzbd-develop-git. If this also not help, fill a bug upstream. () nicoulaj commented on 2011-10-15 17:37 Hi, great package but it creates a non-system user that shows up as a regular user in GNOME logon screen for example. Replacing the "useradd" line in sabnzbd.install with the following one should fix it (untested): useradd -g sabnzbd -d /opt/sabnzbd -s /bin/false -m -r sabnzbd &> /dev/null Anonymous comment on 2011-10-15 13:56 @Lucky I spoke too soon, it's still crashing. Any other ideas? Anonymous comment on 2011-10-14 22:34 @Lucky It appears to be working now. The crashes were very random so I will let it run for awhile and post an update in a day or 2. THANKS! Lucky commented on 2011-10-14 20:21 @brad0383 Test with old python2-yenc please: Anonymous comment on 2011-10-14 18:57 I am having a problem with SABnzbd crashing. It will just stop running. I have ran it with debug mode on and it just stops, nothing is the log. It is completely random how long it will run, but it stops during decoding. The last line of the log is always similar to this: 2011-10-14 13:30:55,661::INFO::[assembler:85] Decoding /home/<user>/Downloads/Sabnzbd/incomplete/xxxxxxxxxxxxxxxxxx.r00 yenc Any ideas? LeCrayonVert commented on 2011-09-24 08:14 Lucky > thx, it works ;) Lucky commented on 2011-09-22 06:48 @LeCrayonVert Hi, did you add the correkt SABNZBD_KEY= in /etc/conf.d/sabnzbd, then the rc.d script will work. I know "this problem" so i add a kill command in sabnzbd.install (for remove).. LeCrayonVert commented on 2011-09-21 18:49 It seems that the SABNzbd.py process isn't killed when you stop the daemon with /etc/rc.d/sabnzbd stop You might want to review the stop) section in the rc.d script (especially, the curl call). Revelation60 commented on 2011-09-14 07:16 I've updated the package. The transition won't be smooth, because python-yenc en python2-yenc are currently the same package. I've requested the maintainer of python2-yenc to add a conflict for now. I'm also going to try and write a systemd service file with the same features as the current initv script. Lucky commented on 2011-09-10 19:11 INFO: If anyone want a separate datadir like kylef yet, use (sabnzbd-git) by dryes. Maybe with the next release i will change this, too. CHANGES: update the dependency "python-yenc" to "python2-yenc" update to version 0.6.9 PACKAGE: SOURCE: cdemoulins commented on 2011-08-29 12:16 Hi, please update the dependency "python-yenc" to "python2-yenc". The package "python-yenc" will be deleted in two weeks. Anonymous comment on 2011-08-15 20:46 @Lucky builds and runs as expected, allows me to pass the --pause parameter for pausing on startup. It is a bit unusual to build the full command line in SABNZBD_ARGS, normally something like SABNZBD_ARGS is just appended to the hard-coded arguments (like setting the config file, etc.) 
inside of the init.d script, but it certainly works, and I suppose it's more flexible. mrohnstock commented on 2011-08-15 20:28 @Lucky works as expected. Thanks. Lucky commented on 2011-08-15 00:21 @kylef I will do this maybe later or Revelation60 does it. I did read your post from Sat, 09 Jul 2011 15:49:20. does the my 0.6.8 package work without problems? I got 6 dls but no feedback. kylef commented on 2011-08-14 23:09 Instead of making the sabnzbd user/group capable of editing and modifying the sabnzbd source code, it might be advisable to separate the datadir from the package dir in the next release. Start sabnzbd with "-f /var/lib/sabnzbd/sabnzbd.ini" and make /var/lib/sabnzbd the homedir for sabnzbd user. /opt/sabnzbd then can be root:root while /var/lib/sabnzbd is sabnzbd:sabnzbd. The sabnzbd user really shouldn't be able to modify the static package data. Lucky commented on 2011-08-14 09:59 @potatoe Ahh this was my fault, because i forgot remove the post_upgrade (only for testing) line at the end of sabnzbd.install. Also added SABNZBD_ARGS, have fun and please test it. Lucky commented on 2011-08-14 09:58 @potatoe Ahh this was my fault, because i forgot remove the post_upgrade (only for testing) line at the sabnzbd.install. Also added SABNZBD_ARGS, have fun and please test it. Anonymous comment on 2011-08-14 02:15 sabnzbd.install throws an error on a fresh install -- post_upgrade's "chown -R sabnzbd" seems to be running before post_install's "useradd sabnzbd" because it complains about "chown: invalid user: 'sabnzbd'". Also, would it be possible to add a SABNZBD_ARGS variable to conf.d/sabnzbd which gets appended to the command line for sabnzbd in the init script? Most other init scripts with a conf.d file do seem to provide a PROGRAMNAME_ARGS option to pass custom command-line options to the daemon. I'd like to make sabnzbd start paused, which I believe is only possible via command-line options, not config files. Lucky commented on 2011-08-06 10:00 Ahhh Revelation60 thanks for the info, i will change it. Revelation60 commented on 2011-08-06 09:54 Thanks, Lucky! I've adapted most of your changes. Note that your version still had the old python-feedparser dependency (should be python2-feedparser). Lucky commented on 2011-08-06 09:33 @Revelation60 I clean up your pkg and add more variables. maybe you want to merge something. If you upload it to AUR and Sources are wrongly displayed replace pkgname="${_pkgname,,}" with pkgname=sabnzbd in the PKGBUILD @Amarant I added user/group support to post_upgrade, change /etc/conf.d/sabnzbd to your USER/GROUP combination. defaults: SABNZBD_USER="sabnzbd" SABNZBD_GROUP="sabnzbd" johnnyponny commented on 2011-08-04 19:16 0.6.7 is out. Works fine. Anonymous comment on 2011-08-02 14:48 I don't run sabnzbd under the default sabnzbd user, so every time this package is updated I have to manually chown -R all the files. In order to fix this I would like the username that is specified in /etc/conf.d/sabnzbd to be used in the post_upgrade hook. So the post_upgrade function in sabnzbd.install would look like this: post_upgrade() { SABNZBD_USER=sabnzbd . /etc/conf.d/sabnzbd chown -R $SABNZBD_USER:$SABNZBD_USER /opt/sabnzbd } Anonymous comment on 2011-07-28 14:07 0.6.6 is out. gee commented on 2011-07-24 01:07 In case anyone is interested here's my systemd service file: [Unit] Description=NZB grabber [Service] Type=simple ExecStart=/usr/bin/sabnzbd Restart=always User=sabnzbd Group=sabnzbd [Install] WantedBy=multi-user.target It works just fine! 
Revelation60 commented on 2011-07-10 15:02 I have fixed the dependency and I have added the removal of the group and the mime associations. The uninstall script also tries to close sabnzbd if it is running. kylef commented on 2011-07-09 15:49 The "python-feedparser" dependency is incorrect, it should be "python2-feedparser", "python-feedparser" would be the python3 version. (See ). The install script doesn't cleanup properly, it creates a group but the group is never removed. Do (xdg-mime install) and (xdg-icon-resource install) also need to be removed on post_uninstall? You could check the debian/ubuntu package out because they install sabnzbd in the FHS, /usr/bin/sabnzbdplus, /usr/share/sabnzbdplus/{interfaces,locale}/, etc. You can find the diff against the original sabnzbd source here: Anonymous comment on 2011-06-27 04:59 Try adding the user that runs sabnzbd to the 'power' group. kevku commented on 2011-06-26 21:59 what do i need to do to make the suspend/shutdown on queue finish work? Revelation60 commented on 2011-06-10 15:33 Forgot to submit. Done now :) Lucky commented on 2011-06-10 15:16 No Problem Revelation60, but you didn't replace it. ;) but you don't need to raise up the pkgrel. btw. You can also use ' instead of " quotes, but i like the " one ;) Lucky commented on 2011-06-10 15:15 No Problem Revelation60, but you didn't replace it. ;) but you don't raise up the pkgrel. btw. You can also use ' instead of " quotes, but i like the " one ;) Revelation60 commented on 2011-06-10 13:14 I have updated sabnzbd and I've replaced the pkgbuild with your superior one, Lucky. Thanks! Lucky commented on 2011-06-10 10:46 maybe you can made a rework of your PKGBUILD change startdir to srcdir and pkgdir change arch to any and so on... This is my Version of this PKGBUILD: Lucky commented on 2011-06-10 10:05 New Version SABnzbd v0.6.4 gablink commented on 2011-06-03 10:40 Now everything is ok thanks :) Revelation60 commented on 2011-06-02 16:44 Fix it now. Sorry, thought I'd fixed that before! gablink commented on 2011-06-02 13:24 python2 command in /usr/bin/sabnzbd please! :( Revelation60 commented on 2011-06-02 09:17 Good news! nzb file association is working now. This means you can open the nzbs from within the browser and downloading should start automatically. Don't forget to put the NZB KEY (in addition to the API key) in /etc/conf.d/sabnzbd. @monty: fixed this too mrohnstock commented on 2011-05-29 09:07 python2-fix in this package is not complete. /usr/bin/sabnzbd needs also the fix. Anonymous comment on 2011-05-25 11:18 Looks like 0.6.2 was released. psychedelicious commented on 2011-05-14 16:18 Can you make the sabnzbd script installed to /usr/bin/sabnzbd call python2 please? Revelation60 commented on 2011-05-04 21:03 Updated to 0.6.0. To add nzb's by file association, a new key is needed. I'll see if I can add support for that. Revelation60 commented on 2011-05-04 21:03 Updates to 0.6.0. To add nzb's by file association, a new key is needed. I'll see if I can add support for that. johnnyponny commented on 2011-05-02 18:46 Works with 0.6.0RC2 as well. Great man :D Revelation60 commented on 2011-04-06 17:12 Thanks! I had already tested it myself, but it's good to know that it works elsewhere too :) Anonymous comment on 2011-04-06 16:49 I just used your PKGBUILD for 0.6.0RC1, and everything went smoothly, so the next release should be no problem I reckon. Just thought I'd let you know! Revelation60 commented on 2011-03-03 11:26 Thanks LeCrayonVert, I'll check it out! 
LeCrayonVert commented on 2011-03-03 11:24 Well I think you should take a look at how the lottanzb package manages the mime type, especially the following files (in the lottanzb package) |-- lottanzb.applications.in |-- lottanzb.desktop.in |-- man | `-- lottanzb.1 `-- mime |-- lottanzb.keys.in |-- lottanzb.mime.in `-- lottanzb.xml.in LeCrayonVert commented on 2011-03-02 21:36 And regarding the script, it seems it just call "addnzb" , so it will not start the sabnzbd daemon....but if it is running, and IF you pass an nzb file as an argument, it will be added to the download list. So pretty useless in the gnome menus (you can't pass argument)...So I guess this .desktop file is only to be use with the "open with" function of your file manager. LeCrayonVert commented on 2011-03-02 21:32 Revelation60 > I still can't see that icon because you should NOT put an absolute path to the icon (/opt/...) but just the basename of it (without the extension .png) and put the PNG file in /usr/share/pixmaps/ by adding a line in the PKGBUILD ;) Revelation60 commented on 2011-02-20 12:07 I've added an icon for sabnzbd and for the .nzb files. :) The script to automatically load nzbs into sab when clicked still doesn't work correctly. Could someone have a look at that? LeCrayonVert commented on 2011-02-17 20:07 There is no icon in the gnome menu entry for sabnzbd+ ... Could you please add an icon and put Icon=xxxxxxx in the desktop file ? Thx ;) Revelation60 commented on 2011-01-31 09:13 The idea behind that is that you know for certain that the stop IP and port is correct. If I were to remove this and you would change the IP and port in sabnzbd and not in the config file, the call to the shutdown script would fail. Anonymous comment on 2011-01-31 09:07 Oh I get the need to use the IP and port settings (set in /etc/conf.d/sabnzbd) to STOP the daemon. What I don't understand is why we use them to START the daemon. The way it is now, this doesn't allow you to change the IP from within sabnzbd, it always changes back when you restart the daemon. By the way, I noticed that you actually can use 0.0.0.0 as $SABNZBD_IP and it is still able to stop properly using the rc script. So my whole reason to make a comment is kind of void :) Revelation60 commented on 2011-01-31 08:54 The IP and port are used to shutdown sabnzbd. I think it is really unfortunate too that the best way to shutdown sabnzbd is through the web api instead of through calling the python file. If the last were possible, the whole configuration file could be removed. Anonymous comment on 2011-01-30 22:40 I was wondering why the rc script sets the IP and port when starting the daemon? It doesn't really serve a purpose as far as I can tell? The IP and port are set in sabnzbd.ini anyway. The way it is now, I couldn't set the IP to 0.0.0.0 (which I need to to, because I need to access from localhost AND remote) using the $SABNZBD_IP variable, because then the stop script doesn't work (kind of hard to send a command to 0.0.0.0 :-) ). I just removed '-s $SABNZBD_IP:$SABNZBD_PORT' from the start script. Now I can set the IP to what I want in the General config of sabnzbd, and I just leave the $SABNZBD_IP set to 127.0.0.1. This seems like the most logical to me? I'm no expert by any means, so please correct me if I'm missing something! Revelation60 commented on 2011-01-26 09:53 Ah I see, the package has been renamed because pyopenssl is for python 3.1 now. I'll fix the package. 
Anonymous comment on 2011-01-25 23:53 I had to install python2-pyopenssl to get it working with ssl. teek commented on 2011-01-06 21:43 Well, got it all nice and working now, made myself the user in /etc/conf.d/sabnzbd and "chown -R myself /opt/sabznbd" to prevent "can't write logs" -errors. I find it pretty hard to configure a standalone, boot up starting sabnzbd! Hope all my comments will help someone. teek commented on 2011-01-06 21:07 Strangely it also overwrites the folders I define (for monitoring, complete etc.) when using the startup script. teek commented on 2011-01-06 20:56 Ok, thanx for the info :) To be honest I was complaining based on older observations, when I opened /etc/con.d/sabnzbd instructions were right there. Although it asks for the API key, not the session key which had me a bit confused at the start (but not for long ;) ). Another tip, if you want sabnzbd to be accessible from other computers you will be told to change the "host" value, in my case it worked by choosing this to be the same as the actual IP. You can change this in the web interface and in /opt/sabnzbd/sabnzbd.ini but it will revert on every boot... until you set SABNZBD_IP="xxx.xxx.xxx.xxx" to the IP you want for the "host" value. Just putting it here to prevent other from looking as long as me :) Good work, keep it up! Revelation60 commented on 2011-01-06 09:08 The session key is there to prevent a certain web exploit:. The session key can be found under Config -> General if I remember correctly. The key has to be addded to /etc/conf.d/sabnzbd because I use an API command to shut down sabnzbd. Without the key, that won't work. Maybe I can find another way to shut it down, using the python file. That way, the session key won't be needed. teek commented on 2011-01-06 07:53 Am I the only one that wonders what a session-key does, where to get it and why we need it? Why doesn't it just work? Anonymous comment on 2010-11-16 00:46 If you edit /etc/conf.d/sabnzbd and insert the relevant data, the stop/start script that comes with the package works fine Anonymous comment on 2010-11-10 12:19 Ok, I've fixed stopping the daemon... /etc/rc.d/sabnzbd needs to be edited so that it's at follows... stop) stat_busy "Stopping SABnzbd" curl -f "$SABNZBD_PROTOCOL://$SABNZBD_USPW$SABNZBD_IP:$SABNZBD_PORT/sabnzbd/api?mode=shutdown&apikey=$SABNZBD_KEY" &> /dev/null if [ $? -gt 0 ]; then stat_fail else pkill -f "python2 /opt/sabnzbd/SABnzbd.py" rm_daemon sabnzbd stat_done fi ;; All i've done is add the line: pkill -f "python2 /opt/sabnzbd/SABnzbd.py" Anonymous comment on 2010-11-10 11:39 Hi, I'd like to mention that the stop and restart commands for the daemon aren't actually stopping the sabnzbd process. Revelation60 commented on 2010-11-07 22:15 Yeah, of course. I accidentally used an old version of the PKGBUILD which didn't have that yet and I overlooked. Thanks! Anonymous comment on 2010-11-07 21:59 Wouldn't it be relevant to change the dependencies in the PKGBUILD from python to python2? Revelation60 commented on 2010-11-07 21:02 I've updated the package to 0.5.5 and I've added the sed command. I had my doubts at first whether to change the package itself, but it seems like we don't really have a choice. kevku commented on 2010-11-07 18:47 0.5.5 is out mhellwig commented on 2010-10-29 15:54 @MyWorld: well yeah, I already did, I'm just saying the package should be fixed .. MyWorld commented on 2010-10-29 08:17 "$@" mhellwig commented on 2010-10-28 14:11 exactly .. 
as in: /usr/bin/sabnzbd calls python instead of python2 also I'm not sure about all the .py-files in /opt/sabnzbd .. all the ones that have a #!/usr/bin/python line at the top should be #!/usr/bin/python2 no? would be a simple sed in the PKGBUILD .. psychedelicious commented on 2010-10-28 03:42 please patch the sabnzbd file to use python2 too. I run sabnzbd by running the command 'sabnzbd' in a terminal. Thanks. Revelation60 commented on 2010-10-27 18:58 I am afraid I don't understand. This version calls python2: su - $SABNZBD_USER -c "python2 /opt/sabnzbd/SABnzbd.py -f $SABNZBD_CONF -s $SABNZBD_IP:$SABNZBD_PORT -d" -s /bin/sh mhellwig commented on 2010-10-27 18:52 aaagh I had my old version patched to call python2 and now this "new" version again calls python? have to go back across everything, damn. Revelation60 commented on 2010-10-24 18:39 I've updated the package and fixed a bug concerning file associations. That still doesn't work as well as I'd wish (for example with files containing spaces), so if someone with more bash skills can take a look at that, that would be great :) Anonymous comment on 2010-10-22 15:55 Fresh arch install a few days ago, so have python3.. I just changed python to python2 in the init script (line 15), seems to be working fine. Assuming you have python2 installed. Revelation60 commented on 2010-10-20 18:35 I am on a trip in the us now, so sorry for the late reaction. I will try to fix it tonight, but I may not be able to get to a decent workstation until monday. Sorry! mhellwig commented on 2010-10-20 18:27 due to the Big Python Rebuild this package REALLY should be updated asap to change all calls to python to python2 instead. Of course one can do that manually but it is a bit annoying Anonymous comment on 2010-10-19 19:53 Thank you gee. Your suggestion fixed the problem. Anyone needing any extra help just edit the file gee mentioned in a text editor (with root privileges) and change the python tag to python2. gee commented on 2010-10-10 22:09 I have an issue now as it defaults to python 3 and not python 2 and then cannot start. If I change the script in /usr/bin/sabnzbd to use explicitly python2 it works like before... Can you add that to the package? Thanks Revelation60 commented on 2010-09-10 06:50 You're absolutely right. Sorry for the typo, I will fix it as soon as possible. wilson commented on 2010-09-09 22:45 I had to change the curl line in the init file from ?apikey to &apikey to make it shutdown properly. Otherwise it was giving me the error "error: API Key Required", which has an exit status of 0 and so stat_fail doesn't catch it. Revelation60 commented on 2010-09-03 17:54 Okay, I have made some changes. The first is a better way to shut down sabnzbd using the API and the other is file association with nzb files. So from now on you can just click on .nzb files (or use xdg-open) and the nzb will automatically be loaded into sabnzbd. Revelation60 commented on 2010-08-27 16:10 As you may have noticed, I am the new package maintainer. I have updated sabnzbd to version 0.5.4 and I have implemented some of your suggestions, such as UlyssesNL's fix to support proper shutdowns on password protected sabnzbd instances. I also added a protocol variable to the configuration file, so if you use https you are able to shutdown correctly. Malstrond commented on 2010-08-26 02:50 SABnzbd 0.5.4 has been released.. I have flagged this out-of-date to notify you, I hope you dont mind.. 
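(Editor's sketch, not a comment from the thread: one way the "simple sed in the PKGBUILD" that mhellwig suggests above could look. It is illustrative only and not the maintainer's actual code.)

# run from the PKGBUILD after the sources have been copied into the package dir
find "$pkgdir/opt/sabnzbd" -name '*.py' \
    -exec sed -i 's|^#!/usr/bin/python$|#!/usr/bin/python2|' {} +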
amkan13 commented on 2010-07-12 14:48 on a fresh install i get the error "find: `/opt/sabnzbd': No such file or directory" but /opt/sabnzbd shouldn't exist because i have not installed it before hyness commented on 2010-07-10 13:20 @VuDu: was your comment directed at me or the package maintainer? All I did was move that rm from the PKGBUILD to an install file so I could install the latest version. I agree that it should not be necessary. Anonymous comment on 2010-07-09 13:26 "find /opt/sabnzbd -name "*" -type f -not \( -name sabnzbd.ini \) -exec rm -rf {} \;" -> this seems wrong. There should be no need to manually delete previous installed files. hyness commented on 2010-07-09 12:50 To fix the problem with the PKGBUILD trying to remove files, I commented out that line and created a sabnzbd.install file with these contents... pre_install() { find /opt/sabnzbd -name "*" -type f -not \( -name sabnzbd.ini \) -exec rm -rf {} \; } You can also use the --asroot option, but really this PKGBUILD should be fixed with an install file because the build shouldn't modify any files Revelation60 commented on 2010-07-03 08:34 It fails here because I don't have the rights to remove files from /opt/sabnzbd. ben-arch commented on 2010-07-03 03:20 Just installed, seems OK. Thank you farhany commented on 2010-07-02 13:33 OK folks, new PKGBUILD. I made both backup entries relative. Tell me if there is a problem. Anonymous comment on 2010-07-02 11:37 I'm also having that problem: ==> ERROR: Invalid backup entry : /etc/conf.d/sabnzbd ==> ERROR: Makepkg was unable to build sabnzbd. but from : backup An array of files to be backed up as file.pacsave when the package is removed. This is commonly used for packages placing configuration files in /etc. The file paths in this array should be relative paths (e.g. etc/pacman.conf) not absolute paths (e.g. /etc/pacman.conf). Revelation60 commented on 2010-07-02 08:33 @farhany: you should remove the / before /etc/conf.d/sabnzbd. The backup entries are referring to the locations in the package, not to the file system itself. farhany commented on 2010-07-02 00:14 Fresh build here, please let me know if there's a problem. Malstrond commented on 2010-07-01 23:46 I'm always getting ERROR: Invalid backup entry : /etc/conf.d/sabnzbd when using this PKGBUILD. I have to remove the backup entries. Revelation60 commented on 2010-06-28 08:03 wilcoxjay, I used the same patch, but I removed opt/sabnzbd/sabnzbd.ini from backup, since it complained. Anonymous comment on 2010-06-27 22:13 this straightforward patch to the PKGBUILD brought 0.5.3 in with no problems. Anonymous comment on 2010-06-06 09:05 Having sabnzbd in my user groups, make me to be able to navigate in the /opt/sabnzbd directory as a user. But it doesn't permit sabnzbd to use my home folders. It's weird that i am the only one wanting to use my home directory...i've found anything on the official website... Revelation60 commented on 2010-06-05 13:17 Well, it shouldn't make a difference if you use gpasswd. Anyway, if you have sabnzbd as a user for sabznbd, you should add sabnzbd to your usergroup celos. That way sabnzb can access your folder. In theory, but I see it isn't working :( Anonymous comment on 2010-06-05 12:25 hmm with usermod, i think it should be sudo usermod -aG sabnzbd celos, but i did it with gpasswd: sudo gpasswd -a celos sabnzbd. Still doesn't work ... Revelation60 commented on 2010-06-05 10:29 I think you are doing the opposite :P Try something like sudo usermod -aG celos sabnzbd. 
Anonymous comment on 2010-06-05 10:22 I added the user to the sabnzbd and did'nt work (after restarting the user session), still can't create directories in the home folder. add sabnzbd to your user group? sabnzbd to you use group? Anonymous comment on 2010-06-05 09:53 Ok, I didn't know about that file. It is SABNZBD_USER="sabnzbd", if I modify this to SABNZBD_USER="celos", i'll be able to modify directories folder to /home/celos/videos for example ? Revelation60 commented on 2010-06-05 09:28 Did you edit the sabnzbd.confd file? If you leave the line SABNZBD_USER="sabnzbd" unmodified, it should work. Anonymous comment on 2010-06-05 09:25 If i run sabnzbd as a daemon using /etc/rc.d/sabnzbd start , i can't modify directories folder, it seems i don't have the rights...even if i modify in sabnzbd.ini file, anyone as the same problem ? thx teek commented on 2010-05-28 07:28 Ah, yes, I see now... thanx, it works now :s Revelation60 commented on 2010-05-25 08:41 As I said in my previous post, it is required to delete the sabnzbd folder. If you don't old python files will remain on the filesystem and will still be loaded into sabnzbd. This will cause conflicts. teek commented on 2010-05-25 08:24 Build and installed all dependencies again... same errors. teek commented on 2010-05-25 07:36 I get eroors with the latest version... any suggestions? [me@mybox ~]$ sabnzbd Traceback (most recent call last): File "/opt/sabnzbd/SABnzbd.py", line 63, in <module> import sabnzbd File "/opt/sabnzbd/sabnzbd/__init__.py", line 66, in <module> import sabnzbd.nzbqueue as nzbqueue File "/opt/sabnzbd/sabnzbd/nzbqueue.py", line 37, in <module> import sabnzbd.assembler File "/opt/sabnzbd/sabnzbd/assembler.py", line 40, in <module> import sabnzbd.postproc File "/opt/sabnzbd/sabnzbd/postproc.py", line 41, in <module> import sabnzbd.emailer as emailer File "/opt/sabnzbd/sabnzbd/emailer.py", line 212, in <module> from email.message import Message ImportError: No module named message ben-arch commented on 2010-05-23 16:34 Thank you Anonymous comment on 2010-05-04 19:33 I always have the problem that when i use a username and password to login, the daemon either doesnt start or doesnt stop using the default scripts (when you do add the username and pass it doesnt start if you dont add it it doesnt stop). I manually apply the changes to the scripts everytime but maybe you can update the package. Below you can see what i changed in the scripts. conf.d SABNZBD_USER="sabnzbd" SABNZBD_CONF="/opt/sabnzbd/sabnzbd.ini" # Put the session key from Config > General here SABNZBD_KEY="0ed6ebaab38d00260213f31ee8ff7b2e" # Set to the IP and port sabnzbd is listening on # This is needed to stop sabnzbd properly # If you use a username and password to access sabnzbd use "user:pass@ip" SABNZBD_IP="127.0.0.1" SABNZBD_PORT="8080" SABNZBD_USPW="user:pass@" init #!/bin/bash . /etc/rc.conf . /etc/rc.d/functions . /etc/conf.d/sabnzbd case "$1" in start) stat_busy "Starting SABnzbd" if [ -f /var/run/daemons/sabnzbd ]; then echo -n "Sabnzbd is already running as a daemon!" stat_fail else su - $SABNZBD_USER -c "python /opt/sabnzbd/SABnzbd.py -f $SABNZBD_CONF -s $SABNZBD_IP:$SABNZBD_PORT -d" -s /bin/sh if [ $? -gt 0 ]; then stat_fail else add_daemon sabnzbd stat_done fi fi ;; stop) stat_busy "Stopping SABnzbd" curl -f &> /dev/null if [ $? 
-gt 0 ]; then stat_fail else rm_daemon sabnzbd stat_done fi ;; restart) $0 stop sleep 1 $0 start ;; *) echo "usage: $0 {start|stop|restart}" esac exit 0 Anonymous comment on 2010-05-04 13:14 Thanks Revelation60 and farhany. Used these files with updated PKGBUILD from Revelation and 5.2 install went great. No issues. farhany commented on 2010-05-04 04:13 New build coming up soon as I get my virtual arch installation working. Revelation60 commented on 2010-05-03 10:13 Here is the PKGBUILD for the latest version:. Remember to completely remove the sabnzbd dir before installing!
https://aur.archlinux.org/packages/sabnzbd/?ID=13691&detail=1&comments=all
CC-MAIN-2017-43
refinedweb
19,484
66.54
std::codecvt::length, do_length
From cppreference.com

1) Public member function, calls the member function do_length of the most derived class.
2) Attempts to convert the externT characters from the character array defined by [from, from_end), given initial conversion state state, to at most max internT characters, and returns the number of externT characters that such conversion would consume. Modifies state as if by executing do_in(state, from, from_end, from, to, to+max, to) for some imaginary [to, to+max) output buffer.

Return value

The number of externT characters that would be consumed if converted by do_in() until either all from_end-from characters were consumed or max internT characters were produced, or a conversion error occurred.

The non-converting specialization std::codecvt<char, char, std::mbstate_t> returns std::min(max, from_end-from)

Example

#include <locale>
#include <string>
#include <iostream>

int main()
{
    // narrow multibyte encoding
    // (string literal reconstructed to match the output below:
    //  'z' is 1 byte, 'ß' 2 bytes, '水' 3 bytes, '🍌' 4 bytes in UTF-8)
    std::string s = "zß水🍌";
    std::mbstate_t mb = std::mbstate_t();
    std::cout << "Only the first "
              << std::use_facet<std::codecvt<wchar_t, char, std::mbstate_t>>(
                     std::locale("en_US.utf8")
                 ).length(mb, &s[0], &s[s.size()], 2)
              << " bytes out of " << s.size() << " would be consumed "
                 " to produce the first 2 characters\n";
}

Output:

Only the first 3 bytes out of 10 would be consumed to produce the first 2 characters
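A quick way to see the "non-converting specialization" rule from the Return value section in action. This snippet is an added illustration and is not part of the original page:

#include <locale>
#include <string>
#include <iostream>

int main()
{
    // codecvt<char, char, mbstate_t> performs no conversion, so length()
    // simply counts bytes, capped at the max argument.
    std::string s = "hello";
    std::mbstate_t mb = std::mbstate_t();
    const auto& f = std::use_facet<std::codecvt<char, char, std::mbstate_t>>(
        std::locale::classic());
    std::cout << f.length(mb, &s[0], &s[s.size()], 3) << '\n'; // 3 == min(3, 5)
    std::cout << f.length(mb, &s[0], &s[s.size()], 9) << '\n'; // 5 == min(9, 5)
}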
http://en.cppreference.com/mwiki/index.php?title=cpp/locale/codecvt/length&oldid=71513
CC-MAIN-2014-42
refinedweb
227
52.19
Hey Kodlogs, I hope all of you are doing good. Well, I am struggling with a problem regarding C++ programming. I am trying to write code that should delete the last node in the linked list. For example, if I give an input of 1 -> 2 -> 3 -> 4 -> 5 -> NULL, the output should be 1 -> 2 -> 3 -> 4 -> NULL. Perhaps, I miscalculated the algorithm. So, the question is simple. How can I delete the last node in the linked list? Please, solve my problem with the legit algorithm.

Well, to delete the last node of a linked list, we have to find the second last node and make the next pointer of that node null. The algorithm behind it is:
# Create a Data/Node list
# If the program finds the first node null or there is only one node, then return null
# Walk the linked list until the second last node.
# Delete the last node.
Let's write the program now.

#include <iostream>
using namespace std;

struct Node {
    int data;
    struct Node* next;
};

// Insert a new node at the front of the list
void push(Node** head, int data) {
    Node* node = new Node{data, *head};
    *head = node;
}

// Walk to the second last node and cut off the last one
Node* lastNodeRemove(Node* head) {
    if (head == NULL)
        return NULL;
    if (head->next == NULL) {
        delete head;
        return NULL;
    }
    Node* second_last = head;
    while (second_last->next->next != NULL)
        second_last = second_last->next;
    delete second_last->next;
    second_last->next = NULL;
    return head;
}

int main() {
    Node* head = NULL;
    push(&head, 5);
    push(&head, 4);
    push(&head, 3);
    push(&head, 2);
    push(&head, 1);
    head = lastNodeRemove(head);
    for (Node* temp = head; temp != NULL; temp = temp->next)
        cout << temp->data << " ";
    return 0;
}

The program above should print what you are looking for. Thanks.
https://kodlogs.com/38745/delete-last-node-in-linked-list-c
CC-MAIN-2021-21
refinedweb
224
80.11
There's a difference between defining an object and declaring it. First, multiple declaration is ok, but multiple definition leads to a compilation error. We declare a function by omitting its body like:

int function(int a, int b);

And declare+define it by including its body:

int function(int a, int b){
    return a+b;
}

As stated before, multiple declaration is allowed, thus the following code is correct:

#include <cstdio>

int f();
int f();

int f(){
    return 1;
}

int main (){
    printf("%d\n", f());
    return 0;
}

With variables it's quite different since they don't have a body. The way to declare a variable without defining it consists of prefixing it with the keyword extern:

extern int x;

And to declare+define it, we do it the old way:

int x;

With the same reasoning, the following is ok:

#include <cstdio>

extern int x;
extern int x;

int x = 42;

int main (){
    printf("%d\n", x);
    return 0;
}

Header and source files

In the header file (.hpp), it's important to only declare functions and variables without defining them, especially if we are using the source (.cpp) file. Otherwise we may run into multiple definition errors. Consider the following example:

//some_header.hpp
int x;

//some_source.cpp
#include "some_header.hpp"
int local_function(){
    x=2;
}

//main.cpp
#include "some_header.hpp"
int main(){
    return 0;
}

We may compile them like:

g++ main.cpp some_source.cpp

After macro expansions and linking, it's the same as if we were compiling a single source file like this:

//from some_source.cpp
int x; // #include "some_header.hpp" expanded
int local_function(){
    x = 2;
}

//from main.cpp
int x; // #include "some_header.hpp" expanded
int main(){
    return 0;
}

And here we get x multiply defined. Note that include guards wouldn't solve the problem, since main.cpp and some_source.cpp were expanded and compiled independently.

Further reading
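For completeness, here is a minimal sketch of the usual fix this post is leading up to (same file names as above; this example is an addition and was not part of the original post): declare the variable with extern in the header, and define it in exactly one source file.

//some_header.hpp
extern int x;   // declaration only, no storage is allocated here

//some_source.cpp
#include "some_header.hpp"
int x;          // the one and only definition
int local_function(){
    x = 2;
    return x;
}

//main.cpp
#include "some_header.hpp"
int main(){
    return 0;
}

Now every translation unit that includes the header sees only a declaration, and the linker finds a single definition of x in some_source.cpp, so the program links cleanly.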
https://kuniganotas.wordpress.com/2010/07/15/define-and-declare/
CC-MAIN-2017-34
refinedweb
301
56.86
This article explains the absolute basics of WPF data binding. It shows four different ways to perform the same simple task. Each iteration moves closer to the most compact, XAML-only implementation possible. This article is for people with no experience in WPF data binding.

Programming in WPF involves a lot of data binding. WPF user interfaces typically use much more data binding than most Windows Forms or ASP.NET user interfaces. Most, if not all, data movement in the user interface is accomplished with data binding. This article should help WPF newbies to start thinking in terms of WPF data binding, by showing how to translate a code-only solution into a compact XAML-only solution. This article does not discuss the binding API much. It only discusses what is relevant to the simple example. If you would like to read more about the technical details of WPF data binding, you can read my article about it here.

Throughout this article, we will examine several ways to implement the same simple functionality. Our goal is to create a WPF program that allows us to edit a person’s first and last name. The application should also display that person’s name, formatted as <LastName>, <FirstName>. The formatted name should immediately update whenever the first or last name changes. The user interface should look something like this: [screenshot of the window in the original article]

First, we will not use data binding to implement this. Let’s create a simple class to hold the person’s name:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName
    {
        get { return String.Format("{0}, {1}", this.LastName, this.FirstName); }
    }
}

Next, we declare a simple user interface in XAML. These controls will display the three properties of our Person class. They exist in our application’s main Window:

<!-- The x:Name values below are reconstructed from the code-behind;
     the extracted text had lost the attribute values. -->
<StackPanel>
    <TextBox x:Name="firstNameTextBox" />
    <TextBox x:Name="lastNameTextBox" />
    <TextBlock x:Name="fullNameTextBlock" />
</StackPanel>

Finally, we can write some code in the Window’s code-behind file to manually move the data around as necessary:

Person _person;

// This method is invoked by the Window's constructor.
private void ManuallyMoveData()
{
    _person = new Person { FirstName = "Josh", LastName = "Smith" };

    this.firstNameTextBox.Text = _person.FirstName;
    this.lastNameTextBox.Text = _person.LastName;
    this.fullNameTextBlock.Text = _person.FullName;

    this.firstNameTextBox.TextChanged += firstNameTextBox_TextChanged;
    this.lastNameTextBox.TextChanged += lastNameTextBox_TextChanged;
}

void lastNameTextBox_TextChanged(object sender, TextChangedEventArgs e)
{
    _person.LastName = this.lastNameTextBox.Text;
    this.fullNameTextBlock.Text = _person.FullName;
}

void firstNameTextBox_TextChanged(object sender, TextChangedEventArgs e)
{
    _person.FirstName = this.firstNameTextBox.Text;
    this.fullNameTextBlock.Text = _person.FullName;
}

Bugs are born in this type of code, like a swamp. This implementation requires the UI code to keep track of what controls need to be updated when certain property values change. This forces us to duplicate knowledge of the problem domain in our UI code, which is never a good thing. If we were dealing with a more complex problem domain, this type of code can get very ugly very fast. There must be a better way…

Using the exact same XAML in our Window, let’s rewrite the code-behind so that the controls are data bound to the Person object.
Instead of having the Window’s constructor call the ManuallyMoveData method, as seen before, now it will call this method instead:

private void BindInCode()
{
    var person = new Person { FirstName = "Josh", LastName = "Smith" };

    Binding b = new Binding();
    b.Source = person;
    b.UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged;
    b.Path = new PropertyPath("FirstName");
    this.firstNameTextBox.SetBinding(TextBox.TextProperty, b);

    b = new Binding();
    b.Source = person;
    b.UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged;
    b.Path = new PropertyPath("LastName");
    this.lastNameTextBox.SetBinding(TextBox.TextProperty, b);

    b = new Binding();
    b.Source = person;
    b.Path = new PropertyPath("FullName");
    this.fullNameTextBlock.SetBinding(TextBlock.TextProperty, b);
}

In this version, we are no longer directly assigning values to the Text property of a TextBox or TextBlock. Now we are binding those properties on the controls to a property on the Person object. The Binding class is part of WPF, in fact it is a core piece of all WPF data binding. Setting a Binding object’s Source property indicates the data source of the binding (i.e. where the data comes from). Setting the Path property indicates how to get the bound value from the data source. Setting the UpdateSourceTrigger property to 'PropertyChanged' tells the binding to update as you type, instead of waiting for the TextBox to lose focus before updating the data source.

This seems all well and good, but there is a problem. If you run the program now, the formatted full name will not update when you edit the first or last name. In the previous version the formatted full name updated because we hooked each TextBox’s TextChanged event and manually pushed the new FullName value into the TextBlock. But now all of those controls are data bound, so we cannot do that. What’s the deal?

The WPF data binding system is not magical. It has no way to know that our Person object’s FullName property changes when the FirstName or LastName properties are set. We must let the binding system know that FullName has changed. We can do that by implementing the INotifyPropertyChanged interface on the Person class, as seen below:

public class Person : INotifyPropertyChanged
{
    string _firstName;
    string _lastName;

    public string FirstName
    {
        get { return _firstName; }
        set
        {
            _firstName = value;
            this.OnPropertyChanged("FirstName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string LastName
    {
        get { return _lastName; }
        set
        {
            _lastName = value;
            this.OnPropertyChanged("LastName");
            this.OnPropertyChanged("FullName");
        }
    }

    public string FullName
    {
        get { return String.Format("{0}, {1}", this.LastName, this.FirstName); }
    }

    #region INotifyPropertyChanged Members

    public event PropertyChangedEventHandler PropertyChanged;

    void OnPropertyChanged(string propName)
    {
        if (this.PropertyChanged != null)
            this.PropertyChanged(this, new PropertyChangedEventArgs(propName));
    }

    #endregion
}

Notice that the new implementation of Person does not use automatic properties. Since we need to raise the PropertyChanged event when FirstName or LastName is set to a new value, we must use a normal property and field instead. If we run the app now, the formatted full name updates as we edit the first or last name. This shows that the binding system is listening to the Person object’s new PropertyChanged event. At this point, we have gotten rid of that ugly, bug-prone code in the previous version. Our code-behind has no logic in it that determines when to update which fields.
We still have quite a bit of code. It would be better if we could declare the relationships between controls and data in XAML. That would neatly separate the UI layout and configuration away from the application logic. This is especially appealing if you want to use a design tool, such as Microsoft Expression Blend, to create your user interfaces. Now let’s comment out the second version and see how to move all of this binding code into XAML. In the code-behind we will have the Window’s constructor call this method:

private void BindInXaml()
{
    base.DataContext = new Person { FirstName = "Josh", LastName = "Smith" };
}

The rest of the work is done in XAML. Here is the content of the Window:

<StackPanel>
    <TextBox x:Name="firstNameTextBox">
        <TextBox.Text>
            <Binding Path="FirstName" UpdateSourceTrigger="PropertyChanged" />
        </TextBox.Text>
    </TextBox>
    <TextBox x:Name="lastNameTextBox">
        <TextBox.Text>
            <Binding Path="LastName" UpdateSourceTrigger="PropertyChanged" />
        </TextBox.Text>
    </TextBox>
    <TextBlock x:Name="fullNameTextBlock">
        <TextBlock.Text>
            <Binding Path="FullName" />
        </TextBlock.Text>
    </TextBlock>
</StackPanel>

That XAML uses the property-element syntax to establish bindings for each control’s Text property. It looks like we are setting the Text property to a Binding object, but we’re not. Under the covers, the WPF XAML parser interprets that as a way to establish a binding for the Text property. The configuration of each Binding object is identical to the previous version, which was all in code. Running the application at this point shows that the XAML-based bindings work identically to the code-based bindings seen before. Both examples are creating instances of the same class and setting the same properties to the same values. However, this seems like a lot of XAML, especially if you are typing it by hand. It would be nice if there were a less verbose way to create the same bindings…

Using the same method in the code-behind as the previous example, and the same Person class, we can drastically reduce the amount of XAML it takes to achieve the same goal. The key here is the fact that the Binding class is actually a markup extension. Markup extensions are like a XAML parlor trick allowing us to create and configure an object in a very compact way. We can use them to create an object within the value of an XML attribute. The XAML of the final version of this program is below:

<!-- The Text attributes below are reconstructed from the surrounding description
     (the extracted markup had lost them); they express the same bindings as the
     property-element version above, written with the markup extension syntax. -->
<StackPanel>
    <TextBox x:Name="firstNameTextBox"
             Text="{Binding Path=FirstName, UpdateSourceTrigger=PropertyChanged}" />
    <TextBox x:Name="lastNameTextBox"
             Text="{Binding Path=LastName, UpdateSourceTrigger=PropertyChanged}" />
    <TextBlock x:Name="fullNameTextBlock" Text="{Binding Path=FullName}" />
</StackPanel>

That XAML is almost as short as in the original version. However, in this example there is no plumbing code wiring the controls up to the data source. Most data binding scenarios in WPF use this approach. Using the Binding markup extension feature vastly simplifies your XAML and allows you to spend time working on more important things.

There are many ways to hook a user interface up to data. None of them are wrong, and all of them are useful in certain situations. Most of the time WPF developers use data binding via the convenient markup extension syntax. In more complicated, dynamic scenarios, it can be useful to create bindings in code. I hope that this article has shed some light on the topic, so that you can make an informed decision about how you want to get the job done.
http://www.codeproject.com/KB/WPF/MovingTowardWpfBinding.aspx
crawl-002
refinedweb
1,585
56.55
Scans your Python project for all installed third party pip libraries that are used and generates a requirements.txt based output. This module should be used when no virtual environments are used in a project and you forgot to keep track of your requirements.txt file. You can find the full project and it's source code on GitHub. Using the scanner is incredibly simple. Open a terminal and navigate to your project folder, run the script and watch magic happen before your eyes. You can also easily integrate the scanner code in your own project so you can get the output of the scanner yourself or modify the class to suit your own needs. Usage $ cd ~/projects/my-awesome-project/ $ pip-module-scanner foo==1.0.0 bar==2.1.0 baz==0.0.1 Specifying a custom path You can specify a custom path in which you want to run the script with the -p or --path argument. Example: $ pip-module-scanner --path ~/projects/my-awesome-project/ foo==1.0.0 bar==2.1.0 baz==0.0.1 Output redirection You can write the output of the script to a file by using the -o or --out argument. Example: $ cd ~/projects/my-awesome-project/ $ pip-module-scanner -o requirements.txt $ cat requirements.txt foo==1.0.0 bar==2.1.0 baz==0.0.1 Installation Installing the scanner is easy, either clone the repository and run the script or install it via pip like so: $ pip install pip-module-scanner Integrating the code in your project You can easily integrate the scanner code in your own project so you can get the output of the scanner yourself or modify the class to suit your own needs. To do this, you can use it like so: from pip_module_scanner.scanner import Scanner scanner = Scanner() scanner.run() # do whatever you want with the results here # example: for lib in scanner.libraries_found: print ("Found module %s at version %s" % (lib.key, lib.version)) Specifying a path would work like so, make sure to also import the ScannerException as it will check if the path you specified is actually a real path: from pip_module_scanner.scanner import Scanner, ScannerException try: scanner = Scanner(path="~/projects/my-awesome-project/") scanner.run() # do whatever you want with the results here # example: for lib in scanner.libraries_found: print ("Found module %s at version %s" % (lib.key, lib.version)) except ScannerException as e: print("Error: %s" % str(e)) For the one-liner junkies out there (like me) you can also get all libraries with this nifty little one-liner (I'm so considerate) from pip_module_scanner.scanner import Scanner libs = Scanner().run().libraries_found # Isn't it beautiful?.
https://www.paradoxis.nl/projects/pip-module-scanner
CC-MAIN-2017-51
refinedweb
449
56.25
This, help others, and be more efficient and productive, we all will keep them working and improving. And that's what we are all doing. 🤓🚀 Intro You might have heard not long ago about PEP 563, PEP 649, and some changes that could affect Pydantic and FastAPI in the future. If you read about it, I wouldn't expect you to understand what all that meant. I didn't fully understand it until I spent hours reading all the related content and doing multiple experiments. It might have worried you and maybe confuse you a bit. Now there's nothing to be worried about. But still, here I want to help clarify all that and give you a bit more context. Brace yourself, you are about to learn a bit more about how Python works, how FastAPI and Pydantic work, how type annotations work, and more. 👇 Details Start with a basic FastAPI app FastAPI is based on Pydantic. Let's see a simple example using them both. Imagine that we have a file ./main.py with the following code: from typing import Optional import uvicorn from fastapi import FastAPI from pydantic import BaseModel class Item(BaseModel): name: str description: Optional[str] = None price: float app = FastAPI() @app.post("/items/") def create_item(item: Item): return item if __name__ == "__main__": uvicorn.run(app) You could run this example and start the API application with: $ python ./main.py INFO: Started server process [4418] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on (Press CTRL+C to quit) Then you could open your browser and interact with the API docs at, etc. But here we want to focus on what happens behind the scenes. Note: Instead of using the last two lines, you could have used the uvicorn command, and that's what you would normally do. But for this example, it will be useful to see everything from the point of view of the python command. How Python works By running that command above, you are asking your system to start the program called python. And to give it the file main.py as a parameter. Note: In Windows, the program might be called python.exe instead of just python. That program called python (or python.exe) is written in another programming language called "C". Maybe you knew that. And what that program python does is read the file main.py, interpret the code that we wrote in it using the Python Programming Language, and execute it step by step. So, we have two things with more or less the same name "python" that represent something slightly different: python: the program that runs our code (which is actually written in the C programming language) - "Python": the name of the programming language we use to write our code So, you could say that python (the program) can read Python (the programming language). What is Runtime Now, when that program python is executing our code written in the Python programming language, we call that "runtime". It's just the period of time when it is executing our code. When our code is not being executed, for example, when we are editing the file ./main.py, it is not running, so we are not at runtime. The way that program works is that, at runtime (when our code is being executed), Pydantic and FastAPI read those type annotations (or type hints) to extract their data and do things with it. So, for example, in the Item class above, we have: class Item(BaseModel): name: str description: Optional[str] = None price: float At runtime, Pydantic and FastAPI will see that name is a str and price is a float. 
And if we send a JSON request with a price that is not a float, they will be able to validate the data for us. FastAPI and Pydantic are written in pure Python. How can these tools do that? Python is so powerful that it has features to allow exactly that, to read type annotations at runtime from the same Python code. And Pydantic and FastAPI take advantage of those features. Another term commonly used to refer to doing things at runtime is to do things dynamically. What is Static Analysis The counterpart of runtime would be static. It just means that the code is not being executed. It's treated just as a text file containing code. In many cases, "static" is used when saying Static Analysis, Static Checking, Static Type Checking, etc. It refers to tools that understand the rules of the Python Programming Language and that can analyze the code, but that doesn't execute the code itself. These tools for static analysis can check if the code is following the rules correctly, checking that the code is valid, providing autocompletion, and other features. When you are editing code and your editor shows a squiggly red line with an error somewhere, that is static analysis. In some cases, the code could be valid, but it would still be incorrect. For example, if you try to add a str and a float together: name = "Rick" price = 1.99 total = name + price In terms of the rules of the language itself, the code is valid, all the quotes are where they should be, the equal signs are correctly placed, etc. But this code is still incorrect and will not work because you can't add a str with a float. Many editors will be able to show you a very valuable squiggly red line with the error message under name + price that might save you hours debugging. That is also static analysis. Some tools that do static analysis and that you might have heard of are: mypy, the official and main Static Type Checker flake8, checks for style and correctness black, autoformats the code in a consistent way that improves efficiency - PyCharm, one of the most popular Python editors, has internal components that do static analysis to check for errors, provide autocompletion, etc. - VS Code, the other of the most popular Python editors, using Pylance, also has internal tools to do static analysis to check for errors, provide autocompletion, etc. These tools have saved tons of development hours by detecting many bugs earlier in the development process and in the exact place where those errors happened. I bet that in many cases you might have seen the red line, realize what the error is, think "ah, yeah, right", fix it, and not even consider that there was a bug in your code, even for some seconds. If I counted all the times these tools have saved me from these bugs, I would get overwhelmed quickly. 😅 And if you have ever added type annotations to a code base that didn't have them before, you probably would have seen lots of broken sections in the code base and broken corner cases, that were suddenly obvious and you could then fix them. I surely have. Type Annotations in Python The Type Annotations (also called Type Hints) that we have available in all the supported modern Python versions (Python 3.6 and above) were designed to improve all that static analysis. The original intention was to allow mypy and others to help developers while writing the code. And that was the main focus for a while. 
But then tools like dataclasses (from the standard library) and Samuel Colvin's Pydantic started using these type annotations to do more than only static analysis, and to use these same type annotations at runtime. In the case of Pydantic, to extract that information to do data conversion, validation, and documentation. Type Annotations with Forward References Now, imagine we have a class (it could be a Pydantic model) like this: from typing import Optional from pydantic import BaseModel class Person(BaseModel): name: str child: Optional[Person] = None Here we have a Person that could have a child, that would also be a Person. It all looks fine, right? But now when we run the code (or with the help of some static analysis in editors) we will see that we declared child: Optional[Person] inside the body of the class Person. So, when that part of the code is run by python, the Person inside of name: Optional[Person] doesn't exist yet (that class is still being created). This is called a Forward Reference. And it would make the code break. And again, the main purpose of these type annotations was to help with static analysis. Using them at runtime was not yet an important use case. And having the code break just because we are trying to improve static analysis would be very annoying. To overcome that problem, it's also valid to declare that internal Person as a literal string, like this: from typing import Optional from pydantic import BaseModel class Person(BaseModel): name: str child: Optional["Person"] = None That looked weird to me when I discovered it. It's the name of a class just put there inside a string. But it's valid. When python is running, it will see that as a literal string, so it will not break. And most static analysis tools know this is valid and will read the literal string and understand that it actually refers to the Person class. By knowing that the Optional["Person"] actually refers to the Person class, static analysis tools can, for example, detect that this would be an error: parent = Person(name="Beth") parent.child = 3 A smart editor will use its static analysis tools to detect that parent.child = 3 is an error because it expects a Person. This solves the problem of the forward reference in the code and allows us to still use static analysis tools. ...we are not talking about using these type annotations at runtime yet, but we'll get there later. PEPs in Python PEP stands for Python Enhancement Proposal. A PEP is a technical document describing changes to Python, additions to the standard library (for example, adding dataclasses), and other types of changes. Or in some cases, they just provide information and establish conventions. The name says Proposal, but when they are finally accepted they become a standard. PEP 563 - Postponed Evaluation of Annotations Knowing what's a PEP, let's go back to the code example above. If you hadn't seen something like the Optional["Person"] part before, you might have cringed a bit. I did the first time I discovered that was valid, but it was understandable as it would solve the problem. Then Łukasz Langa had a smart idea and wrote PEP 563. If the way type annotations were interpreted changed, and if they were implicitly understood by Python as if they were all just strings, then we would not have to put all those classes inside strings in strange places in our code. 
So, we would write our code like: from typing import Optional from pydantic import BaseModel class Person(BaseModel): name: str child: Optional[Person] = None And then whenever python read our file ./main.py it would see it as if it was written like this: from typing import Optional from pydantic import BaseModel class Person(BaseModel): name: "str" child: "Optional[Person]" = None So, python would run our code happily and without breaking. And we, the developers would be much happier not having to remember where to put things inside strings and where not. And we would be able to keep using autocompletion and type checks even in these type annotations with forward references. For example, triggering autocompletion inside a string, with the previous technique, might not always work, but with this change that wouldn't be a problem anymore. And in the case that some tool ended up using these type annotations at runtime for other reasons, there were still ways to get the information at runtime, with some small caveats, but it was still possible. Spoiler Alert: These small caveats are what later would become a cumbersome problem for Pydantic, but we'll get there. Note: Have in mind that this was done several years ago, in fact, the same year Pydantic was released for the first time. Using type annotations at runtime for other purposes than static analysis was not a common use case if at all. It's remarkable that it was even accounted for. Now, as this would change the behavior of Python internally in a more or less drastic way, it would not be enforced by default yet. Instead, it was made available using a special import, from __future__ import annotations: from __future__ import annotations from typing import Optional from pydantic import BaseModel class Person(BaseModel): name: str child: Optional[Person] = None And as now these type annotations were treated as just strings, it allowed some interesting tricks when using them only for static analysis, like using typing features from future versions of Python in previous versions. For example, declaring Person | None instead of Optional[Person], avoiding the extra Optional and the extra import, even in Python 3.7 (that feature is available in Python 3.10 but not in Python 3.7): from __future__ import annotations class Person: name: str child: Person | None = None Note: Have in mind that this would only work for static analysis tools, your editor could understand that even in Python 3.7, but Pydantic wouldn't be able to use it and wouldn't work correctly. This has been there, available since Python 3.7. And that behavior was planned to be the default for Python 3.10 onwards (not now, but keep reading). Pydantic and PEP 563 Now, forward to the present, a couple of months ago. Pydantic already has some support for using from __future__ import annotations in the code as made possible by PEP 563. And in many cases, it works fine. For example, this works: from __future__ import annotations from typing import Optional from fastapi import FastAPI from pydantic import BaseModel # ✅ Pydantic models outside of functions will always work class Item(BaseModel): name: str description: Optional[str] = None price: float app = FastAPI() @app.post("/items/") def create_item(item: Item): return item But there are some caveats that wouldn't work. 
For example, this doesn't work: from __future__ import annotations from typing import Optional from fastapi import FastAPI from pydantic import BaseModel def create_app(): # 🚨 Pydantic models INSIDE of functions would not work class Item(BaseModel): name: str description: Optional[str] = None price: float app = FastAPI() @app.post("/items/") def create_item(item: Item): return item return app app = create_app() If you run that code, you would get a disconcerting error: NameError: name 'Item' is not defined To solve it in this case, you could move the Item class outside of the function. And there are some other similar corner cases. These types of disconcerting problems would be especially inconvenient for newcomers to Python (and probably to many experienced Python developers as well), as the problem is not obvious at all for someone that doesn't know the internals (it wasn't obvious to me, and I built FastAPI and Typer 😅). Python is an example of a very inclusive global tech community, welcoming newcomers from all around the world, from many disciplines. It is being used to solve the most complex problems, including taking pictures of black holes, running drones on Mars, and building the most sophisticated artificial intelligence systems. But at the same time, it's many people's first programing language for its ease of use and its simplicity. And many Python developers don't even consider themselves "developers", even while they use it to solve problems. So, having an inconvenience like this by default would not be ideal. There are other caveats but I don't want to go deeper into the technical details than I already have. You can read more about them on the Pydantic issue, the mailing list thread, and Łukasz's detailed explanation. PEP 649 - Deferred Evaluation Of Annotations Using Descriptors Recently, Larry Hastings that had been working on an alternative to PEP 563, PEP 649, contacted Samuel Colvin (Pydantic's author) and me (author of FastAPI and Typer), as suggested by Brett Cannon (from the Python Steering Council), to see if and how those changes would affect us. We realized that the changes from PEP 563 (the other one) would be permanently added to Python 3.10 (not requiring the from __future__ import annotations), and the caveats and problems still didn't have a solution. Suddenly it was also clear that these use cases of using type annotations at runtime instead of only for static analysis were not an obvious use case for everyone involved, including the same Larry Hastings who was working on what would be a potential solution for these use cases. Asking for Reconsideration Sadly, we realized all this very late, only weeks before these changes would be set in stone in Python 3.10 (in the end they weren't). Nevertheless, we showed our concerns. If you read about all this before, that's probably why. It was shared a lot, and it got a bit out of hand. And sadly, there were some radical comments attacking several of the parts involved (the Python Steering Council, us, etc), as if it was a fight between different groups. 😕 In reality, we are just one big group, the Python Community, and we are all trying to do the best for all of us. Sadly, all this sudden friction brought a lot of increased stress to all the parties involved. To the Python Steering Council, Core Python Developers, and us, library authors. Fortunately, everything came out well in the end. 
Here's a big shoutout to Carol Willing that, despite the added stress generated for her and everyone else involved, she helped a lot reconciling different points of view, reducing the friction, and calming down all the situation. That capacity of acknowledging and adopting other's points of view is priceless. We need more Carol Willings in the world. 🤓 Python Steering Council decision In case you didn't know, the decision of what goes into Python and what doesn't is done by the Python Steering Council. It is currently formed by: - Barry Warsaw - Brett Cannon - Carol Willing - Pablo Galindo Salgado - Thomas Wouters Now, back to the story, after a couple of days of that previous discussion, during the next Python Steering Council meeting, they unanimously decided to roll back the decision of making these type annotations as strings (as described in PEP 563) being the default behavior. Having those string type annotations by default in Python 3.10 had been decided some time ago, and rolling that change back only weeks before the "feature freeze" (the moment where no more changes are accepted into the next version) was a big decision, involving a lot of extra stress and effort. Nevertheless, they took the decision in order to support the community of users of FastAPI, Pydantic, and other libraries using these features: We can’t risk breaking even a small subset of the FastAPI/pydantic users, not to mention other uses of evaluated type annotations that we’re not aware of yet. This, again, shows the strong commitment of the Python community, starting from the Steering Council, to be inclusive, and supportive of all users, with different use cases. Here's another big shoutout to Pablo Galindo, who took all the extra work to perform all the last-minute changes, and even voted in favor of them. What's Next The decision was to keep the current behavior, of allowing from __future__ import annotations in the code, as defined by PEP 563, but not as the default behavior. This will provide enough time to find a solution or an alternative that works for all the use cases, including Pydantic, FastAPI, and also the use cases that are interested exclusively in static analysis. This is the best possible outcome for everyone. 🎉 It gives enough time to find an alternate solution and it avoids hurried decisions with little time that could have unknown negative effects. Who cares about FastAPI and Pydantic Now, in general, how does the future of FastAPI and Pydantic look like? Who cares about them? FastAPI, using Pydantic, was included for the first time in the last Python Developer Survey, and despite being the first year in it, it was already ranked as the third most popular web framework, after Flask and Django. This shows that it's being useful for many people. It was also included in the latest ThoughtWorks Technology Radar, as one of the technologies that enterprises should start trying out. FastAPI and Pydantic are currently being used by many products and organizations, from the biggest ones you've heard of, to the smallest teams, including solo developers. Several popular and widely used cloud providers, SaaS tools, databases, etc. are adding documentation, tutorials, and even improving their offers to better serve the FastAPI users. The most popular code editors for Python, PyCharm and Visual Studio Code, have been working on improving their support for FastAPI and Pydantic. I have even talked to both teams directly. 
🤓 This is particularly interesting because FastAPI was designed to have the best support from editors, to provide the best developer experience possible. FastAPI and Pydantic use almost exclusively standard features of the language. When editors improve their support (even more) for these tools, they are actually improving their support for the features of the language itself. And this benefits many other use cases apart from FastAPI and Pydantic. Conclusion Python is a great community. We are all trying to make it better for all of us, from the Steering Council and Core Developers to library authors and even those who help others using these libraries. FastAPI and Pydantic are part of this community that includes and supports everyone, with all their use cases. And that's the main reason why the future of FastAPI and Pydantic is so bright. Because the future of Python is bright. We all make this future. ✨ Thanks Thanks to everyone involved in finding a solution and improving the Python community. 🙇 And special thanks to: for their review and feedback on this article before publishing. About me Hey! 👋 I'm Sebastián Ramírez (tiangolo). You can follow me, contact me, see what I do, or use my open source code: Discussion (3) I've been using fastapi professionally, its been a joy to work with. Thanks for your hard work. Nice article Glad that all worked out! I found fastapi apparently very early on and love it. Time to MVP is insanely quick developing on such a wonderfully pieced together library!
https://dev.to/tiangolo/the-future-of-fastapi-and-pydantic-is-bright-3pbm
CC-MAIN-2021-31
refinedweb
3,775
60.45
Results 1 to 1 of 1 - Join Date - Dec 2010 - Location - United States of America - 505 - Thanks - 39 - Thanked 47 Times in 46 Posts Problem with "getopt" in Linux (GCC) Hi all, I have this little snippet of test code: Code: #include <ctype.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <getopt.h> int main (int argc, char **argv) { int c, w, aflag, bflag, cflag; c = w = aflag = bflag = cflag = 0; opterr = 0; while ((c = getopt (argc, argv, "abc")) != -1) { w++; /* debug count how many times we go through */ switch (c) { case 'a': aflag = 1; break; case 'b': bflag = 1; break; case 'c': cflag = 1; break; case '?': fprintf(stdout, "Help shown here\n"); return 1; default: fprintf(stdout, "Default: Should not see\n"); return 1; } } fprintf(stdout, "While loop count = %d\n", w); for (c = 0; c < argc; c++) { fprintf(stdout, "argv[%d] = %s\n", c, argv[c]); } return 0; } Running the code with an invalid option SHOULD simply print the help line, yet I get this: root@michael:~/c-progs# ./getopt -x ./getopt: invalid option -- 'x' Help shown here The line in red isn't supposed to display. I tried setting "opterr" to 1, -1 and leaving it out... no difference. What the heck? Anyone know? Thanks. -- Roger"Anything that is complex is not useful and anything that is useful is simple. This has been my whole life's motto." -- Mikhail T. Kalashnikov
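No resolution is shown in the thread, but one detail worth adding from the POSIX description of getopt(): the "invalid option" diagnostic is printed only when opterr is non-zero and the option string does not start with ':'. Setting opterr = 0, as the poster does, is one documented way to silence it; beginning the option string with a colon is the other, and it additionally makes a missing option argument show up as ':' instead of '?'. A minimal sketch of the colon variant (illustrative only, not the poster's program):

#include <stdio.h>
#include <unistd.h>

int main (int argc, char **argv)
{
    int c;
    /* The leading ':' asks getopt to stay quiet and report problems
       only through its return value and the optopt variable. */
    while ((c = getopt (argc, argv, ":abc")) != -1) {
        switch (c) {
        case 'a': case 'b': case 'c':
            printf ("option -%c\n", c);
            break;
        case '?':
            fprintf (stderr, "Help shown here (unknown -%c)\n", optopt);
            return 1;
        }
    }
    return 0;
}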
http://www.codingforums.com/computer-programming/246034-problem-getopt-linux-gcc.html
CC-MAIN-2017-17
refinedweb
237
80.01
Java Byte Code is the language to which Java source is compiled and the Java Virtual Machine understands. Unlike compiled languages that have to be specifically compiled for each different type of computers, a Java program only needs to be converted to byte code once, after which it can run on any platform for which a Java Virtual Machine exists. Bytecode is the compiled format for Java programs. Once a Java program has been converted to bytecode, it can be transferred across a network and executed by Java Virtual Machine (JVM). Bytecode files generally have a .class extension. It is not normally necessary for a Java programmer to know byte code, but it can be useful. Other LanguagesEdit There are a number of exciting new languages being created that also compile to Java byte code, such as Groovy. - GNAT - The GNU Ada-Compiler, is capable of compiling Ada into Java-style bytecode. - - JPython - Compiles Python to Java-style bytecode. - - Kawa - Compiles Scheme to Java-style bytecode. - ExampleEdit Consider the following Java code. outer: for (int i = 2; i < 1000; i++) { for (int j = 2; j < i; j++) { if (i % j == 0) continue outer; } System.out.println (i); } A Java compiler might translate the Java code above into byte code as follows, assuming the above was put in a method: Code: 0: iconst_2 1: istore_1 2: iload_1 3: sipush 1000 6: if_icmpge 44 9: iconst_2 10: istore_2 11: iload_2 12: iload_1 13: if_icmpge 31 16: iload_1 17: iload_2 18: irem # remainder 19: ifne 25 22: goto 38 25: iinc 2, 1 28: goto 11 31: getstatic #84; //Field java/lang/System.out:Ljava/io/PrintStream; 34: iload_1 35: invokevirtual #85; //Method java/io/PrintStream.println:(I)V 38: iinc 1, 1 41: goto 2 44: return Example 2Edit As an example we can write a simple Foo.java source: public class Foo { public static void main(final String[] args) { System.out.println("This is a simple example of decompilation using javap"); a(); b(); } public static void a(){ System.out.println("Now we are calling a function..."); } public static void b(){ System.out.println("...and now we are calling b"); } } Compile it and then move Foo.java to another directory or delete it if you wish. What can we do with javap and Foo.class ? $javap Foo produces this result: Compiled from "Foo.java" public class Foo extends java.lang.Object { public Foo(); public static void main(java.lang.String[]); public static void a(); public static void b(); } As you can see the javac compiler doesn't strip any (public) variable name from the .class file. As a result the names of the functions, their parameters and types of return are exposed. (This is necessary in order for other classes to access them.) Let's do a bit more, try: $javap -c Foo Compiled from "Foo.java" public class Foo extends java.lang.Object{ public Foo(); Code: 0: aload_0 1: invokespecial #1; //Method java/lang/Object."<init>":()V 4: return public static void main(java.lang.String[]); Code: 0: getstatic #2; //Field java/lang/System.out:Ljava/io/PrintStream; 3: ldc #3; //String This is a simple example of decompilation using javap 5: invokevirtual #4; //Method java/io/PrintStream.println:(Ljava/lang/String;)V 8: invokestatic #5; //Method a:()V 11: invokestatic #6; //Method b:()V 14: return public static void a(); Code: 0: getstatic #2; //Field java/lang/System.out:Ljava/io/PrintStream; 3: ldc #7; //String Now we are calling a function... 
5: invokevirtual #4; //Method java/io/PrintStream.println:(Ljava/lang/String;)V 8: return public static void b(); Code: 0: getstatic #2; //Field java/lang/System.out:Ljava/io/PrintStream; 3: ldc #8; //String ...and now we are calling b 5: invokevirtual #4; //Method java/io/PrintStream.println:(Ljava/lang/String;)V 8: return } The Java bytecodes See Oracle's Java Virtual Machine Specification[1] for more detailed descriptions. The manipulation of the operand stack is notated as [before]→[after], where [before] is the stack before the instruction is executed and [after] is the stack after the instruction is executed. A stack with the element 'b' on the top and element 'a' just after the top element is denoted 'a,b'.
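To make that notation concrete, here is a small sketch (not taken from the Wikibook): a two-int add method and the stack effect of each instruction written in the [before]→[after] form described above, assuming a plain instance method where local variable 0 holds this:

int add(int a, int b) { return a + b; }

iload_1    // ... → ..., a           push local variable 1 (a)
iload_2    // ..., a → ..., a, b     push local variable 2 (b)
iadd       // ..., a, b → ..., a+b   pop two ints, push their sum
ireturn    // ..., a+b → (empty)     return the int on top of the stack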
http://en.m.wikibooks.org/wiki/Java_Programming/Byte_Code
CC-MAIN-2015-18
refinedweb
703
56.76
To read all comments associated with this story, please click here. Indigo = Webobjects Avalon = InterfaceBuilder Both have been around for a very long time. Of course if you mean "OSX doesn't" in the terms of .NET integration, you're right, but why would you want that if you weren't developing on/for a MS platform? Avalon will be nothing more than a gussied up window designer ala VS.NET's dialog editor, except with 5000 more options, and much more arcane xml syntax to go along with it. And of course, it will be in all managed code rather than native (like WO and IB/Xcode). A true accomplishment. Indigo is basically an API to generate web service applications. Its basically the next incarnation of Web Services in the .Net Framework. From Microsoft: Advanced Web services support in Indigo provides secure, reliable, and transacted messaging along with interoperability. Indigo's service-oriented programming model is built on the Microsoft .NET Framework and simplifies development of connected systems. Indigo unifies a broad array of distributed systems capabilities in a composable and extensible architecture, spanning transports, security systems, messaging patterns, encodings, network topologies and hosting models. Indigo will be available for Windows "Longhorn" as well as for Windows XP and Windows Server 2003. See:... I've been using Indigo for a while now; having been exposed to WebObjects in the past I can safely say that Indigo is much broader and deeper in scope than WebObjects. Indigo is much more than web services; it's a fully managed library that wraps up MSMQ, Remoting, Web Services, Sockets, and some aspects of COM+ into a single unified programming interface/API rather than the disparate namespace mess we have now. It's pretty impressive actually. Of course it won't have much impact on a normal home user, but from a development standpoint it's a godsend (for .Net at least); businesses will gain a lot from this. Member since: 2005-07-09 I don't think you understand perfectly. Longhorn will do much more than OSX. Does OSX have something like Avalon? No. How about something like Indigo? No. Those are two huge things Longhorn has that OSX doesn't. And there are many more little things.
http://www.osnews.com/thread?4264
CC-MAIN-2015-32
refinedweb
376
57.57
How to make a point system I was wondering how to make a point system. I've tried using a "Point" variable and increasing it by one from its current value every time the player wins, but it seems to just show 1 every time, since at the beginning I set points to zero. Even then, I can't seem to find a way to make a point system. Answered by LukeWright (114) [earned 5 cycles] Voters Muffinlavania (1510) If you want to change the Point variable in a function (anything with def), you are going to have to global it at the top of the function. So for example, if you want to change points in the declare function, just put this one line at the start: def declare(winner, playpoints, player_input, player_done): global playpoints That way you can edit the variable in the function, which should just work with your commented lines on 149, 150, 155, and 156 Since you are not using global variables or classes, the values revert to the last thing you set it to after the function is done. To combat this, I usually use classes or global variables. To use global variables, simply put global points at the top of each function that changes it. This ensures that the variables that you change inside of the functions stay the value that you changed it to. To use classes simply do: class Score: points = 0 To call a class variable, simply do Score.points @LioxLynx Click the checkmark next to my comment if this helped! actually, the variable doesn't revert back to what it was, you just create another variable inside of the function, they are actually totally different variables. @LukeWright Yes @InvisibleOne
https://replit.com/talk/ask/How-to-make-a-point-system/131196
CC-MAIN-2021-17
refinedweb
280
57.64
Hello everyone, Yhc now includes support for concurrency! The interface is the same as Concurrent GHC, so for example the following is a concurrent Yhc program: ------------------------------------------------------------------ module Fair where import Control.Concurrent import Control.Concurrent.MVar consumer :: MVar () -> Char -> IO () consumer mv c = do _ <- takeMVar mv putChar c consumer mv c producer :: MVar () -> Int -> IO () producer mv 0 = return () producer mv n = do putMVar mv () producer mv (n-1) main :: IO () main = do mv <- newEmptyMVar _ <- forkIO (consumer mv 'A') _ <- forkIO (consumer mv 'B') _ <- forkIO (consumer mv 'C') producer mv 1000 putStrLn "" ------------------------------------------------------------------ Currently only Control.Concurrent Control.Concurrent.MVar Control.Concurrent.QSem are implemented, however all the rest can easily be written in Haskell in terms of MVars. Because the introduction of concurrency has changed the way stacks work for *all* of Yhc it is possible some bugs have been introduced. The concurrent yhc implementation passes all the unit tests that the single threaded yhc passed, but of course unit tests don't cover all cases. If you find your previously working single threaded programs are now breaking please submit a BUG REPORT to the list :-) Also concurrency support is still new and relatively untested so you might find some concurrent programs that segfault/crash/lock up/etc. ALL BUG REPORTS HIGHLY WELCOME! NOTE: the Windows release for concurrency isn't quite ready but it should be quite soon (thanks Neil for handling this). Anyway, enjoy :-) Tom
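To illustrate the remark above that the remaining Control.Concurrent pieces can be written in Haskell in terms of MVars, here is a minimal mutual-exclusion lock built only from an MVar. It is a sketch (no exception safety), not part of the Yhc libraries:

module Lock (Lock, newLock, withLock) where

import Control.Concurrent.MVar

type Lock = MVar ()

newLock :: IO Lock
newLock = newMVar ()

-- Take the lock, run the action, then release the lock.
-- (No exception handling, for brevity.)
withLock :: Lock -> IO a -> IO a
withLock lock action = do
    takeMVar lock
    result <- action
    putMVar lock ()
    return result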
http://www.haskell.org/pipermail/yhc/2006-March/000085.html
CC-MAIN-2014-15
refinedweb
240
50.67
An SE(2) state space where distance is measured by the length of Dubins curves. More... #include <ompl/base/spaces/DubinsStateSpace.h> Detailed Description An SE(2) state space where distance is measured by the length of Dubins curves. Note that this Dubins distance is not a proper distance metric, so nearest neighbor methods that rely on distance() being a metric (such as ompl::NearestNeighborsGNAT) will not always return the true nearest neighbors or get stuck in an infinite loop. The notation and solutions in the code are taken from: A.M. Shkel and V. Lumelsky, “Classification of the Dubins set,” Robotics and Autonomous Systems, 34(4):179-202, 2001. DOI: 10.1016/S0921-8890(00)00127-5 The classification scheme described there is not actually used, since it only applies to “long” paths. Definition at line 64 of file DubinsStateSpace.h. Member Data Documentation ◆ dubinsPathType Dubins path types. Definition at line 75 of file DubinsStateSpace.h. ◆ isSymmetric_ Whether the distance is "symmetrized". If true the distance from state s1 to state s2 is the same as the distance from s2 to s1. This is done by taking the minimum length of the Dubins curves that connect s1 to s2 and s2 to s1. If isSymmetric_ is true, then the distance no longer satisfies the triangle inequality. Definition at line 156 of file DubinsStateSpace.h. The documentation for this class was generated from the following files: - ompl/base/spaces/DubinsStateSpace.h - ompl/base/spaces/src/DubinsStateSpace.cpp
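A minimal usage sketch, assuming the usual OMPL SE2 accessors (setXY/setYaw) and the DubinsStateSpace(turningRadius, isSymmetric) constructor; the bounds, radius, and poses are arbitrary illustration values:

#include <ompl/base/spaces/DubinsStateSpace.h>
#include <ompl/base/ScopedState.h>
#include <cmath>
#include <iostream>
#include <memory>

namespace ob = ompl::base;

int main()
{
    // Turning radius 0.5, "symmetrized" distance (see isSymmetric_ above).
    auto space = std::make_shared<ob::DubinsStateSpace>(0.5, true);

    ob::RealVectorBounds bounds(2);
    bounds.setLow(-10.0);
    bounds.setHigh(10.0);
    space->setBounds(bounds);

    ob::ScopedState<ob::SE2StateSpace> from(space), to(space);
    from->setXY(0.0, 0.0);
    from->setYaw(0.0);
    to->setXY(3.0, 1.0);
    to->setYaw(M_PI / 2.0);

    // Length of the shortest Dubins curve between the two poses.
    std::cout << space->distance(from.get(), to.get()) << std::endl;
    return 0;
}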
http://ompl.kavrakilab.org/classompl_1_1base_1_1DubinsStateSpace.html
CC-MAIN-2017-22
refinedweb
247
57.67
Do domain names matter? Expand Messages - Below is my essay on the state of domain names, and generally the idea of naming entities online. You may recognize some of the text below from some of the discussions we had here a few months back. For a permalink (with links), go to . ------ July 25, 2003 Is it just me, or are we paying less attention to the Domain Name System than we used to? Seems like only a few years ago that the tech-culture world was attuned to every new angle in the ongoing struggle over the DNS' management. You couldn't read the front page of Slashdot without catching one heavily commented-upon story on alternate registries, trademark disputes, or the latest ICANN board meeting. But today? Hardly a peep. Not because the problems have magically solved themselves: The MPAA, for example, just sent a cease-and-desist letter to a blogger with the domain name. But a story like this won't draw the same attention it would have before. And by the way, what ever happened to those new top-level domains, like .biz, .info, and .name? Some of those are two years old and wide open for business—homesteads desperate for homesteaders. This could be simply a temporary development. If the economy picks up we might see an uptick in the number of dot-coms suing hapless webmasters, and our outrage might rise accordingly. Or maybe we've just been exceptionally distracted of late: ICANN pales in comparison to the new crop of acronyms—MPAA, RIAA, DMCA, TIA, USA PATRIOT—menacing us today. But perhaps these trends obscure a deeper shift. At the beginning of the boom, the vast quantity of people and organizations online outstripped our ability to find them, and we pressed the DNS into service to help fill that gap. But this usage of the centralized, permanent DNS conflicted with the common-sense methods that people use to name things in their everyday lives, and as the internet continues to decentralize this dissonance only grows stronger. The conflict is being alleviated not by technical or political reform at the center of the network, but by innovation at its edges. As end-user applications mature, they increasingly allow individuals to develop and share their own naming systems—not to destroy the DNS, but to render it irrelevant. JUST ANOTHER PYRAMID SCHEME? The reasons that the DNS started to crumble under the pressure of commercialization have already been well documented. Writing in 1998, Ted Byfield noted that the DNS was never designed for that pressure in the first place: DNS was built around the structurally conservative assumptions of a particular social stratum: government agencies, the military, universities, and their hybrid organizations—in other words, hierarchical institutions subject to little or no competition. These assumptions were built into DNS in theory, and they guide domain-name policy in practice to this day—even though the commercialization of the Net has turned many if not most of these assumptions upside down. One of the assumptions Byfield is referring to is the notion that name collisions could be greatly reduced by dividing the namespace into top-level domains and trusting that everybody would calmly accept their place in that hierarchy. But as domain names became associated with trademarks, common usage flattened these tidy divisions into one undifferentiated sprawl. 
Corporations saw the web as one more front in the battle for marketing and public relations—like television, only with a keyboard—and accordingly they didn't care much for quaint rules written by computer scientists. So when, say, Archie Comics sued a California man for registering veronica.org in honor of his daughter, arguments that the .org TLD didn't belong to companies fell on deaf ears. (Online protest eventually succeeded where quoting RFCs had failed, although today veronica.org redirects to SamsDirect.) Many reformers aimed for a political solution, appealing to ICANN to keep the DNS safe for bit players. They felt that in kowtowing to the corporations, ICANN was bastardizing the simplicity of the system that Jon Postel had managed until handing it off in 1998. (When the final draft proposal for ICANN was finished, Wired called Postel "the Internet's own Obi-Wan Kenobi"—a phrase that would attain an eerie resonance with the Slashdot crowd when Postel passed away a month later and ICANN revealed itself to be a bit of an Evil Empire.) But would even a perfectly managed DNS have functioned in accordance with its earlier hierarchical vision? The hierarchy made sense to the users of the early internet, but the noisier commercialized internet would have fit much less comfortably into such a scheme. Even if you could've assigned every person, place, and thing its proper slot, most people would not have bothered to learn what went where. Take, for example, the .museum TLD, which has been open since 2001. Most prominent museums have avoided using this TLD; the Whitney, the Guggenheim, and the Museum of Modern Art all place their primary domain names under .org. Conceptually, .museum muddies the waters because it's not mutually exclusive with .org. And when it comes to marketing, .museum is a disaster since it only serves to distract the user—who ever heard of a six-letter TLD?—without making her life any easier. Or to take a more high-profile example, look at the recent lawsuit that forced the World Wrestling Federation to change its name to World Wrestling Entertainment. The World Wildlife Fund had sued the Federation for breaking the terms of a 1994 contract dictating who could use the initials WWF, in what media, and how prominently. Now, strictly speaking this wasn't solely an issue of domain names. The Fund's spokeswoman attributed the suit to an "explosion" of the acronym's use in three media: online, satellite TV, and cable TV. But one of the major grievances in the Fund's suit was the Federation's registration of the domain name in 1997. According to the neighborly rules of the pre-boom internet, this should not have been a problem: The Federation got wwf.com and the Fund got wwf.org. This solution works if you care about those tidy hierarchical divisions. Most people don't. This is one of the reasons that the new TLDs have been so underwhelming: People don't see the world as cleanly divided into discrete categories, with the corporations in this corner and the non-profits in that corner. It's all one namespace to them. Mutually exclusive hierarchies are convenient, but they only work on a small group of items. Once that group gets too big and diverse—a comic book artist here, an airplane-parts manufacturer there—any hierarchy that might reasonably hold that group becomes too cumbersome for people to use. When people want to organize large groups of items they often find it easier to use overlapping sets instead. That's why filesystems have symlinks. 
That's why many of Apple's OS X programs, such as iTunes and Address Book, let you drag-and-drop your MP3s or contacts into as many groups as you want. Why bother fretting over whether you should put Christine from work in your Friends group or your Coworkers group? Put her in both and get on with your life. SIX BILLION NAMING SYSTEMS AND JUST ONE INTERNET A hierarchical, precise DNS is a perfect system for computers. Human beings, however, prefer to rely on systems that make use of their own technical strengths, such as the ability to adapt their language to the preconceptions of your audience, and the ability to adapt their own conception of the world to accommodate new knowledge. Common sense, in other words. If, in the days when World Wrestling Entertainment was still a federation, you used the initials WWF in a conversation, chances are your listener would be able to figure out which one you were referring to. Humans do this by drawing on the context of the conversation to make the correct match. Are you talking about panda bears, or Stone Cold Steve Austin? In real life, people have almost no problem resolving name collisions—a good thing, considering how often they happen. There are two types of Dove bars you can buy in a supermarket: One is chocolate and the other is soap. There are three famous Dres in hip-hop: Dr. Dre of NWA, Dre of Outkast, and Dre of Dre and Ed Lover. Hip-hop fans know how to tell them apart. It happens on a personal level, too. In my freshman year of college, my dorm floor had four Mikes and four Daves. We resolved these name collisions by settling on nicknames for everybody: Big Dave, Sophomore Dave, Asshole Dave, etc. People who lived outside our floor didn't know who was who, but they didn't need that system anyway. We did, and it worked fine for us. What we didn't do, however, was make use of last names, even though they offer a more global, permanent method of differentiation. Last names were less memorable to us than the jokey, college-guy associations we could invent on our own. Clay Shirky wrote that the aims of the DNS are to be memorable, global, and non-political. "Pick two", he said, but in fact most of the time people only care about the first: As long as names are memorable, people don't mind that they're local and highly subjective. Techies are an exception, since they spend much of their time crafting language for machines, and as such are accustomed to treating language as a brittle, precise tool. But most people like their language loose and contextual, thank you very much, and the hierarchies of the DNS demanded a rigor that never seemed worth the trouble to them.: * Subscribe to a blog's feed in your RSS aggregator and you might never have to type that blog's URL again. * Blogging tools decrease the amount of manual work that bloggers have to do to pass links along. The beta version of Google's browser toolbar even has a BlogThis button. * Apple's web browser Safari integrates with its Address Book to automatically bookmark the websites of your contacts. * Almost all email clients and chat clients will automatically turn URLs into clickable links, relieving you of the need to even cut-and-paste. None of these innovations are groundbreaking. But taken together they add up to an environment where users delegate to computers the dirty work of handling URLs. Consider, for perspective, this 1999 article by usability author Joe Clark:. Your message has been successfully submitted and would be delivered to recipients shortly.
https://groups.yahoo.com/neo/groups/decentralization/conversations/topics/6523
CC-MAIN-2015-35
refinedweb
1,791
61.36
UART over Pyscan pins - Louis Henn last edited by Hi everyone. I'd like to know whether I can use existing pins on the pyscan board to communicate with another device via UART. Can't find anything on the matter. I would for example like to use the fingerprint reader's RX/TX pins as my UART pins. Currently using it with a SiPy, if that helps. Any help is appreciated. Thanks, Louis - Louis Henn last edited by @robert-hh took some debugging, but I did manage to make it work like you said. Thanks a lot! @Louis-Henn Look at the documentation: e.g. from machine import UART uart = UART(1, baudrate=9600, pins=('P19', 'P15')) or uart 1. Note, that using uart.init() requires specifying all setting again, so it is more convenient to put all setting in the uart instantiation call. - Louis Henn last edited by Louis Henn @robert-hh thank you. I guess my next question is how would I program UART to work over this pins. Up to know I've used pins 10 and 11 when using pymakr expansion board. How would I code it to be P15/P19. Is it just conncted to GPIO 15/19? Edit: I just saw there is no GPIO 19 @Louis-Henn You cannot share UART TX signals, because they are driven actively both high and low. But you can use the pins on the extension connector of Pyscan for UART. See: Using P15 and P19 for UART looks fine, as long as the fingerprint reader is not connected. Just give it a try.
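Building on robert-hh's answer, a rough sketch of talking to the other device once the UART is set up on the expansion-connector pins (same pin names and baud rate as in the example above; the exact framing of what you send and receive depends on the fingerprint reader or other device):

from machine import UART

uart = UART(1, baudrate=9600, pins=('P19', 'P15'))  # (TX, RX)

uart.write(b'hello\r\n')       # send a command to the other device

if uart.any():                 # bytes waiting in the receive buffer?
    reply = uart.readline()    # or uart.read() to grab everything
    print(reply)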
https://forum.pycom.io/topic/6803/uart-over-pyscan-pins
CC-MAIN-2022-21
refinedweb
266
84.78
int strcmp ( const char * str1, const char * str2 ); <cstring> Compare two strings Compares the C string str1 to the C string str2. This function starts comparing the first character of each string. If they are equal to each other, it continues with the following pairs until the characters differ or until a terminating null-character is reached. /* strcmp example */ #include <stdio.h> #include <string.h> int main () { char szKey[] = "apple"; char szInput[80]; do { printf ("Guess my favourite fruit? "); gets (szInput); } while (strcmp (szKey,szInput) != 0); puts ("Correct answer!"); return 0; } Guess my favourite fruit? orange
Guess my favourite fruit? apple
Correct answer!
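The page above loops while the result != 0 but never states what strcmp returns, so as a brief addition: zero when the strings are equal, a negative value when the first pair of differing characters is lower in str1 than in str2, and a positive value when it is higher. Only the sign is specified, not the magnitude. A minimal sketch:

/* Return value of strcmp, sketched with this page's example strings */
#include <stdio.h>
#include <string.h>

int main (void)
{
  printf ("%d\n", strcmp ("apple", "apple"));   /* 0  : strings are equal   */
  printf ("%d\n", strcmp ("apple", "orange"));  /* <0 : "apple" sorts first */
  printf ("%d\n", strcmp ("orange", "apple"));  /* >0 : "orange" sorts after */
  return 0;
}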
http://www.cplusplus.com/reference/clibrary/cstring/strcmp/
crawl-002
refinedweb
101
70.29
Hi, I was trying to run the following code in UNIX: import java.awt.*; public class FontTest { public static void main(String[] args) { GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment(); Font[] fa = ge.getAllFonts(); for (int i=0; i<fa.length; i++) { System.out.println(fa[i].getFontName()); } } } But I got the following exception: Exception in thread "main" java.lang.InternalError: Can't connect to X11 window server using ':0.0' as the value of the DISPLAY variable. at sun.awt.X11GraphicsEnvironment.initDisplay(Native Method) at <Unloaded Method> at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:124) at java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment(GraphicsEnvironment.java:63) at FontTest.main(Compiled Code) I am new to Java on UNIX. Please tell mw what is wrong with the system. Do I need to install something on this system? Thanks in advance AWT Question (2 messages) Threaded Messages (2) - AWT Question by Mircea Crisan on March 09 2004 02:13 EST - Display variable by leerutg leerutg on March 09 2004 20:03 EST AWT Question[ Go to top ] Hi, - Posted by: Mircea Crisan - Posted on: March 09 2004 02:13 EST - in response to Mano S The error you are getting is because there is no XServer started on the unix machine. AWT tries to get the defaut fonts (colors, etc.) from the underlying windowing system, which in your case is not started. However if you want your test to work you should tell the JVM that is running in a non-window environment by setting the system property java.awt.headless=true. Best regards, Mircea Display variable[ Go to top ] The msg about DISPLAY variable points to something other than the previous user suggested. If that suggestion did not help, try this: - Posted by: leerutg leerutg - Posted on: March 09 2004 20:03 EST - in response to Mircea Crisan If the program is on unix1 box and you did a telnet/ssh from win1 box, then you need to set the DISPLAY variable to your box. A best way to test this is to open a command window xterm etc after you set the display variable. set this setenv DISPLAY=win1.domain.com:0.0 or export DISPLAY=win1.domain.com:0.0 The same holds even if your box (win1) is a windows or unix.
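To make the first suggestion concrete, the headless property can be applied in two ways; this is a sketch based on the original FontTest program (untested on the poster's exact JVM, though the property has been honoured since J2SE 1.4):

// 1) On the command line, without touching the code:
//      java -Djava.awt.headless=true FontTest

// 2) Or set the property in code, before any AWT call is made:
import java.awt.*;

public class FontTest {
    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true");
        GraphicsEnvironment ge =
            GraphicsEnvironment.getLocalGraphicsEnvironment();
        Font[] fa = ge.getAllFonts();
        for (int i = 0; i < fa.length; i++) {
            System.out.println(fa[i].getFontName());
        }
    }
}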
http://www.theserverside.com/discussions/thread.tss?thread_id=24387
CC-MAIN-2015-18
refinedweb
386
58.08
import "github.com/stellar/go/support/render/hal" handler.go io.go link.go link_builder.go page.go paging_token.go StandardPagingOptions is a helper string to make creating paged collection URIs simpler. func Render(w http.ResponseWriter, data interface{}) Render write data to w, after marshalling to json type BasePage struct { FullURL *url.URL `json:"-"` Embedded struct { Records []Pageable `json:"records"` } `json:"_embedded"` } BasePage represents the simplest page: one with no links and only embedded records. Can be used to build custom page-like resources Add appends the provided record onto the page Init initialized the Records slice. This ensures that an empty page renders its records as an empty array, rather than `null` LinkBuilder is a helper for constructing URLs in horizon. func (lb *LinkBuilder) Link(parts ...string) Link Link returns a hal.Link whose href is each of the provided parts joined by '/' func (lb *LinkBuilder) Linkf(format string, args ...interface{}) Link Linkf provides a helper function that returns a link with an href created by passing the arguments into fmt.Sprintf func (lb *LinkBuilder) PagedLink(parts ...string) Link PagedLink creates a link using the `Link` method and appends the common paging options Links represents the Links in a Page type Page struct { Links Links `json:"_links"` BasePage Order string `json:"-"` Limit uint64 `json:"-"` Cursor string `json:"-"` } Page represents the common page configuration (i.e. has self, next, and prev links) and has a helper method `PopulateLinks` to automate their initialization. InvertedOrder returns the inversion of the page's current order. Used to populate the prev link PopulateLinks sets the common links for a page. Pageable implementors can be added to hal.Page collections Package hal imports 8 packages (graph) and is imported by 56 packages. Updated 2020-08-01. Refresh now. Tools for package owners.
https://godoc.org/github.com/stellar/go/support/render/hal
CC-MAIN-2020-45
refinedweb
298
64.81
There images generated from the photobooth and create a task that only humans can do — coming up with funny captions. How Does It Work? The photobooth tutorial culminated in uploading photos to Dropbox. This tutorial picks up where it left off. - When a new file is uploaded to Dropbox a task gets created - Volunteers text into the system and are added as a worker - Workers are set to “Idle,” indicating that they are available to do work - TaskRouter matches a picture that needs captioning with an idle worker. Our app sends them a photo via MMS - The worker is marked as “Busy” until they reply with a caption - Once a caption is received, worker is changed back to “Idle” and waits for their next assignment Getting Started We’re going to build our distributed photo captioning app using Ruby and Sinatra. You don’t need a fully functional Arduino powered photobooth to follow along with this post. You do, however, need to set up a Dropbox app. You can find those instructions in the “Arduinos and Dropbox” section in the photobooth tutorial. Once you have your Dropbox app setup you can mimic the photobooth by manually uploading files to Dropbox. In addition to a Dropbox app, you’ll need: - a free Twilio account - an MMS-enabled Twilio number (only available on US and Canadian numbers) - ngrok, free IP tunneling software A Note on Ngrok Your development machine is most likely hiding behind a router and lacks a publicly accessible IP address. However, both Dropbox and Twilio need to make HTTP requests to this app, so you’ll need to create a tunnel from the public internet to your local server. Our favorite way to do this is ngrok. If you haven’t already, download ngrok and move it to your home directory. Also sign up for a free ngrok account and follow the instructions on how to set up custom domains. This way you won’t have to change your webhook urls on the Twilio and Dropbox dashboards everytime you restart ngrok. If you’d like to learn more about ngrok, check out Kevin Whinnery’s great tutorial on ngrok. Once you’ve got ngrok installed, start it up with a custom subdomain (your name perhaps) and point it at port 9292: ./ngrok -subdomain=example 9292 Leave ngrok open in a terminal window for the rest of this tutorial. Setting Up TaskRouter The best place to start building a TaskRouter application is the TaskRouter dashboard. TaskRouter applications are scoped to a Workspace. Let’s make one: - Click Create Workspace - Give your workspace a friendly name of “Photobooth Captions” - Leave Template set to None - Click Save Once your workspace is created, change the Default Activity from “Offline” to “Idle.” We’ll discuss why in a few minutes but the short answer is that we want our workers ready to receive work as soon as they enter the system. Next we need to create a Task Queue. Click Task Queues at the top of the dashboard, then click Create Task Queue and configure it with the following properties: The key property here is Target Workers which states that workers eligible to complete Tasks in this Task Queue must have a skill of “caption”. For the purposes of this tutorial we’ll only have one kind of worker but Task Queue really starts to shine when you have a multitude of task types requiring a multitude of skillsets. Once you’ve completed this tutorial you’ll be in a great position to create something more complex. Once you’ve configured your Task Queue, click Save. Next we need to create a Workflow which will route Tasks into our Task Queue. 
Click Workflows at the top of the dashboard, then click Create Workflow. Configure it with these properties: - Friendly Name: Photobooth Workflow - Assignment Callback: (replace examplewith your ngrok subdomain) - Leave Fallback Assignment Callback URL and Task Reservation Timeout blank - Leave “Caption Queue” as the default task queue - Click Save By default, the Workflow will place Tasks into the Caption Queue because of the Default Task Queue setting. If we wanted to be more explicit about this to prepare for a more robust system, we could create a Filter in the Routing Configuration section. Let’s configure a filter for our captioning tasks. Click the Add Filter button and set the following properties: - Filter Label: Caption Filter - Expression: required_skill = "caption" - Target Task Queue: Caption Queue - Priority: 1 With this filter in place, a Task with required_skill set to “caption” in its attributes will be routed to the Caption Queue. Your Routing Configuration should look like this: Click Save to complete the Workflow creation. This is all the setup we need to do on our dashboard. Let’s get into the code. Creating the Sinatra App Our application will be built in Ruby using Sinatra. Let’s create a directory for our app and a few of the files we’ll need to get started: mkdir photobooth-taskrouter cd photobooth-taskrouter touch app.rb Gemfile config.ru Then edit the Gemfile: source "" ruby '2.2.0' gem 'sinatra' gem 'thin' gem 'twilio-ruby', '~> 3.15.1' gem 'dropbox-sdk' gem 'envyable' Install bundler if you haven’t already: gem install bundler Then install your gems: bundle install Along with the gems for the Dropbox and Twilio, we’ve included Envyable, a gem to manage environment variables. (For more on this, read Phil Nash’s excellent post on managing environment variables in Ruby). To use envyable we need to create a config directory and a env.yml file: mkdir config touch config/env.yml Open env.yml and add the following YAML: development: TWILIO_ACCOUNT_SID: TWILIO_AUTH_TOKEN: TWILIO_WORKSPACE_SID: TWILIO_WORKFLOW_SID: TWILIO_PHONE_NUMBER: DROPBOX_ACCESS_TOKEN: Copy in the values for your Twilio Account SID, Twilio Auth token — you can find these by clicking “Show credentials” in the top right of the Workspace dashboard. Then copy in the Workspace SID and Worklow SID — you can find these on their respective pages. Then paste in the phone number of one of your MMS enabled Twilio phone numbers. For the Dropbox token, visit the Dropbox App Console and click into the app you created earlier. In the OAuth 2 section, click Generate under “Generated access token” and copy the resulting token into the YAML. With our env.yml in place, our environment variables will now be accessible via ENV['NAME_OF_VARIABLE']. Now let’s start on our Sinatra app. Open ‘app.rb’, paste these lines, and save the file. require 'dropbox_sdk' require 'json' Envyable.load('./config/env.yml', 'development') Finally, edit the config.ru which tells our server what to do when we run rackup. require 'bundler' Bundler.require require './app.rb' run Sinatra::Application If you want to test that this works so far, see if you can start your server without getting any errors: bundle exec rackup Configuring the Dropbox Webhook Our application will utilize Dropbox’s webhook to receive notifications when files are uploaded. This allows us to create Tasks for our app as the photos come in. Before we use the webhook though, we have to verify our app with Dropbox. 
For the verification process, Dropbox will make a GET request to our webhook with a challenge parameter. Our HTTP response must simply include the text of that challenge. Create a new route in app.rb to handle this request: get '/dropbox' do params[:challenge] end Restart the app. Then visit the Dropbox App Console and add http:// to the Webhook URIs field. Once you click Add, Dropbox will verify our domain. We could delete the GET /dropbox route after that, but if we ever change domains (e.g., deploy to production) then we’re going to need to reauthorize again. Might as well leave it there. If you’d like to learn more about this authorization process or about interacting with the Dropbox API in general, check out their well-written API docs. Using the Dropbox API’s /delta Endpoint When a photo is uploaded, Dropbox will make a POST request to our /dropbox webhook (this is in addition to the GET /dropbox we used to verify our app). The information provided in the POST request is pretty limited. It only contains an array of User IDs that have new file changes in the Dropbox app we configured but it doesn’t contain any additional information about the actual file upload itself. Since we the webhook request doesn’t tell us which files were added, we need to request a list of recent Dropbox changes via their delta method. In order to make sure we’re not getting duplicate changes, we need to save a “cursor” returned to us by Dropbox and pass it back in on subsequent delta calls. For the sake of moving fast in this tutorial, we’re going to do this the wrong way and store the cursor in a global variable. Please use a proper datastore in a real app. Below Envyable.load('./config/env.yml', 'development') in app.rb, add this: $cursor = nil Now we’re going to create a post /dropbox route which will: - create a REST client using our Dropbox access token - retrieve a list of changes to our Dropbox folder since our last cursor - save the new cursor Then it will iterate through each file in the list of changes and: - grab its filename - request a publicly accessible url from dropbox using our REST client - create a new task in TaskRouter (we’ll leave a placeholder for this for the moment) And finally, it will return a 200 — otherwise Dropbox will keep trying the request over and over and over again. Here’s the code: post '/dropbox' do dropbox_client = DropboxClient.new(ENV['DROPBOX_ACCESS_TOKEN']) changes = dropbox_client.delta($cursor) $cursor = changes['cursor'] changes['entries'].each do |entry| file_name = entry[0] media_hash = dropbox_client.media(file_name) image_url = media_hash['url'] # create task end 200 end If you’d like to learn more about what we’ve done here, check out Dropbox’s core API docs. Create a Task with TaskRouter We’re going to be doing a lot of work with Twilio, so let’s create a twilio_helpers.rb file to keep our code clean: touch twilio_helpers.rb Now let’s create a helper method in twilio_helpers.rb to instantiate a TaskRouter REST API client: def task_router_client Twilio::REST::TaskRouterClient.new ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN'], ENV['TWILIO_WORKSPACE_SID'] end Then let’s require the twilio helpers in our app.rb: require './twilio_helpers.rb' We’ll use our client helper to create a new task with the image_url as an attribute. 
Replace the # create task comment with: attributes = { image_url: image_url, required_skill: 'caption' }.to_json task_router_client.tasks.create( attributes: attributes, workflow_sid: ENV['TWILIO_WORKFLOW_SID'] ) Let’s test what we’ve build so far. Restart your Sinatra server and upload a file to Dropbox — either via your Photobooth or by simply dragging an image into the folder of the your Dropbox app. Once the file uploads, the webhook will fire and hit the /dropbox route, which will then create a task in TaskRouter. Open the TaskRouter dashboard and go to the Tasks page. You should see a new Task. If you click on the Task, you’ll see the image_url. Create a Worker in TaskRouter Now that we can create tasks, we need workers who can complete those tasks. Workers will join the system by texting our Twilio number. We need to configure the webhook that Twilio will use when it receives a new text. Open the numbers list on your Twilio dashboard, click on the phone number you entered earlier into the env.yml, and configure the number by setting the Messaging Request URL to http://. For the sake of this post, we’re going to concern ourselves with two scenarios when someone texts in: - They’re texting in for the first time. We’ll create a worker using their phone number as a friendly name. - They’re providing a caption. We’ll save it, then set the worker as ready to receive more tasks. Before we create the route to handle the webhook, let’s create two more helper methods in twilio_helpers.rb. First, a method to check if a worker exists for a given phone number: def worker_exists?(phone_number) task_router_client.workers.list(friendly_name: phone_number).size > 0 end Second, a method to simplify the generation of TwiML responses which we’ll use to reply to people when they text into the system: def twiml_response(body) content_type 'text/xml' Twilio::TwiML::Response.new do |r| r.Message body end.to_xml end Now let’s head back to app.rb and create a /message endpoint. For now we’ll focus on the first use case: someone texts in and a worker with that number does yet not exist: In that case we will create a new worker with: - an attribute defining their phone number - the friendly name set to their phone number to make them easier to identify We’ll also reply with a text message telling them to hold tight and wait for their photo. post '/message' do phone_number = params['From'] if worker_exists?(phone_number) # we’ll come back to this soon else attributes = {phone_number: phone_number, skill: 'caption'}.to_json task_router_client.workers.create( attributes: attributes, friendly_name: phone_number, ) twiml_response("Hold tight! We'll be sending you photos to caption as soon as they become available.") end end Let’s test this out. Restart your server, then send a text message to your Twilio number. Once you get a reply, check the workers tab on the TaskRouter dashboard. You should see a new worker that has your phone number as a friendly name. Something else is afoot though. If you look at your server, you’ll see that TaskRouter tried to make an HTTP request at /assignment, but we haven’t defined that route yet. Let’s do that now. Assign Work When we have a task in the system and an idle worker who’s qualified to do the work, TaskRouter starts to perform its magic. When TaskRouter sees a potential match, it makes an HTTP request to the assignment webhook defined on our Workflow dashboard. This HTTP request sends information about the task and asks if you’d like the worker to accept it. 
In that request, we have everything we need to send a worker their task: the image_url and worker’s phone number. Let’s create a route that will: - respond to a POST request at /assignment - extract the phone_numberfrom worker_attributes - extract the image_urlfrom task_attributes - store the image_url for later - call a twilio_helper named send_photowhich we will define in just a second - return JSON instructions to TaskRouter to tell it that the worker accepts the task We also need to store data about our image urls and captions. We’re not going to tell you how to do that in this post. Feel free to use MySQL, DynamoDB or the storage engine of your choice. For the purposes of this post, we’ll just leave a comment where you would save the pieces of data you want to persist. Create your route to handle assignment: post '/assignment' do worker_attributes = JSON.parse(params['WorkerAttributes']) phone_number = worker_attributes['phone_number'] task_attributes = JSON.parse(params['TaskAttributes']) image_url = task_attributes['image_url'] # save then image_url and phone_number pair send_photo(phone_number, image_url) content_type :json {instruction: 'accept'}.to_json end The first four lines extract the image_url and phone_number from the parameters sent to us by TaskRouter. Then we send a photo using a Twilio helper we’ll define in a second. The last two lines return JSON telling TaskRouter that our worker accepts the task. Now let’s create our send_photo method in twilio_helper.rb: def send_photo(phone_number, image_url) twilio_client = Twilio::REST::Client.new ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN'] twilio_client.messages.create( from: ENV['TWILIO_PHONE_NUMBER'], to: phone_number, body: "What's the funniest caption you can come up with?", media_url: image_url ) end We’ve got everything in place to assign a task to a worker and to send them an image to caption. Let’s try it out. We need your phone number to be a “new” worker for this to work, so go back into your dashboard, click on the worker you created previously, toggle their Activity to “Offline” and then delete it. Then restart your server to load the changes we just made. After that, send a text to your Twilio number again, and our app will respond with the introductory text like last time. Now TaskRouter makes a POST request to your newly created /assignment route. You can watch this happen by visiting localhost:4040 in a browser. That route will fire off the MMS with the Dropbox picture to your phone. Responding to the Worker’s Message We’ve created a worker in the ‘Idle’ state and they’ve just received their first captioning task. What happens when they text back? After we’ve saved the worker’s caption, we’ll transition them back to the ‘Idle’ Activity so that they will receive more photos to caption. Let’s create a Twilio helper to retrieve a worker based on their phone number. 
In twilio_helpers.rb: def get_worker(phone_number) task_router_client.workers.list(friendly_name: phone_number).first end Let's create another helper to retrieve the SID for the 'Idle' activity: def get_activity_sid(friendly_name) task_router_client.activities.list(friendly_name: friendly_name).first.sid end And then we'll use those two methods to change the worker's activity back to "Idle": def update_worker_activity(phone_number, activity_friendly_name) worker = get_worker(phone_number) activity_sid = get_activity_sid(activity_friendly_name) worker.update(activity: activity_sid) end With these helpers in place we can respond to the existing worker's incoming message. In the /message endpoint of app.rb let's add the following code to the if worker_exists? block that we said we'd come back to: if worker_exists?(phone_number) caption = params['Body'] # save the caption in the same place you stored the image_url and phone_number pair update_worker_activity(phone_number, 'Idle') twiml_response("Thanks for the caption! We'll send you more photos as they become available.") else # ... That's all the code for this app. Restart your server to reload the changes. Then send a hilarious text to your Twilio number. You'll get a thank you back and your activity in TaskRouter will be switched back to Idle. If there are more tasks waiting in the task queue, TaskRouter will make another POST request to the /assignment route and your phone will light up with another picture. You'll respond with a funny caption, and so it goes. Next Steps Let's recap. In this post we: - Created a new workspace, workflow and task queue in TaskRouter - Created tasks in response to a Dropbox upload - Allowed volunteers to sign up as workers via text message - Assigned photos to be captioned by workers - Updated a worker's status once the task was completed TaskRouter has given us a solid foundation for our application that is easily extendable to an even more diverse set of tasks across workers with varying skills. Consider extending what we built in this post with the following suggestions (a rough sketch of the first one appears at the end of this post): - Create specialized captioners (for instance, some people might be better at captioning wedding photobooth pictures while others are better at office party photos). - Create a second Task Queue for people who can rate captions (the Internet is great at quantity but we might want some quality control). - Build a website to show off these hilarious captions. I'm really excited to see what you build with TaskRouter. If you have any questions while you're building your application, please reach out to me via email at [email protected] or hit me up on Twitter @brentschooley. - SMS and MMS Notifications with Ruby and Sinatra - Automated Survey with Ruby and Sinatra - Building Better Phone Trees With Twilio Using Multivariate Testing - Part 2: Using Twilio SMS with Sinatra for Ruby and Datamapper To Build A Phone Verification System - Test Before Shipping with the Twilio Test Toolkit
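To make the first of those suggestions concrete, here is a rough sketch (not part of the original tutorial) of what a specialized captioner could look like. The event_type attribute name and the 'wedding' value are invented for illustration; the task_router_client helper and the workers.create call are the same ones used earlier in this post.

# twilio_helpers.rb -- hypothetical helper for the "specialized captioners" idea
def create_specialized_worker(phone_number, event_type)
  # event_type (e.g. 'wedding' or 'office_party') is a made-up attribute name;
  # TaskRouter lets you store any JSON attributes you like on a worker.
  attributes = {
    phone_number: phone_number,
    skill: 'caption',
    event_type: event_type
  }.to_json
  task_router_client.workers.create(
    attributes: attributes,
    friendly_name: phone_number
  )
end

On the routing side, a second Workflow filter with an expression such as required_skill = "caption" AND event_type = "wedding" would then send Tasks created with a matching event_type attribute only to those workers.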
https://www.twilio.com/blog/taskrouter-and-the-internet-of-things-html
CC-MAIN-2019-51
refinedweb
3,310
62.68
Setup Guide This tutorial is on how to set up the development environment for programming with the SuanShu library (the Java version). In order to use the SuanShu library, you need to know some Java programming. Sun (or Oracle) provides excellent online tutorials on learning Java. You can find a lot of easy-to-read-and-follow lessons in Sun’s Java Tutorials. IDE You will need an Integrated Development Environment (IDE) to program SuanShu. Although you may choose any IDE you like, we recommend Netbeans. You can download it from here. If you are an avid programmer, you may want to download the ‘All’ version. For most of us who are mathematicians with little programming experience, the ‘Java SE’ version suffices. To learn about how to use NetBeans, you will find a quick start tutorial here. The most important skill for you to learn is debugging. This is especially true for those who are converting from Matlab and R and are used to printing out values. NetBeans debugger allows you to easily inspect values, place conditional breakpoints, pause and resume your program and much more. A quick Google search gives these articles. We cannot emphasize enough that it is imperative that you must master using a debugger to make your programming experience productive and enjoyable. Hotkey is another tool that will increase your programming productivity. My favorite hotkey is Ctrl+Shift+F (meaning that you press all three keys Ctrl, Shift and F all at the same time). It makes my code look pretty and neat. A list of hotkeys is found here. You can customize them to your own taste by going to Tools -> Options -> Keymap. suanshu.jar To program the SuanShu library, you will need a licensed copy of suanshu.jar (the actual file name depends on the version that you have). It is a compressed file that contains the SuanShu Java classes for numerical computing. This section walks you through how to set up a SuanShu project. You may skip this section if you want to start programming SuanShu right away. This NetBeans project has everything set up. Please read our Basic Trails to get started. If in the future you would like to update the SuanShu library for a newer version, you will need to come back to this section. Create a SuanShu project To create a SuanShu project in NetBeans, open the NetBeans IDE, click File -> New Project… (Ctrl+Shift+N). In ‘categories’, choose ‘Java’; in projects, choose ‘Java Application’. Click ‘Next’. In ‘Project Name’, type whatever you like, e.g., HelloSuanShu. In ‘Project Location’, type where you would like to save your project. Click ‘Finish’. We need to tell this Java project that we will be calling the SuanShu library. To do so, right click on the project name in the Projects window. Click ‘Properties’. Click ‘Libraries’. Make sure the ‘Compile’ tab is displayed. Click ‘Add JAR/Folder’ and browse to where you save suanshu.jar. Choose suanshu.jar and then hit ‘OK’. To install the SuanShu javadoc in NetBeans, we need to associate the suanshu.jar with its javadoc. Click ‘Edit’. Browse where you save suanshu.javadoc.zip and hit ‘OK’. Now, you have created an empty SuanShu project. Copy and paste these lines under “// TODO code application logic here“. 
System.out.println("Hello SuanShu"); Matrix A1 = new DenseMatrix(new double[][]{//create a matrix {1, 2, 1}, {4, 5, 2}, {7, 8, 1} }); System.out.println(A1); Matrix B = new Inverse(A1);//compute the inverse of A1 Matrix I = A1.multiply(B);//this should be the identity matrix System.out.println(String.format("%s * %s = %s (the identity matrix)", A1, B, I)); Hit Ctrl+Shift+I to fix the necessary imports. Make sure you select these two. import com.numericalmethod.suanshu.matrix.doubles.operation.Inverse; import com.numericalmethod.suanshu.matrix.doubles.Matrix; Hit Alt+Shift+F to beautify your code. The complete source for this example can be found here. Right click on the project name, e.g., HelloSuanShu, and click 'Run' (or simply press F6 if you make this project your main project). Voila! You should see this in your output window (Ctrl+4). Hello SuanShu 3×3 [,1] [,2] [,3] [1,] 1.000000, 2.000000, 1.000000, [2,] 4.000000, 5.000000, 2.000000, [3,] 7.000000, 8.000000, 1.000000, 3×3 [,1] [,2] [,3] [1,] 1.000000, 2.000000, 1.000000, [2,] 4.000000, 5.000000, 2.000000, [3,] 7.000000, 8.000000, 1.000000, * 3×3 [,1] [,2] [,3] [1,] -1.833333, 1.000000, -0.166667, [2,] 1.666667, -1.000000, 0.333333, [3,] -0.500000, 1.000000, -0.500000, = 3×3 [,1] [,2] [,3] [1,] 1.000000, 0.000000, 0.000000, [2,] 0.000000, 1.000000, 0.000000, [3,] 0.000000, 0.000000, 1.000000, (the identity matrix) Congratulations! Javadoc SuanShu's javadoc is the complete reference to the SuanShu library. You can read it in a browser, e.g., Internet Explorer. You can also bring it up dynamically during programming. Each time you press Ctrl+SPACE after the '.' of any object you create, you will see a list of available methods for this object, and the javadoc for the method selected. To get more information about programming SuanShu, please consult our tutorials.
http://numericalmethod.com/up/suanshu/tutorial/setupguide/
CC-MAIN-2017-26
refinedweb
878
69.07
In this Java XPath tutorial, we will learn what XPath is, what XPath data types are, and how to create XPath expressions to retrieve information from an XML file or document. This information can be XML nodes or XML attributes or even comments as well. Table of Contents 1. What is XPath? 2. XPath Data Model 3. XPath Data Types 4. XPath Syntax 5. XPath Expressions 6. Recommended reading We will use this XML in running various XPath examples in this tutorial. <?xml version="1.0" encoding="utf-8" ?> <inventory> <!--Test is test comment--> <book year="2000"> <title>Snow Crash</title> <author>Neal Stephenson</author> <publisher>Spectra</publisher> <isbn>0553380958</isbn> <price>14.95</price> </book> <book year="2005"> <title>Burning Tower</title> <author>Larry Niven</author> <author>Jerry Pournelle</author> <publisher>Pocket</publisher> <isbn>0743416910</isbn> <price>5.99</price> </book> <book year="1995"> <title>Zodiac</title> <author>Neal Stephenson</author> <publisher>Spectra</publisher> <isbn>0553573862</isbn> <price>7.50</price> </book> </inventory> 1. What is XPath XPath is a syntax used to describe parts of an XML document. With XPath, you can refer to the first element, any attribute of the elements, all elements that contain some specific text, and many other variations. An XSLT style-sheet uses XPath expressions in the match and select attributes of various elements to indicate how a document should be transformed. XPath can sometimes be useful while testing web services that use XML for sending requests and receiving responses. XPath uses a language syntax much like what we already know. The syntax is a mix of basic programming language expressions (wild cards such as $x*6) and Unix-like path expressions (such as /inventory/author). In addition to the basic syntax, XPath provides a set of useful functions (such as count() or contains(), much like utility function calls) that allow you to search for various data fragments inside the document. 2. XPath Data Model XPath views an XML document as a tree of nodes. This tree is very similar to a Document Object Model i.e. DOM tree, so if you're familiar with the DOM, you will easily get some understanding of how to build basic XPath expressions. There are seven kinds of nodes in the XPath data model: - The root node (Only one per document) - Element nodes - Attribute nodes - Text nodes - Comment nodes - Processing instruction nodes - Namespace nodes 2.1. Root Node The root node is the XPath node that contains the entire document. In our example, the root node contains the <inventory> element. In an XPath expression, the root node is specified with a single slash ( '/'). 2.2. Element Nodes Every element in the original XML document is represented by an XPath element node. For example, in our sample XML the element nodes are book, title, author, publisher, isbn and price. 2.3. Attribute Nodes At a minimum, an element node is the parent of one attribute node for each attribute in the XML source document. These nodes are used to define the features of a particular element node. For example, in our XML fragment "year" is an attribute node. 2.4. Text Nodes Text nodes are refreshingly simple. They contain text from an element. If the original text in the XML document contained entity or character references, they are resolved before the XPath text node is created. The text node is text, pure and simple. A text node is required to contain as much text as possible. Remember that the next or previous node of a text node can't be another text node. For example, all values in our XML fragment are text nodes e.g. "Snow Crash" and "Neal Stephenson". 2.5. Comment Nodes A comment node is also very simple—it contains some text. Every comment in the source document becomes a comment node. The text of the comment node contains everything inside the comment, except the opening <!-- and the closing -->. For example: <!--Test is test comment--> 2.6.
Processing Instruction Nodes A processing instruction node has two parts, a name (returned by the name() function) and a string value. The string value is everything after the name <?xml, including white space, but not including the ?> that closes the processing instruction. For example: <?xml version="1.0" encoding="utf-8"?> 2.7. Namespace Nodes Namespace nodes are almost never used in XSLT style sheets; they exist primarily for the XSLT processor's benefit. Remember that the declaration of a namespace (such as xmlns:auth=""), even though it is technically an attribute in the XML source, becomes a namespace node, not an attribute node. 3. XPath Data Types In Java, an XPath expression may return one of the following data types: - node-set – Represents a set of nodes. The set can be empty, or it can contain any number of nodes. - node (Java supports it) – Represents a single node. This can be empty, or it can contain any number of child nodes. - boolean – Represents the value true or false. Be aware that the true or false strings have no special meaning or value in XPath; see Section 4.2.1.2 in Chapter 4 for a more detailed discussion of boolean values. - number – Represents a floating-point number. All numbers in XPath and XSLT are implemented as floating-point numbers; the integer (or int) datatype does not exist in XPath and XSLT. Specifically, all numbers are implemented as IEEE 754 floating-point numbers, the same standard used by the Java float and double primitive types. In addition to ordinary numbers, there are five special values for numbers: positive and negative infinity, positive and negative zero, and NaN, the special symbol for anything that is not a number. - string – Represents zero or more characters, as defined in the XML specification. These datatypes are usually simple, and with the exception of node-sets, converting between types is usually straightforward. We won't discuss these datatypes in any more detail here; instead, we'll discuss datatypes and conversions as we need them to do specific tasks. 4. XPath Syntax XPath uses a syntax that resembles UNIX paths and regular expressions. 4.1. Select nodes with xpath 4.2. Use predicates with xpath Predicates are used to find a specific node or a node that contains a specific value. Predicates are always embedded in square brackets. We will learn how to use them in the next section. 4.3. Reaching unknown nodes with xpath XPath wildcards can be used to select unknown XML elements. 4.4. XPath Axes An axis defines a node-set relative to the current node. Following are axes defined by default. 4.5. XPath Operators Below is a list of xpath operators that can be used in XPath expressions: 5. XPath Expressions Let's try to retrieve different parts of XML using XPath expressions and given data types. package xml; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.xpath.XPath; import javax.xml.xpath.XPathConstants; import javax.xml.xpath.XPathExpression; import javax.xml.xpath.XPathFactory; import org.w3c.dom.Document; import org.w3c.dom.NodeList; public class XPathTest { public static void main(String[] args) throws Exception { //Build DOM DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); factory.setNamespaceAware(true); // never forget this!
DocumentBuilder builder = factory.newDocumentBuilder(); Document doc = builder.parse("inventory.xml"); //Create XPath XPathFactory xpathfactory = XPathFactory.newInstance(); XPath xpath = xpathfactory.newXPath(); System.out.println("n//1) Get book titles written after 2001"); // 1) Get book titles written after 2001 XPathExpression expr = xpath.compile("//book[@year>2001]/title/text()"); Object result = expr.evaluate(doc, XPathConstants.NODESET); NodeList nodes = (NodeList) result; for (int i = 0; i < nodes.getLength(); i++) { System.out.println(nodes.item(i).getNodeValue()); } System.out.println("n//2) Get book titles written before 2001"); // 2) Get book titles written before 2001 expr = xpath.compile("//book[@year<2001]/title/text()"); result = expr.evaluate(doc, XPathConstants.NODESET); nodes = (NodeList) result; for (int i = 0; i < nodes.getLength(); i++) { System.out.println(nodes.item(i).getNodeValue()); } System.out.println("n//3) Get book titles cheaper than 8 dollars"); // 3) Get book titles cheaper than 8 dollars expr = xpath.compile("//book[price<8]/title/text()"); result = expr.evaluate(doc, XPathConstants.NODESET); nodes = (NodeList) result; for (int i = 0; i < nodes.getLength(); i++) { System.out.println(nodes.item(i).getNodeValue()); } System.out.println("n//4) Get book titles costlier than 8 dollars"); // 4) Get book titles costlier than 8 dollars expr = xpath.compile("//book[price>8]/title/text()"); result = expr.evaluate(doc, XPathConstants.NODESET); nodes = (NodeList) result; for (int i = 0; i < nodes.getLength(); i++) { System.out.println(nodes.item(i).getNodeValue()); } System.out.println("n//5) Get book titles added in first node"); // 5) Get book titles added in first node expr = xpath.compile("//book[1]/title/text()"); result = expr.evaluate(doc, XPathConstants.NODESET); nodes = (NodeList) result; for (int i = 0; i < nodes.getLength(); i++) { System.out.println(nodes.item(i).getNodeValue()); } System.out.println("n//6) Get book title added in last node"); // 6) Get book title added in last node expr = xpath.compile("//book[last()]/title/text()"); result = expr.evaluate(doc, XPathConstants.NODESET); nodes = (NodeList) result; for (int i = 0; i < nodes.getLength(); i++) { System.out.println(nodes.item(i).getNodeValue()); } System.out.println("n//7) Get all writers"); // 7) Get all writers expr = xpath.compile("//book/author/text()"); result = expr.evaluate(doc, XPathConstants.NODESET); nodes = (NodeList) result; for (int i = 0; i < nodes.getLength(); i++) { System.out.println(nodes.item(i).getNodeValue()); } System.out.println("n//8) Count all books titles "); // 8) Count all books titles expr = xpath.compile("count(//book/title)"); result = expr.evaluate(doc, XPathConstants.NUMBER); Double count = (Double) result; System.out.println(count.intValue()); System.out.println("n//9) Get book titles with writer name start with Neal"); // 9) Get book titles with writer name start with Neal expr = xpath.compile("//book[starts-with(author,'Neal')]"); result = expr.evaluate(doc, XPathConstants.NODESET); nodes = (NodeList) result; for (int i = 0; i < nodes.getLength(); i++) { System.out.println(nodes.item(i) .getChildNodes() .item(1) //node <title> is on first index .getTextContent()); } System.out.println("n//10) Get book titles with writer name containing Niven"); // 10) Get book titles with writer name containing Niven expr = xpath.compile("//book[contains(author,'Niven')]"); result = expr.evaluate(doc, XPathConstants.NODESET); nodes = (NodeList) result; for (int i = 0; i < 
nodes.getLength(); i++) { System.out.println(nodes.item(i) .getChildNodes() .item(1) //node <title> is on first index .getTextContent()); } System.out.println("//11) Get book titles written by Neal Stephenson"); // 11) Get book titles written by Neal Stephenson expr = xpath.compile("//book[author='Neal Stephenson']/title/text()"); result = expr.evaluate(doc, XPathConstants.NODESET); nodes = (NodeList) result; for (int i = 0; i < nodes.getLength(); i++) { System.out.println(nodes.item(i).getNodeValue()); } System.out.println("n//12) Get count of book titles written by Neal Stephenson"); // 12) Get count of book titles written by Neal Stephenson expr = xpath.compile("count(//book[author='Neal Stephenson'])"); result = expr.evaluate(doc, XPathConstants.NUMBER); count = (Double) result; System.out.println(count.intValue()); System.out.println("n//13) Reading comment node "); // 13) Reading comment node expr = xpath.compile("//inventory/comment()"); result = expr.evaluate(doc, XPathConstants.STRING); String comment = (String) result; System.out.println(comment); } } Program output: //1) Get book titles written after 2001 Burning Tower //2) Get book titles written before 2001 Snow Crash Zodiac //3) Get book titles cheaper than 8 dollars Burning Tower Zodiac //4) Get book titles costlier than 8 dollars Snow Crash //5) Get book titles added in the first node Snow Crash //6) Get book title added in last node Zodiac //7) Get all writers Neal Stephenson Larry Niven Jerry Pournelle Neal Stephenson //8) Count all books titles 3 //9) Get book titles with writer name start with Neal Snow Crash Zodiac //10) Get book titles with writer name containing Niven Burning Tower //11) Get book titles written by Neal Stephenson Snow Crash Zodiac //12) Get count of book titles written by Neal Stephenson 2 //13) Reading comment node Test is test comment I hope that this If you have some suggestions then please leave a comment. Happy Learning !! Feedback, Discussion and Comments Gabriel E. Hi, I’m really new in XPath, I would like to know if its possible to write one javax.xml.xpath.XPathExpression for obtain the value of tag? I have try these expressions, with no results: String expression= “/autorizacion/comprobante/*/*/factura/infoFactura/identificacionComprador”; String expression= “/*//identificacionComprador/text()”; The java function for compiling is this: Thanks in advance for any help or guidance. Lokesh Gupta We can not read data inside CDATA using normal xpath expression. The data is plain text – not DOM node so we must be using string functions. This code works: Read More : SO Thread tutorialsplane Helpful Blog For Me Thanks For Sharing This!!!!! trial hi. i want to traverse the XML file given below and get the ip address and application name stored in array. Help me how to traverse. App1 App2 App3 App4 App12 App22 App32 App42 Desired output is: IP address:192.168.10.10 Appliaction running on 192.168.10.10 device: App1 App2 App3 App4 Similarly for next device Naveed Thanks for the clear explanation. . Its good. Can you let me know how to return the particular node as xml response back. From your example xml file, if i pass the year value as “2005” then it should send me the particular node as XML response. John Michael Tolentino Gwapo Thanks for this tutorial.. I really really like it… It helps me a lot with my project.. Tawfeeq Tawfeeq Good Day Lokesh, your assistance would be highly appreciated. 
this is my xml document below : 634.0 12.0 2670.0 3/31/2016 12:00:00 AM 8/31/2018 12:00:00 AM 1350.0 00 i have tried so many ways to try and extract the inner text of the xml but no success, could you please provide a solution to read the above xml. thank you very much Lokesh Gupta You will get all information here : Shivanand khandagale hi lokesh i want to call web service with xpath included in url…could u help me about this that how can i pass only some part of xml in url.. e.g my url is ::'localhost.localdomain'%5D/deviceconfig/system&element=myneme20.20.20.20 minu Hi Lokesh, Is it possible to sort the values using xpath? For example I want to retrieve title of all books in sorted order. Thanks, Minu anjali Hi my doubt is whether we can write a program to find the path of any given XML file.for example you take any XML file and find out all possible paths of the given file. Lokesh There is no readymade API for this. You have to iterate through XML tree and find all yourself. Anj Thanks for wonderful article.Can you tell me the XPATH expression to select all nodes where’ year ‘ attribute is specified .I tried using //book[@year] but it doesnt work Raja //book/@year Jedi It looks like my example got garbled and lost its tags. The inline tags are meant to be [b] tags Lokesh Post you code inside [xml] … [/xml] tags. Jedi Thanks, I will try that. Crossing my fingers that the code shows. The the gist, how do you also get inline tags to show in the results? Suppose one of the examples had inlines, like this —edited— And you want the results to look like this —edited— at any rate, I think the intent of my question is clear? Thanks in advance! Lokesh So far, I am able to do this much. I know it does not exactly solve your problem, but be sure that I will put more effort onto this. Jedi Very nice! But what about elements with nested elements? What if one of your examples had an inline element like this Burning MY Tower Pocket Larry Niven Jerry Pournelle 0743416910 5.99 And you wanted output to include the inline like this //1) Get book titles written after 2001 Burning MY Tower Can this be done without some complicated serialization? I’ve tired JAXB and gotten lost in complexity. Is there an easier way? tran can you explain why we must set namespaceaware(true)? Riya Am not able to use @Xmlpath in Jaxb. Please give any suggestions Jay Can you please provide me the package xml Lokesh It’s only class in the package.. 🙂 Jay then what was the need of mentioning package xml; Lokesh 🙂 It’s just package name I created for writing the sample code. Usually I copy whole class file code and paste as it is in tutorial as well. Jay I am able to run your code using ur xml sample..but when I am trying to run with mine xml response the above code is not working, I decoded and found the problem lies with namespace..Can you please help, how to handle namespace in your code Jay Any update on the namespace issue I am facing. Lokesh You should use XPath local-name() like this: xpath.compile("//*[local-name()='title']/text()"); Jay A bit confusing, can you please help me implementing this on my xpath, find below my xpath : //ns1:inventory/book[1]/title/text() Jay Hey just solved it, thanks a lot for your help, ultimately I did it with namespace. I would like to get in touch with you, if I require any help in future, can u please drop me your mail id here, or else you can drop me a mail at basujaydeep[at]yahoo[dot]com Lokesh Post your question here if you need my help anytime. It will help other’s as well. 
Thorsten Reimers Thank you very much, very good. I noticed that it is a small step only to manipulation of XML using XPath expressions. I tried setTextContext after navigating to a node and it worked as expected! Thorsten Reimers Hi, if you like, here is an excerpt from my code DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); factory.setNamespaceAware(true); DocumentBuilder builder = factory.newDocumentBuilder(); Document doc = builder.parse(new InputSource(new StringReader(xmlInput))); XPathFactory xpathfactory = XPathFactory.newInstance(); XPath xpath = xpathfactory.newXPath(); XPathExpression xpathExpression = xpath.compile(xpathDefinition); Node node = (Node) xpathExpression.evaluate(doc, XPathConstants.NODE); node.setTextContent(value); DOMSource domSource = new DOMSource(doc); StringWriter writer = new StringWriter(); StreamResult res = new StreamResult(writer); TransformerFactory tf = TransformerFactory.newInstance(); Transformer transformer = tf.newTransformer(); transformer.transform(domSource, res); String xmlOutput = writer.toString(); I had the need to transform the document to an XML string … Cheers Thorsten
https://howtodoinjava.com/xml/java-xpath-tutorial-example/
CC-MAIN-2019-43
refinedweb
2,977
58.38
Utilities to read/write Python types to/from HDF5 files, including MATLAB v7.3 MAT files. Project description Overview This Python package provides utilities to read and write a variety of Python types to and from HDF5 files, including MATLAB v7.3 format MAT files. All of this is done without pickling data. Pickling is bad for security because it allows arbitrary code to be executed in the interpreter. One wants to be able to read HDF5 and MAT files from possibly untrusted sources, so pickling is avoided in this package. The package's documentation and source code are available online. The package is licensed under a 2-clause BSD license. Installation Dependencies This package only supports Python >= 2.6. This package requires the numpy and h5py (>= 2.1) packages to run. Note that full functionality requires h5py >= 2.3. An optional dependency is the scipy package. Installing by pip This package is on PyPI. To install hdf5storage using pip, run the command: pip install hdf5storage Installing from Source To install hdf5storage from source, download the package and then install the dependencies: pip install -r requirements.txt Then to install the package, run the command with Python: python setup.py install Running Tests For testing, the package nose (>= 1.0) is required as well as unittest2 on Python 2.6. There are some tests that require Matlab and scipy to be installed and be in the executable path. Not having them means that those tests cannot be run (they will be skipped) but all the other tests will run. To install all testing dependencies, other than scipy, run pip install -r requirements_tests.txt. To run the tests: python setup.py nosetests Building Documentation The documentation additionally requires sphinx (>= 1.7). The documentation dependencies can be installed by pip install -r requirements_doc.txt To build the documentation: python setup.py build_sphinx Python 2 This package was designed and written for Python 3, with Python 2.7 and 2.6 support added later. This does mean that a few things are a little clunky in Python 2. Examples include requiring unicode keys for dictionaries, the int and long types both being mapped to the Python 3 int type, etc. The storage format's metadata looks more familiar from a Python 3 standpoint as well. The documentation is written in terms of Python 3 syntax and types primarily. Important Python 2 information beyond direct translations of syntax and types will be pointed out. Hierarchical Data Format 5 (HDF5) HDF5 files are a commonly used file format for exchange of numerical data. HDF5 has built-in support for a large variety of number formats (un/signed integers, floating point numbers, strings, etc.) as scalars and arrays, enums and compound types. It also handles differences in data representation on different hardware platforms (endianness, different floating point formats, etc.). As can be imagined from the name, data is represented in an HDF5 file in a hierarchical form modelling a Unix filesystem (Datasets are equivalent to files, Groups are equivalent to directories, and links are supported). This package interfaces HDF5 files using the h5py package as opposed to the PyTables package. MATLAB MAT v7.3 file support MATLAB MAT files version 7.3 and later are HDF5 files with a different file extension (.mat) and a very specific set of meta-data and storage conventions. This package provides read and write support for a limited set of Python and MATLAB types. SciPy has functions to read and write the older MAT file formats.
This package has functions modeled after the scipy.io.savemat and scipy.io.loadmat functions, that have the same names and similar arguments. The dispatch to the SciPy versions if the MAT file format is not an HDF5 based one. Supported Types The supported Python and MATLAB types are given in the tables below. The tables assume that one has imported collections and numpy as: import collections as cl import numpy as np The table gives which Python types can be read and written, the first version of this package to support it, the numpy type it gets converted to for storage (if type information is not written, that will be what it is read back as) the MATLAB class it becomes if targetting a MAT file, and the first version of this package to support writing it so MATlAB can read it. This table gives the MATLAB classes that can be read from a MAT file, the first version of this package that can read them, and the Python type they are read as. Versions - 0.1.16. Bugfix release that fixed the following bugs. - Issue #81 and #82. h5py.File will require the mode to be passed explicitly in the future. All calls without passing it were fixed to pass it. - Issue #102. Added support for h5py 3.0 and 3.1. - Issue #73. Fixed bug where a missing variable in loadmat would cause the function to think that the file is a pre v7.3 format MAT file fall back to scipy.io.loadmat which won’t work since the file is a v7.3 format MAT file. - Fixed formatting issues in the docstrings and the documentation that prevented the documentation from building. - 0.1.15. Bugfix release that fixed the following bugs. - Issue #68. Fixed bug where str and numpy.unicode_ strings (but not ndarrays of them) were saved in uint32 format regardless of the value of Options.convert_numpy_bytes_to_utf16. - Issue #70. Updated setup.py and requirements.txt to specify the maximum versions of numpy and h5py that can be used for specific python versions (avoid version with dropped support). - Issue #71. Fixed bug where the 'python_fields' attribute wouldn’t always be written when doing python metadata for data written in a struct-like fashion. The bug caused the field order to not be preserved when writing and reading. - Fixed an assertion in the tests to handle field re-ordering when no metadata is used for structured dtypes that only worked on older versions of numpy. - Issue #72. Fixed bug where python collections filled with ndarrays that all have the same shape were converted to multi-dimensional object ndarrays instead of a 1D object ndarray of the elements. -. - 0.1.13. Bugfix release fixing the following bug. - Issue #36. Fixed bugs in writing int and long to HDF5 and their tests on 32 bit systems. -. - 0.1.11. Bugfix release fixing the following. - Issue #30. Fixed loadmat not opening files in read mode. -. 0.1.6. Bugfix release fixing a bug with determining the maximum size of a Python 2.x int on a 32-bit system. -. - 0.1.3. Bugfix release fixing the following bug. - Fixed broken ability to correctly read and write empty structured np.ndarray (has fields). -. - 0.1.1. Bugfix release fixing the following bugs. - str is now written like numpy.str_ instead of numpy.bytes_. - Complex numbers where the real or imaginary part are nan but the other part are not are now read correctly as opposed to setting both parts to nan. - Fixed bugs in string conversions on Python 2 resulting from str.decode() and unicode.encode() not taking the same keyword arguments as in Python 3. 
- MATLAB structure arrays can now be read without producing an error on Python 2. - numpy.str_ now written as numpy.uint16 on Python 2 if the convert_numpy_str_to_utf16 option is set and the conversion can be done without using UTF-16 doublets, instead of always writing them as numpy.uint32. 0.1. Initial version.
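Since the description above stops short of a usage example, here is a minimal sketch of the savemat/loadmat-style interface mentioned earlier. It is illustrative only: the file name and variables are made up, and the exact keyword arguments supported may differ between versions, so treat it as a starting point rather than a reference.

import numpy as np
import hdf5storage

# Write two variables to a MATLAB v7.3 (HDF5-based) MAT file.
# The call shape mirrors scipy.io.savemat: (file name, dict of variables).
data = {'x': np.arange(10), 'label': 'example'}
hdf5storage.savemat('example.mat', data)

# Read them back; loadmat returns a dict keyed by variable name.
loaded = hdf5storage.loadmat('example.mat')
print(loaded['x'])
print(loaded['label'])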
https://pypi.org/project/hdf5storage/
CC-MAIN-2021-10
refinedweb
1,266
66.74
Project Help and Ideas » First Project - Make command does not work I'm doing the first project and in step 10c I keep getting the following when I type in "make": make -C ../libnerdkits make: *** ../libnerdkits: No such file or directory. Stop. make: *** [initialload.hex] Error 2 How do I fix it? Also of the topic where can I get the set up for "make music with microcontroller" project? for the first problem, you need to copy the whole "Code" folder from the nerdkits CD to your computer. for the second, here's a post about the music project for the ATmega168. View the tutorial first. The pins for the ATmega168 version are these: PB5 -> buzzer -> ground ground -> switch -> PC5 -Paul Correction: you should copy the whole CD on to your hard drive It didn't work. I downloaded the folder "Code" from nerdkits site. I put the folder on disk C. Then I go to command prompt: c: code initialload> and type in "make" but it doesn't work. It seems that it can not find "libnerdkits" , I have that folder (libnerdkits) in the "code" folder. Maybe it should be some where else? What folder are you in when you type make? Can you do a DIR command and see all your project files as well as the makefile? The directory structure in the code folder should not need to be changed to compile the projects in there. In your programs, you will have includes that look like this: #include "../libnerdkits/delay.h" #include "../libnerdkits/lcd.h" When make sees these, it will look one folder up (../) for the libnerdkits folder, then specifically the files delay.h and lcd.h in that folder. If you were to type THIS at the DOS prompt from the root directory of C: c:\code\initialload\make Then make would try to look one folder above C:\ which would give an error. You have to first change to the initialload folder by typing: cd c:\code\initialload This is with the assumption that your code folder is in the root folder of C:. Then when you type make, it should work as long as the delay.h, delay.c, lcd.h, and lcd.c files are in the libnerdkits folder one folder above your initialload folder like this: C:\ --+ | + - code --+ | + - libnerdkits | + | | | + - delay.h, delay.c, lcd.h, lcd.c | + - initialload | + | | | + - program files and makefile Hope that helps, Rick Thanks Rick and Paul, It just worked. My folders were in wrong places. No problem, good to hear you are up and running...
http://www.nerdkits.com/forum/thread/465/
CC-MAIN-2019-18
refinedweb
451
83.36
PHP Cookbook/Files Introduction PHP's file I/O interface is modeled on C's, although less complicated. The fundamental unit of identifying a file to read from or write to is a file handle. This handle identifies your connection to a specific file, and you use it for operations on the file. This chapter focuses on opening and closing files and manipulating file handles in PHP, as well as what you can do with the file contents once you've opened a file. Chapter 19 deals with directories and file metadata such as permissions. Opening /tmp/cookie-data and writing the contents of a specific cookie to the file looks like this: $fh = fopen('/tmp/cookie-data','w') or die("can't open file"); if (-1 == fwrite($fh,$_COOKIE['flavor'])) { die("can't write data"); } fclose($fh) or die("can't close file"); The function fopen( ) returns a file handle if its attempt to open the file is successful. If it can't open the file (because of incorrect permissions, for example), it returns false. Section 18.2 and Section 18.4 cover ways to open files. The function fwrite( ) writes the value of the flavor cookie to the file handle. It returns the number of bytes written. If it can't write the string (not enough disk space, for example), it returns -1. Last, fclose( ) closes the file handle. This is done automatically at the end of a request, but it's a good idea to explicitly close all files you open anyway. It prevents problems using the code in a command-line context and frees up system resources. It also allows you to check the return code from fclose( ). Buffered data might not be actually written to disk until fclose( ) is called, so it's here that "disk full" errors are sometimes reported. As with other processes, PHP must have the correct permissions to read from and write to a file. This is usually straightforward in a command-line context but can cause confusion when running scripts within a web server. Your web server (and consequently your PHP scripts) probably runs as a specific user dedicated to web serving (or perhaps as user nobody). For good security reasons, this user often has restricted permissions on what files it can access. If your script is having trouble with a file operation, make sure the web server's user or group — not yours — has permission to perform that file operation. Some web serving setups may run your script as you, though, in which case you need to make sure that your scripts can't accidentally read or write personal files that aren't part of your web site. Because most file-handling functions just return false on error, you have to do some additional work to find more details about that error. When the track_errors configuration directive is on, each error message is put in the global variable $php_errormsg. Including this variable as part of your error output makes debugging easier: $fh = fopen('/tmp/cookie-data','w') or die("can't open: $php_errormsg"); if (-1 == fwrite($fh,$_COOKIE['flavor'])) { die("can't write: $php_errormsg") }; fclose($fh) or die("can't close: $php_errormsg"); If you don't have permission to write to /tmp/cookie-data, the example dies with this error output: can't open: fopen("/tmp/cookie-data", "w") - Permission denied There are differences in how files are treated by Windows and by Unix. To ensure your file access code works appropriately on Unix and Windows, take care to handle line-delimiter characters and pathnames correctly. A line delimiter on Windows is two characters: ASCII 13 (carriage return) followed by ASCII 10 (linefeed or newline).
On Unix, it's just ASCII 10. The typewriter-era names for these characters explain why you can get "stair-stepped" text when printing out a Unix-delimited file. Imagine these character names as commands to the platen in a typewriter or character-at-a-time printer. A carriage return sends the platen back to the beginning of the line it's on, and a line feed advances the paper by one line. A misconfigured printer encountering a Unix-delimited file dutifully follows instructions and does a linefeed at the end of each line. This advances to the next line but doesn't move the horizontal printing position back to the left margin. The next stair-stepped line of text begins (horizontally) where the previous line left off. PHP functions that use a newline as a line-ending delimiter (for example, fgets( )) work on both Windows and Unix because a newline is the character at the end of the line on either platform. To remove any line-delimiter characters, use the PHP function rtrim( ) : $fh = fopen('/tmp/lines-of-data.txt','r') or die($php_errormsg); while($s = fgets($fh,1024)) { $s = rtrim($s); // do something with $s ... } fclose($fh) or die($php_errormsg); This function removes any trailing whitespace in the line, including ASCII 13 and ASCII 10 (as well as tab and space). If there's whitespace at the end of a line that you want to preserve, but you still want to remove carriage returns and line feeds, use an appropriate regular expression: $fh = fopen('/tmp/lines-of-data.txt','r') or die($php_errormsg); while($s = fgets($fh,1024)) { $s = preg_replace('/\r?\n$/','',$s); // do something with $s ... } fclose($fh) or die($php_errormsg); Unix and Windows also differ on the character used to separate directories in pathnames. Unix uses a slash (/), and Windows uses a backslash (\). PHP makes sorting this out easy, however, because the Windows version of PHP also understands / as a directory separator. For example, this code successfully prints the contents of C:\Alligator\Crocodile Menu.txt: $fh = fopen('c:/alligator/crocodile menu.txt','r') or die($php_errormsg); while($s = fgets($fh,1024)) { print $s; } fclose($fh) or die($php_errormsg); This piece of code also takes advantage of the fact that Windows filenames aren't case-sensitive. However, Unix filenames are. Sorting out linebreak confusion isn't only a problem in your code that reads and writes files but in your source code as well. If you have multiple people working on a project, make sure all developers configure their editors to use the same kind of linebreaks. Once you've opened a file, PHP gives you many tools to process its data. In keeping with PHP's C-like I/O interface, the two basic functions to read data from a file are fread( ) , which reads a specified number of bytes, and fgets( ), which reads a line at a time (up to a specified number of bytes.) This code handles lines up to 256 bytes long: $fh = fopen('orders.txt','r') or die($php_errormsg); while (! feof($fh)) { $s = fgets($fh,256); process_order($s); } fclose($fh) or die($php_errormsg); If orders.txt has a 300-byte line, fgets( ) returns only the first 256 bytes. The next fgets( ) returns the next 44 bytes and stops when it finds the newline. The next fgets( ) moves to the next line of the file. Examples in this chapter generally give fgets( ) a second argument of 1048576: 1 MB. This is longer than lines in most text files, but the presence of such an outlandish number should serve as a reminder to consider your maximum expected line length when using fgets(). 
Many operations on file contents, such as picking a line at random (see Section 18.11), are conceptually simpler (and require less code) if the entire file is read into a string or array. Section 18.6 provides a method for reading a file into a string, and the file( ) function puts each line of a file into an array. The tradeoff for simplicity, however, is memory consumption. This can be especially harmful when you are using PHP as a server module. Generally, when a process (such as a web server process with PHP embedded in it) allocates memory (as PHP does to read an entire file into a string or array), it can't return that memory to the operating system until it dies. This means that calling file( ) on a 1 MB file from PHP running as an Apache module increases the size of that Apache process by 1 MB until the process dies. Repeated a few times, this decreases server efficiency. There are certainly good reasons for processing an entire file at once, but be conscious of the memory-use implications when you do. Section 18.21 through Section 18.24 deal with running other programs from within a PHP program. Some program-execution operators or functions offer ways to run a program and read its output all at once (backticks) or read its last line of output (system( )). PHP can use pipes to run a program, pass it input, or read its output. Because a pipe is read with standard I/O functions (fgets( ) and fread( )), you decide how you want the input and you can do other tasks between reading chunks of input. Similarly, writing to a pipe is done with fputs( ) and fwrite( ), so you can pass input to a program in arbitrary increments. Pipes have the same permission issues as regular files. The PHP process must have execute permission on the program being opened as a pipe. If you have trouble opening a pipe, especially if PHP is running as a special web server user, make sure the user is allowed to execute the program you are opening a pipe to. Creating or Opening a Local File Problem You want to open a local file to read data from it or write data to it. Solution Use fopen( ): $fh = fopen('file.txt','r') or die("can't open file.txt: $php_errormsg"); Discussion The second argument to fopen( ) is the file mode, which controls what you can do with the file handle: 'r' opens the file for reading only, starting at the beginning; 'r+' opens it for reading and writing; 'w' opens it for writing only, truncating any existing contents (and creating the file if it doesn't exist); 'w+' opens it for reading and writing, also truncating the file; 'a' opens it for writing only, positioned at the end (creating the file if necessary); and 'a+' opens it for reading and writing, positioned at the end. Adding 'b' to any of these modes (for example, 'rb') opens the file in binary mode, which prevents linebreak translation on Windows from corrupting binary data. See Also Documentation on fopen( ) at. Creating a Temporary File Problem You need a file to temporarily hold some data. Solution Use tmpfile( ) if the file needs to last only the duration of the running script: $temp_fh = tmpfile(); // write some data to the temp file fputs($temp_fh,"The current time is ".strftime('%c')); // the file goes away when the script ends exit(1); If the file needs to last longer, generate a filename with tempnam( ), and then use fopen( ): $tempfilename = tempnam('/tmp','data-'); $temp_fh = fopen($tempfilename,'w') or die($php_errormsg); fputs($temp_fh,"The current time is ".strftime('%c')); fclose($temp_fh) or die($php_errormsg); Discussion The function tmpfile( ) creates a file with a unique name and returns a file handle. The file is removed when fclose( ) is called on that file handle, or the script ends. Alternatively, tempnam( ) generates a filename. It takes two arguments: the first is a directory, and the second is a prefix for the filename. If the directory doesn't exist or isn't writeable, tempnam( ) uses the system temporary directory — the TMPDIR environment variable in Unix or the TMP environment variable in Windows.
For example: $tempfilename = tempnam('/tmp','data-'); print "Temporary data will be stored in $tempfilename"; Temporary data will be stored in /tmp/data-GawVoL Because of the way PHP generates temporary filenames, the filename tempnam( ) returns is actually created but left empty, even if your script never explicitly opens the file. This ensures another program won't create a file with the same name between the time that you call tempnam( ) and the time you call fopen( ) with the filename. See Also Documentation on tmpfile( ) at and on tempnam( ) at. Opening a Remote File Problem You want to open a file that's accessible to you via HTTP or FTP. Solution Pass the file's URL to fopen( ) (the www.example.com address here is just a placeholder for whatever remote file you want): $fh = fopen('http://www.example.com/robots.txt','r') or die($php_errormsg); Discussion When fopen( ) is passed a filename that begins with http://, it retrieves the given page with an HTTP/1.0 GET request (although a Host: header is also passed along to deal with virtual hosts). Only the body of the reply can be accessed using the file handle, not the headers. Files can be read over HTTP or FTP, like this: $fh = fopen('http://www.example.com/robots.txt','r'); $fh = fopen('ftp://ftp.example.com/pub/robots.txt','r'); Opening remote files with fopen( ) is implemented via a PHP feature called the URL fopen wrapper. It's enabled by default but is disabled by setting allow_url_fopen to off in your php.ini or web server configuration file. If you can't open remote files with fopen( ), check your server configuration. See Also Section 11.2 through Section 11.6, which discuss retrieving URLs; documentation on fopen( ) at and on the URL fopen wrapper feature at. Reading from Standard Input Problem You want to read from standard input. Solution Use fopen( ) to open php://stdin: $fh = fopen('php://stdin','r') or die($php_errormsg); while($s = fgets($fh,1024)) { print "You typed: $s"; } Discussion Section 20.4 discusses reading data from the keyboard in a command-line context. Reading data from standard input isn't very useful in a web context, because information doesn't arrive via standard input. The bodies of HTTP POST and file-upload requests are parsed by PHP and put into special variables. They can't be read on standard input, as they can in some web server and CGI implementations. See Also Section 20.4 for reading from the keyboard in a command-line context; documentation on fopen( ) at. Reading a File into a String Problem You want to load the entire contents of a file into a variable. For example, you want to determine if the text in a file matches a regular expression. Solution Use filesize( ) to get the size of the file, and then tell fread( ) to read that many bytes: $fh = fopen('people.txt','r') or die($php_errormsg); $people = fread($fh,filesize('people.txt')); if (preg_match('/Names:.*(David|Susannah)/i',$people)) { print "people.txt matches."; } fclose($fh) or die($php_errormsg); Discussion To read a binary file (e.g., an image) on Windows, a b must be appended to the file mode: $fh = fopen('people.jpg','rb') or die($php_errormsg); $people = fread($fh,filesize('people.jpg')); fclose($fh); There are easier ways to print the entire contents of a file than by reading it into a string and then printing the string. PHP provides two functions for this. The first is fpassthru($fh) , which prints everything left on the file handle $fh and then closes it. The second, readfile($filename) , prints the entire contents of $filename. You can use readfile( ) to implement a wrapper around images that shouldn't always be displayed.
This program makes sure a requested image is less than a week old: $image_directory = '/usr/local/images'; if (preg_match('/^[a-zA-Z0-9]+\.(gif|jpeg)$/',$image,$matches) && is_readable($image_directory."/$image") && (filemtime($image_directory."/$image") >= (time() - 86400 * 7))) { header('Content-Type: image/'.$matches[1]); header('Content-Length: '.filesize($image_directory."/$image")); readfile($image_directory."/$image"); } else { error_log("Can't serve image: $image"); } The directory in which the images are stored, $image_directory, needs to be outside the web server's document root for the wrapper to be effective. Otherwise, users can just access the image files directly. You test the image for three things. First, that the filename passed in $image is just alphanumeric with an ending of either .gif or .jpeg. You need to ensure that characters such as .. or / are not in the filename; this prevents malicious users from retrieving files outside the specified directory. Second, use is_readable( ) to make sure you can read the file. Finally, get the file's modification time with filemtime( ) and make sure that time is after 86400 × 7 seconds ago. There are 86,400 seconds in a day, so 86400. See Also Documentation on filesize( ) at, fread( ) at, fpassthru( ) at, and readfile( ) at. Counting Lines, Paragraphs, or Records in a File Problem You want to count the number of lines, paragraphs, or records in a file. Solution To count lines, use fgets( ) . Because it reads a line at a time, you can count the number of times it's called before reaching the end of a file: $lines = 0; if ($fh = fopen('orders.txt','r')) { while (! feof($fh)) { if (fgets($fh,1048576)) { $lines++; } } } print $lines; To count paragraphs, increment the counter only when you read a blank line: $paragraphs = 0; if ($fh = fopen('great-american-novel.txt','r')) { while (! feof($fh)) { $s = fgets($fh,1048576); if (("\n" == $s) || ("\r\n" == $s)) { $paragraphs++; } } } print $paragraphs; To count records, increment the counter only when the line read contains just the record separator and whitespace: $records = 0; $record_separator = '--end--'; if ($fh = fopen('great-american-novel.txt','r')) { while (! feof($fh)) { $s = rtrim(fgets($fh,1048576)); if ($s == $record_separator) { $records++; } } } print $records; Discussion In the line counter, $lines is incremented only if fgets( ) returns a true value. As fgets( ) moves through the file, it returns each line it retrieves. When it reaches the last line, it returns false, so $lines doesn't get incorrectly incremented. Because EOF has been reached on the file, feof( ) returns true, and the while loop ends. This paragraph counter works fine on simple text but may produce unexpected results when presented with a long string of blank lines or a file without two consecutive linebreaks. These problems can be remedied with functions based on preg_split( ). If the file is small and can be read into memory, use the pc_split_paragraphs( ) function shown in Example 18-1. This function returns an array containing each paragraph in the file. Example 18-1. pc_split_paragraphs( ) function pc_split_paragraphs($file,$rs="\r?\n") { $text = join('',file($file)); $matches = preg_split("/(.*?$rs)(?:$rs)+/s",$text,-1, PREG_SPLIT_DELIM_CAPTURE|PREG_SPLIT_NO_EMPTY); return $matches; } The contents of the file are broken on two or more consecutive newlines and returned in the $matches array. The default record-separation regular expression, \r?\n, matches both Windows and Unix linebreaks. 
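Here is a short usage sketch for pc_split_paragraphs( ); it reuses the great-american-novel.txt example file from the counting examples above, and the printed summary is only for illustration:

$paragraphs = pc_split_paragraphs('great-american-novel.txt');
print count($paragraphs)." paragraphs found\n";
print $paragraphs[0];

Since the whole file is joined into one string before splitting, this version is only appropriate for files that comfortably fit in memory.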
If the file is too big to read into memory at once, use the pc_split_paragraphs_largefile( ) function shown in Example 18-2, which reads the file in 4K chunks. Example 18-2. pc_split_paragraphs_largefile( ) function pc_split_paragraphs_largefile($file,$rs="\r?\n") { global $php_errormsg; $unmatched_text = ''; $paragraphs = array(); $fh = fopen($file,'r') or die($php_errormsg); while(! feof($fh)) { $s = fread($fh,4096) or die($php_errormsg); $text_to_split = $unmatched_text . $s; $matches = preg_split("/(.*?$rs)(?:$rs)+/s",$text_to_split,-1, PREG_SPLIT_DELIM_CAPTURE|PREG_SPLIT_NO_EMPTY); /* if the last chunk doesn't end with two record separators, save it to prepend to the next section that gets read */ $last_match = $matches[count($matches)-1]; if (! preg_match("/$rs$rs\$/",$last_match)) { $unmatched_text = $last_match; array_pop($matches); } else { $unmatched_text = ''; } $paragraphs = array_merge($paragraphs,$matches); } /* after reading all sections, if there is a final chunk that doesn't end with the record separator, count it as a paragraph */ if ($unmatched_text) { $paragraphs[] = $unmatched_text; } return $paragraphs; } This function uses the same regular expression as pc_split_paragraphs( ). See Also Documentation on fgets( ) at, on feof( ) at, and on preg_split( ) at. Processing Every Word in a File Problem You want to do something with every word in a file. Solution Read the file a line at a time with fgets( ) and break each line into words with preg_split( ), handling each word as it is extracted. Discussion Splitting each line on whitespace is the simplest way to pull out words; if you need a stricter definition of a word, use a regular expression tailored to it (see also Section 2.6). See Also Section 13.3 discusses regular expressions to match words; Section 1.5 for breaking apart a line by words; documentation on fgets( ) at, on preg_split( ) at, and on the Perl-compatible regular expression extension at. Reading a Particular Line in a File Problem You want to read a specific line in a file; for example, you want to read the most recent guestbook entry that's been added on to the end of a guestbook file. Solution If the file fits into memory, read the file into an array and then select the appropriate array element: $lines = file('vacation-hotspots.txt'); print $lines[2]; Discussion Because array indexes start at 0, $lines[2] refers to the third line of the file. If the file is too big to read into an array, read it line by line and keep track of which line you're on: $line_counter = 0; $desired_line = 29; $fh = fopen('vacation-hotspots.txt','r') or die($php_errormsg); while ((! feof($fh)) && ($line_counter <= $desired_line)) { if ($s = fgets($fh,1048576)) { $line_counter++; } } fclose($fh) or die($php_errormsg); print $s; Setting $desired_line = 29 prints the 30th line of the file, to be consistent with the code in the Solution. To print the 29th line of the file, change the while loop line to: while ((! feof($fh)) && ($line_counter < $desired_line)) { See Also Documentation on fgets( ) at and feof( ) at. Processing a File Backward by Line or Paragraph Problem You want to do something with each line of a file, starting at the end. For example, it's easy to add new guestbook entries to the end of a file by opening in append mode, but you want to display the entries with the most recent first, so you need to process the file starting at the end. Solution If the file fits in memory, use file( ) to read each line in the file into an array and then reverse the array: $lines = file('guestbook.txt'); $lines = array_reverse($lines); Discussion You can also iterate through an unreversed array of lines starting at the end.
Here's how to print out the last 10 lines in a file, last line first: $lines = file('guestbook.txt'); for ($i = 0, $j = count($lines); $i <= 10; $i++) { print $lines[$j - $i - 1]; } See Also Documentation on file( ) at and array_reverse( ) at. Picking a Random Line from a File Problem You want to pick a line at random from a file; for example, you want to display a selection from a file of sayings. Solution Use the pc_randomint( ) function shown in Example 18-3, which spreads the selection odds evenly over all lines in a file. Example 18-3. pc_randomint( ) function pc_randomint($max = 1) { $m = 1000000; return ((mt_rand(1,$m * $max)-1)/$m); } Here's an example that uses the pc_randomint( ) function: $line_number = 0; $fh = fopen('sayings.txt','r') or die($php_errormsg); while (! feof($fh)) { if ($s = fgets($fh,1048576)) { $line_number++; if (pc_randomint($line_number) < 1) { $line = $s; } } } fclose($fh) or die($php_errormsg); Discussion The pc_randomint( ) function computes a random decimal number between and $max, including 0 but excluding $max. As each line is read, a line counter is incremented, and pc_randomint( ) generates a random number between 0 and $line_number. If the number is less than 1, the current line is selected as the randomly chosen line. After all lines have been read, the last line that was selected as the randomly chosen line is left in $line. This algorithm neatly ensures that each line in an n line file has a 1/ n chance of being chosen without having to store all n lines into memory. See Also Documentation on mt_rand( ) at. Randomizing All Lines in a File Problem You want to randomly reorder all lines in a file. You have a file of funny quotes, for example, and you want to pick out one at random. Solution Read all the lines in the file into an array with file( ) , and then shuffle the elements of the array: $lines = file('quotes-of-the-day.txt'); $lines = pc_array_shuffle($lines); Discussion The pc_array_shuffle( ) function from Section 4.21 is more random than PHP's built-in shuffle( ) function, because it uses the Fisher-Yates shuffle, which equally distributes the elements throughout the array. See Also Section 4.20 for pc_array_shuffle( ); documentation on shuffle( ) at. Processing Variable Length Text Fields Problem You want to read delimited text fields from a file. You might, for example, have a database program that prints records one per line, with tabs between each field in the record, and you want to parse this data into an array. Solution Read in each line and then split the fields based on their delimiter: $delim = '|'; $fh = fopen('books.txt','r') or die("can't open: $php_errormsg"); while (! feof($fh)) { $s = rtrim(fgets($fh,1024)); $fields = explode($delim,$s); // ... do something with the data ... } fclose($fh) or die("can't close: $php_errormsg"); Discussion To parse the following data in books.txt: Elmer Gantry|Sinclair Lewis|1927 The Scarlatti Inheritance|Robert Ludlum|1971 The Parsifal Mosaic|Robert Ludlum|1982 Sophie's Choice|William Styron|1979 Process each record like this: $fh = fopen('books.txt','r') or die("can't open: $php_errormsg"); while (! feof($fh)) { $s = rtrim(fgets($fh,1024)); list($title,$author,$publication_year) = explode('|',$s); // ... do something with the data ... } fclose($fh) or die("can't close: $php_errormsg"); The line length argument to fgets( ) needs to be at least as long as the longest record, so that a record doesn't get truncated. 
Calling rtrim( ) is necessary because fgets( ) includes the trailing whitespace in the line it reads. Without rtrim( ), each $publication_year would have a newline at its end. See Also Section 1.12 discusses ways to break apart strings into pieces; Section 1.10 and Section 1.11 cover parsing comma-separated and fixed-width data; documentation on explode( ) at and rtrim( ) at. Reading Configuration Files Problem You want to use configuration files to initialize settings in your programs. Solution Use parse_ini_file( ): $config = parse_ini_file('/etc/myapp.ini'); Discussion The parse_ini_file( ) function reads configuration files written in the same style as PHP's own php.ini: one name=value setting per line, with lines that begin with ; treated as comments. For example, suppose /etc/myapp.ini contains: ; physical features eyes=brown hair=brown glasses=yes ; other features name=Susannah likes=monkeys,ice cream,reading The array it returns is: Array ( [eyes] => brown [hair] => brown [glasses] => 1 [name] => Susannah [likes] => monkeys,ice cream,reading ) To group settings, put section names in square brackets in the file and pass a true second argument to parse_ini_file( ), which makes each section its own sub-array. For example, the same settings can be split into two sections: [physical] eyes=brown hair=brown glasses=yes [other] name=Susannah likes=monkeys,ice cream,reading If this file is in /etc/myapp.ini, then: $conf = parse_ini_file('/etc/myapp.ini',1); Puts this array in $conf: Array ( [physical] => Array ( [eyes] => brown [hair] => brown [glasses] => 1 ) [other] => Array ( [name] => Susannah [likes] => monkeys,ice cream,reading ) ) Your configuration file can also be a valid PHP file that you load with require instead of parse_ini_file( ). If the file config.php contains: <?php // physical features $eyes = 'brown'; $hair = 'brown'; $glasses = 'yes'; // other features $name = 'Susannah'; $likes = array('monkeys','ice cream','reading'); ?> You can set the variables $eyes, $hair, $glasses, $name, and $likes with: require 'config.php'; The configuration file loaded by require needs to be valid PHP — including the <?php start tag and the ?> end tag. The variables named in config.php are set explicitly, not inside an array, as in parse_ini_file( ). For simple configuration files, this technique may not be worth the extra attention to syntax, but it is useful for embedding logic in the configuration file: <?php $time_of_day = (date('a') == 'am') ? 'early' : 'late'; ?> The ability to embed logic in configuration files is a good reason to make the files PHP code, but it is helpful also to have all the variables set in the configuration file inside an array. Upcoming versions of PHP will have a feature called namespaces, which is the ability to group variables hierarchically in different bunches; you can have a variable called $hair in two different namespaces with two different values. With namespaces, all the values in a configuration file can be loaded into the Config namespace so they don't interfere with other variables. See Also Documentation on parse_ini_file( ) at; information about namespaces and other upcoming PHP language features is available at. Reading from or Writing to a Specific Location in a File Problem You want to read from (or write to) a specific place in a file. For example, you want to replace the third record in a file of 80-byte records, so you have to write starting at the 161st byte.
Solution Use fseek( ) to move to a specific number of bytes after the beginning of the file, before the end of the file, or from the current position in the file: fseek($fh,26); // 26 bytes after the beginning of the file fseek($fh,26,SEEK_SET); // 26 bytes after the beginning of the file fseek($fh,-39,SEEK_END); // 39 bytes before the end of the file fseek($fh,10,SEEK_CUR); // 10 bytes ahead of the current position fseek($fh,0); // beginning of the file The rewind( ) function moves to the beginning of a file: rewind($fh); // the same as fseek($fh,0) Discussion The function fseek( ) returns 0 if it can move to the specified position, otherwise it returns -1. Seeking beyond the end of the file isn't an error for fseek( ). Contrastingly, rewind( ) returns 0 if it encounters an error. You can use fseek( ) only with local files, not HTTP or FTP files opened with fopen( ). If you pass a file handle of a remote file to fseek( ), it throws an E_NOTICE error. To get the current file position, use ftell( ) : if (0 === ftell($fh)) { print "At the beginning of the file."; } Because ftell( ) returns false on error, you need to use the === operator to make sure that its return value is really the integer 0. See Also Documentation on fseek( ) at, ftell( ) at, and rewind( ) at. Removing the Last Line of a File Problem You want to remove the last line of a file; for example, someone's added a comment to the end of your guestbook. You don't like it, so you want to get rid of it. Solution If the file is small, you can read it into an array with file( ) and then remove the last element of the array: $lines = file('employees.txt'); array_pop($lines); $file = join('',$lines); Discussion If the file is large, reading it into an array requires too much memory. Instead, use this code, which seeks to the end of the file and works backwards, stopping when it finds a newline: $fh = fopen('employees.txt','r') or die("can't open: $php_errormsg"); $linebreak = $beginning_of_file = 0; $gap = 80; $filesize = filesize('employees.txt'); fseek($fh,0,SEEK_END); while (! ($linebreak || $beginning_of_file)) { // save where we are in the file $pos = ftell($fh); /* move back $gap chars, use rewind() to go to the beginning if * we're less than $gap characters into the file */ if ($pos < $gap) { rewind($fh); } else { fseek($fh,-$gap,SEEK_CUR); } // read the $gap chars we just seeked back over $s = fread($fh,$gap) or die($php_errormsg); /* if we read to the end of the file, remove the last character * since if it's a newline, we should ignore it */ if ($pos + $gap >= $filesize) { $s = substr_replace($s,'',-1); } // move back to where we were before we read $gap chars into $s if ($pos < $gap) { rewind($fh); } else { fseek($fh,-$gap,SEEK_CUR); } // is there a linebreak in $s ? if (is_integer($lb = strrpos($s,"\n"))) { $linebreak = 1; // the last line of the file begins right after the linebreak $line_end = ftell($fh) + $lb + 1; } // break out of the loop if we're at the beginning of the file if (ftell($fh) == 0) { $beginning_of_file = 1; } } if ($linebreak) { rewind($fh); $file_without_last_line = fread($fh,$line_end) or die($php_errormsg); } fclose($fh) or die("can't close: $php_errormsg"); This code starts at the end of the file and moves backwards in $gap character chunks looking for a newline. If it finds one, it knows the last line of the file starts right after that newline. This position is saved in $line_end. 
After the while loop, if $linebreak is set, the contents of the file from the beginning to $line_end are read into $file_without_last_line. The last character of the file is ignored because if it's a newline, it doesn't indicate the start of the last line of the file. Consider the 10-character file whose contents are asparagus\n. It has only one line, consisting of the word asparagus and a newline character. This file without its last line is empty, which the previous code correctly produces. If it starts scanning with the last character, it sees the newline and exits its scanning loop, incorrectly printing out asparagus without the newline. See Also Section 18.15 discusses fseek( ) and rewind( ) in more detail; documentation on array_pop( ) at, fseek( ) at, and rewind( ) at. Modifying a File in Place Without a Temporary File Problem You want to change a file without using a temporary file to hold the changes. Solution Read the file into memory, make the changes, and rewrite the file. Open the file with mode r+ (rb+, if necessary, on Windows) and adjust its length with ftruncate( ) after writing out changes: // open the file for reading and writing $fh = fopen('pickles.txt','r+') or die($php_errormsg); // read the entire file into $s $s = fread($fh,filesize('pickles.txt')) or die($php_errormsg); // ... modify $s ... // seek back to the beginning of the file and write the new $s rewind($fh); if (-1 == fwrite($fh,$s)) { die($php_errormsg); } // adjust the file's length to just what's been written ftruncate($fh,ftell($fh)) or die($php_errormsg); // close the file fclose($fh) or die($php_errormsg); Discussion The following code turns text emphasized with asterisks or slashes into text with HTML <b> or <i> tags: $fh = fopen('message.txt','r+') or die($php_errormsg); // read the entire file into $s $s = fread($fh,filesize('message.txt')) or die($php_errormsg); // convert *word* to <b>word</b> $s = preg_replace('@\*(.*?)\*@i','<b>$1</b>',$s); // convert /word/ to <i>word</i> $s = preg_replace('@/(.*?)/@i','<i>$1</i>',$s); rewind($fh); if (-1 == fwrite($fh,$s)) { die($php_errormsg); } ftruncate($fh,ftell($fh)) or die($php_errormsg); fclose($fh) or die($php_errormsg); Because adding HTML tags makes the file grow, the entire file has to be read into memory and then processed. If the changes to a file make each line shrink (or stay the same size), the file can be processed line by line, saving memory. 
This example converts text marked with <b> and <i> to text marked with asterisks and slashes: $fh = fopen('message.txt','r+') or die($php_errormsg); // figure out how many bytes to read $bytes_to_read = filesize('message.txt'); // initialize variables that hold file positions $next_read = $last_write = 0; // keep going while there are still bytes to read while ($next_read < $bytes_to_read) { /* move to the position of the next read, read a line, and save * the position of the next read */ fseek($fh,$next_read); $s = fgets($fh,1048576) or die($php_errormsg); $next_read = ftell($fh); // convert <b>word</b> to *word* $s = preg_replace('@<b[^>]*>(.*?)</b>@i','*$1*',$s); // convert <i>word</i> to /word/ $s = preg_replace('@<i[^>]*>(.*?)</i>@i','/$1/',$s); /* move to the position where the last write ended, write the * converted line, and save the position for the next write */ fseek($fh,$last_write); if (-1 == fwrite($fh,$s)) { die($php_errormsg); } $last_write = ftell($fh); } // truncate the file length to what we've already written ftruncate($fh,$last_write) or die($php_errormsg); // close the file fclose($fh) or die($php_errormsg); See Also Section 11.10 and Section 11.11 for additional information on converting between ASCII and HTML; Section 18.15 discusses fseek( ) and rewind( ) in more detail; documentation on fseek( ) at, rewind( ) at, and ftruncate( ) at. Flushing Output to a File Problem You want to force all buffered data to be written to a filehandle. Solution Use fflush( ) : fwrite($fh,'There are twelve pumpkins in my house.'); fflush($fh); This ensures that "There are twelve pumpkins in my house." is written to $fh. Discussion To be more efficient, system I/O libraries generally don't write something to a file when you tell them to. Instead, they batch the writes together in a buffer and save all of them to disk at the same time. Using fflush( ) forces anything pending in the write buffer to be actually written to disk. Flushing output can be particularly helpful when generating an access or activity log. Calling fflush( ) after each message to log file makes sure that any person or program monitoring the log file sees the message as soon as possible. See Also Documentation on fflush( ) at. Writing to Standard Output Problem You want to write to standard output. Solution Use echo or print: print "Where did my pastrami sandwich go?"; echo "It went into my stomach."; Discussion While print( ) is a function, echo is a language construct. This means that print( ) returns a value, while echo doesn't. You can include print( ) but not echo in larger expressions: // this is OK (12 == $status) ? print 'Status is good' : error_log('Problem with status!'); // this gives a parse error (12 == $status) ? echo 'Status is good' : error_log('Problem with status!'); Use php://stdout as the filename if you're using the file functions: $fh = fopen('php://stdout','w') or die($php_errormsg); Writing to standard output via a file handle instead of simply with print( ) or echo is useful if you need to abstract where your output goes, or if you need to print to standard output at the same time as writing to a file. See Section 18.20 for details. You can also write to standard error by opening php://stderr: $fh = fopen('php://stderr','w'); See Also Section 18.20 for writing to many filehandles simultaneously; documentation on echo at and on print( ) at. 
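To illustrate abstracting where output goes, here is a minimal sketch; the report( ) function and the log filename are made-up names for illustration, not part of PHP or of this chapter's recipes:

function report($fh,$message) {
    fwrite($fh,$message."\n");
}

$out = fopen('php://stdout','w') or die($php_errormsg);
report($out,'Job started');

// the same reporting code can write to a log file instead
$out = fopen('/tmp/job.log','a') or die($php_errormsg);
report($out,'Job started');

Because report( ) sees only a file handle, switching between the screen, a log file, or standard error requires no changes to the code that produces the messages.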
Writing to Many Filehandles Simultaneously Problem You want to send output to more than one file handle; for example, you want to log messages to the screen and to a file. Solution Wrap your output with a loop that iterates through your filehandles, as shown in Example 18-4. Example 18-4. pc_multi_fwrite( ) function pc_multi_fwrite($fhs,$s,$length=NULL) { if (is_array($fhs)) { if (is_null($length)) { foreach($fhs as $fh) { fwrite($fh,$s); } } else { foreach($fhs as $fh) { fwrite($fh,$s,$length); } } } } Here's an example: $fhs['file'] = fopen('log.txt','w') or die($php_errormsg); $fhs['screen'] = fopen('php://stdout','w') or die($php_errormsg); pc_multi_fwrite($fhs,'The space shuttle has landed.'); Discussion If you don't want to pass a length argument to fwrite( ) (or you always want to), you can eliminate that check from your pc_multi_fwrite( ). This version doesn't accept a $length argument: function pc_multi_fwrite($fhs,$s) { if (is_array($fhs)) { foreach($fhs as $fh) { fwrite($fh,$s); } } } See Also Documentation on fwrite( ) at. Escaping Shell Metacharacters Problem You need to incorporate external data in a command line, but you want to escape out special characters so nothing unexpected happens; for example, you want to pass user input as an argument to a program. Solution Use escapeshellarg( ) to handle arguments: system('ls -al '.escapeshellarg($directory)); Use escapeshellcmd( ) to handle program names: system(escapeshellcmd($ls_program).' -al'); Discussion The command line is a dangerous place for unescaped characters. Never pass unmodified user input to one of PHP's shell-execution functions. Always escape the appropriate characters in the command and the arguments. This is crucial. It is unusual to execute command lines that are coming from web forms and not something we recommend lightly. However, sometimes you need to run an external program, so escaping commands and arguments is useful. escapeshellarg( ) surrounds arguments with single quotes (and escapes any existing single quotes). To print the process status for a particular process: system('/bin/ps '.escapeshellarg($process_id)); Using escapeshellarg( ) ensures that the right process is displayed even if it has an unexpected character (e.g., a space) in it. It also prevents unintended commands from being run. If $process_id contains: 1; rm -rf / then: system("/bin/ps $process_id") not only displays the status of process 1, but it also executes the command rm -rf / . However: system('/bin/ps '.escapeshellarg($process_id)) runs the command /bin/ps 1; rm -rf, which produces an error because "1-semicolon-space-rm-space-hyphen-rf" isn't a valid process ID. Similarly, escapeshellcmd( ) prevents unintended command lines from execution. This code runs a different program depending on the value of $which_program: system("/usr/local/bin/formatter-$which_program"); For example, if $which_program is pdf 12, the script runs /usr/local/bin/formatter-pdf with an argument of 12. But, if $which_program is pdf 12; 56, the script runs /usr/local/bin/formatter-pdf with an argument of 12, but then also runs the program 56, which is an error. To successfully pass the arguments to formatter-pdf , you need escapeshellcmd( ): system(escapeshellcmd("/usr/local/bin/formatter-$which_program")); This runs /usr/local/bin/formatter-pdf and passes it two arguments: 12; and 56. See Also Documentation on system( ) at, escapeshellarg( ) at, and escapeshellcmd( ) at. 
Passing Input to a Program Problem You want to pass input to an external program run from inside a PHP script. You might, for example, use a database that requires you to run an external program to index text and want to pass text to that program. Solution Open a pipe to the program with popen( ) , write to the pipe with fputs( ) or fwrite( ), then close the pipe with pclose( ) : $ph = popen('program arg1 arg2','w') or die($php_errormsg); if (-1 == fputs($ph,"first line of input\n")) { die($php_errormsg); } if (-1 == fputs($ph,"second line of input\n")) { die($php_errormsg); } pclose($ph) or die($php_errormsg); Discussion This example uses popen( ) to call the nsupdate command, which submits Dynamic DNS Update requests to name servers: $ph = popen('/usr/bin/nsupdate -k keyfile') or die($php_errormsg); if (-1 == fputs($ph,"update delete test.example.com A\n")) { die($php_errormsg); } if (-1 == fputs($ph,"update add test.example.com 5 A 192.168.1.1\n")) { die($php_errormsg); } pclose($ph) or die($php_errormsg); Two commands are sent to nsupdate via popen( ). The first deletes the test.example.com A record, and the second adds a new A record for test.example.com with the address 192.168.1.1. See Also Documentation on popen( ) at and pclose( ) at; Dynamic DNS is described in RFC 2136 at. Reading Standard Output from a Program Problem You want to read the output from a program; for example, you want the output of a system utility such as route(8) that provides network information. Solution To read the entire contents of a program's output, use the backtick (') operator: $routing_table = `/sbin/route`; To read the output incrementally, open a pipe with popen( ): $ph = popen('/sbin/route','r') or die($php_errormsg); while (! feof($ph)) { $s = fgets($ph,1048576) or die($php_errormsg); } pclose($ph) or die($php_errormsg); Discussion The backtick operator (which is not available in safe mode), executes a program and returns all its output as a single string. On a Linux system with 448 MB of RAM, this command: $s = `/usr/bin/free`; puts this multiline string in $s: total used free shared buffers cached Mem: 448620 446384 2236 0 68568 163040 -/+ buffers/cache: 214776 233844 Swap: 136512 0 136512: // print table header print<<<_HTML_ <table> <tr> <td>user</td><td>login port</td><td>login from</td><td>login time</td> <td>time spent logged in</td> </tr> _HTML_; // open the pipe to /usr/bin/last $ph = popen('/usr/bin/last','r') or die($php_errormsg); while (! feof($ph)) { $line = fgets($ph,80) or die($php_errormsg); // don't process blank lines or the info line at the end if (trim($line) && (! preg_match('/^wtmp begins/',$line))) { $user = trim(substr($line,0,8)); $port = trim(substr($line,9,12)); $host = trim(substr($line,22,16)); $date = trim(substr($line,38,25)); $elapsed = trim(substr($line,63,10),' ()'); if ('logged in' == $elapsed) { $elapsed = 'still logged in'; $date = substr_replace($date,'',-5); } print "<tr><td>$user</td><td>$port</td><td>$host</td>"; print "<td>$date</td><td>$elapsed</td></tr>\n"; } } pclose($ph) or die($php_errormsg); print '</table>'; See Also Documentation on popen( ) at, pclose( ) at, and the backtick operator at; safe mode is documented at. Reading Standard Error from a Program Problem You want to read the error output from a program; for example, you want to capture the system calls displayed by strace(1) . Solution Redirect standard error to standard output by adding 2>&1 to the command line passed to popen( ) . 
Read standard output by opening the pipe in r mode: $ph = popen('strace ls 2>&1','r') or die($php_errormsg); while (!feof($ph)) { $s = fgets($ph,1048576) or die($php_errormsg); } pclose($ph) or die($php_errormsg); Discussion. This is done by redirecting it to /dev/null on Unix and NUL on Windows: // Unix: just read standard error $ph = popen('strace ls 2>&1 1>/dev/null','r') or die($php_errormsg); // Windows: just read standard error $ph = popen('ipxroute.exe 2>&1 1>NUL','r') or die($php_errormsg); See Also Documentation on popen( ) at; see your popen(3) manpage for details about the shell your system uses with popen( ); for information about shell redirection, see the Redirection section of the sh(1) manpage on Unix systems; on Windows, see the entry on redirection in the Command Reference section of your system help. Locking a File Problem You want to have exclusive access to a file to prevent it from being changed while you read or update it. If, for example, you are saving guestbook information in a file, two users should be able to add guestbook entries at the same time without clobbering each other's entries. Solution Use flock( ) to provide advisory locking: $fh = fopen('guestbook.txt','a') or die($php_errormsg); flock($fh,LOCK_EX) or die($php_errormsg); fwrite($fh,$_REQUEST['guestbook_entry']) or die($php_errormsg); fflush($fh) or die($php_errormsg); flock($fh,LOCK_UN) or die($php_errormsg); fclose($fh) or die($php_errormsg); Discussion The file locking flock( ) provides is called advisory file locking because flock( ) doesn't actually prevent other processes from opening a locked file, it just provides a way for processes to voluntarily cooperate on file access. All programs that need to access files being locked with flock( ) need to set and release locks to make the file locking effective. There are two kinds of locks you can set with flock( ): exclusive locks and shared locks. An exclusive lock , specified by LOCK_EX as the second argument to flock( ), can be held only by one process at one time for a particular file. A shared lock , specified by LOCK_SH, can be held by more than one process at one time for a particular file. Before writing to a file, you should get an exclusive lock. Before reading from a file, you should get a shared lock. To unlock a file, call flock( ) with LOCK_UN as the second argument. It's important to flush any buffered data to be written to the file with fflush( ) before you unlock the file. Other processes shouldn't be able to get a lock until that data is written. By default, flock( ) blocks until it can obtain a lock. To tell it not to block, add LOCK_NB to the second argument: $fh = fopen('guestbook.txt','a') or die($php_errormsg); $tries = 3; while ($tries > 0) { $locked = flock($fh,LOCK_EX | LOCK_NB); if (! $locked) { sleep(5); $tries--; } else { // don't go through the loop again $tries = 0; } } if ($locked) { fwrite($fh,$_REQUEST['guestbook_entry']) or die($php_errormsg); fflush($fh) or die($php_errormsg); flock($fh,LOCK_UN) or die($php_errormsg); fclose($fh) or die($php_errormsg); } else { print "Can't get lock."; } When the lock is nonblocking, flock( ) returns right away even if it couldn't get a lock. The previous example tries three times to get a lock on guestbook.txt, sleeping five seconds between each try. Locking with flock( ) doesn't work in all circumstances, such as on some NFS implementations. Also, flock( ) isn't supported on Windows 95, 98, or ME. 
To simulate file locking in these cases, use a directory as a exclusive lock indicator. This is a separate empty directory whose presence indicates that the data file is locked. Before opening a data file, create a lock directory and then delete the lock directory when you're finished working with the data file. Otherwise, the file access code is the same, as shown here: $fh = fopen('guestbook.txt','a') or die($php_errormsg); // loop until we can successfully make the lock directory $locked = 0; while (! $locked) { if (@mkdir('guestbook.txt.lock',0777)) { $locked = 1; } else { sleep(1); } } if (-1 == fwrite($fh,$_REQUEST['guestbook_entry'])) { rmdir('guestbook.txt.lock'); die($php_errormsg); } if (! fclose($fh)) { rmdir('guestbook.txt.lock'); die($php_errormsg); } rmdir('guestbook.txt.lock') or die($php_errormsg); A directory is used instead of a file to indicate a lock because the mkdir( ) function fails to create a directory if it already exists. This gives you a way, in one operation, to check if the lock indicator exists and create it if it doesn't. Any error trapping after the directory is created, however, needs to clean up by removing the directory before exiting. If the directory is left in place, no future processes can get a lock by creating the directory. If you use a file as a lock indicator, the code to create it looks like: $locked = 0; while (! $locked) { if (! file_exists('guestbook.txt.lock')) { touch('guestbook.txt.lock'); $locked = 1; } else { sleep(1); } } This might fail under heavy load because you check for the lock's existence with file_exists( ) and then create the lock with touch( ) . After one process calls file_exists( ), another might call touch( ) before the first calls touch( ). Both processes would then think they've got exclusive access to the file when neither does. With mkdir( ) there's no gap between the checking for existence and creation, so the process that makes the directory is ensured exclusive access. See Also Documentation on flock( ) at. Reading and Writing Compressed Files Problem You want to read or write compressed files. Solution Use PHP's zlib extension to read or write gzip'ed files. To read a compressed file: $zh = gzopen('file.gz','r') or die("can't open: $php_errormsg"); while ($line = gzgets($zh,1024)) { // $line is the next line of uncompressed data, up to 1024 bytes } gzclose($zh) or die("can't close: $php_errormsg"); Here's how to write a compressed file: $zh = gzopen('file.gz','w') or die("can't open: $php_errormsg"); if (-1 == gzwrite($zh,$s)) { die("can't write: $php_errormsg"); } gzclose($zh) or die("can't close: $php_errormsg"); Discussion The zlib extension contains versions of many file-access functions, such as fopen( ), fread( ), and fwrite( ) (called gzopen( ) , gzread( ), gzwrite( ), etc.) that transparently compress data when writing and uncompress data when reading. The compression algorithm that zlib uses is compatible with the gzip and gunzip utilities. For example, gzgets($zp,1024) works like fgets($fh,1024). It reads up to 1023 bytes, stopping earlier if it reaches EOF or a newline. For gzgets( ), this means 1023 uncompressed bytes. However, gzseek( ) works differently than fseek( ). It only supports seeking a specified number of bytes from the beginning of the file stream (the SEEK_SET argument to fseek( )). Seeking forward (from the current position) is only supported in files opened for writing (the file is padded with a sequence of compressed zeroes). 
Seeking backwards is supported in files opened for reading, but it is very slow. The zlib extension also has some functions to create compressed strings. The function gzencode( ) compresses a string and gives it the correct headers and formatting to be compatible with gunzip . Here's a simple gzip program: $in_file = $_SERVER['argv'][1]; $out_file = $_SERVER['argv'][1].'.gz'; $ifh = fopen($in_file,'rb') or die("can't open $in_file: $php_errormsg"); $ofh = fopen($out_file,'wb') or die("can't open $out_file: $php_errormsg"); $encoded = gzencode(fread($ifh,filesize($in_file))) or die("can't encode data: $php_errormsg"); if (-1 == fwrite($ofh,$encoded)) { die("can't write: $php_errormsg"); } fclose($ofh) or die("can't close $out_file: $php_errormsg"); fclose($ifh) or die("can't close $in_file: $php_errormsg"); The guts of this program are the lines: $encoded = gzencode(fread($ifh,filesize($in_file))) or die("can't encode data: $php_errormsg); if (-1 == fwrite($ofh,$encoded)) { die("can't write: $php_errormsg"); } The compressed contents of $in_file are stored in $encoded and then written to $out_file with fwrite( ). You can pass a second argument to gzencode( ) that indicates compression level. Set no compression with 0 and maximum compression with 9. The default level is 1. To adjust the simple gzip program for maximum compression, the encoding line becomes: $encoded = gzencode(fread($ifh,filesize($in_file)),9) or die("can't encode data: $php_errormsg); You can also compress and uncompress strings without the gzip-compatibility headers by using gzcompress( ) and gzuncompress( ). See Also Section 18.27 for a program that extracts files from a ZIP archive; documentation on the zlib extension at; you can download zlib at; the zlib algorithm is detailed in RFCs 1950 () and 1951 (). Program: Unzip The unzip.php program, shown in Example 18-5, extracts files from a ZIP archive. It uses the pc_mkdir_parents( ) function which is defined in Section 19.11. The program also requires PHP's zip extension to be installed. You can find documentation on the zip extension at. This program takes a few arguments on the command line. The first is the name of the ZIP archive it should unzip. By default, it unzips all files in the archive. If additional command-line arguments are supplied, it only unzips files whose name matches any of those arguments. The full path of the file inside the ZIP archive must be given. If turtles.html is in the ZIP archive inside the animals directory, unzip.php must be passed animals/turtles.html, not just turtles.html, to unzip the file. Directories are stored as 0-byte files inside ZIP archives, so unzip.php doesn't try to create them. Instead, before it creates any other file, it uses pc_mkdir_parents( ) to create all directories that are parents of that file, if necessary. For example, say unzip.php sees these entries in the ZIP archive: animals (0 bytes) animals/frogs/ribbit.html (2123 bytes) animals/turtles.html (1232 bytes) It ignores animals because it is 0 bytes long. Then it calls pc_mkdir_parents( ) on animals/frogs, creating both animals and animals/frogs, and writes ribbit.html into animals/frogs. Since animals already exists when it reaches animals/turtles.html, it writes out turtles.html without creating any additional directories. Example 18-5. 
unzip.php // the first argument is the zip file $in_file = $_SERVER['argv'][1]; // any other arguments are specific files in the archive to unzip if ($_SERVER['argc'] > 2) { $all_files = 0; for ($i = 2; $i < $_SERVER['argc']; $i++) { $out_files[$_SERVER['argv'][$i]] = true; } } else { // if no other files are specified, unzip all files $all_files = true; } $z = zip_open($in_file) or die("can't open $in_file: $php_errormsg"); while ($entry = zip_read($z)) { $entry_name = zip_entry_name($entry); // check if all files should be unzipped, or the name of // this file is on the list of specific files to unzip if ($all_files || $out_files[$entry_name]) { // only proceed if the file is not 0 bytes long if (zip_entry_filesize($entry)) { $dir = dirname($entry_name); // make all necessary directories in the file's path if (! is_dir($dir)) { pc_mkdir_parents($dir); } $file = basename($entry_name); if (zip_entry_open($z,$entry)) { if ($fh = fopen($dir.'/'.$file,'w')) { // write the entire file fwrite($fh, zip_entry_read($entry,zip_entry_filesize($entry))) or error_log("can't write: $php_errormsg"); fclose($fh) or error_log("can't close: $php_errormsg"); } else { error_log("can't open $dir/$file: $php_errormsg"); } zip_entry_close($entry); } else { error_log("can't open entry $entry_name: $php_errormsg"); } } } } See Also Section 18.26 for reading and writing zlib compressed files; Section 19.11 for the pc_mkdir_parents( ) function; documentation on the zip extension at . Notes - ↑ When switching between standard time and daylight saving time, there are not 86,400 seconds in a day. See Section 3.11 for details.
http://commons.oreilly.com/wiki/index.php/PHP_Cookbook/Files
A glimpse into R counterculture The statistical computing languages R and Python offer similar features. The decision comes down to contrasting philosophies. Back in 2009, Anne Milley of SAS dismissed the increasing significance of the R language (whose rivals include SAS, Python, and, more recently, Julia) in a New York Times article. She said: "We have customers who build engines for aircraft. I am happy they are not using freeware when I get on a jet." What about Python? This begs the question: "What about Python?" Indeed, Python is also a popular open-source language used for data analytics. And if we have Python, why should we care about R? This can no longer be answered by appealing to functionality; Python and R have been copying each other's functionalities for years. For example, the R graphics library ggplot2 has been ported to Python; there are implementations of Jupyter notebooks with support for R; and the DataFrame class in Python's pandas library has an uncanny conceptual similarity to the data.frame class in base R. Accordingly, it is now far less common for a data scientist to make the choice between R and Python on account of differing functionality. There are exceptions to this rule, such as (in Python's favor) the full-stack capabilities of Python and (in R's favor) Shiny, an API to HTML and JavaScript that is implemented as an R library, allowing for seamless integration between web app development and R's capabilities. Instead, the "What about Python?" question is best answered by clarifying the contrasting design philosophies between R and Python, then choosing which one most closely aligns with your personal style. The largest conceptual difference between the two languages is Python's preference for having only one obvious way to do something (a rule in the Python Philosophy), versus R's belief in providing limitless possibilities to programmers and allowing them to choose the approach they desire. There is certainly no analogue in the R community to the use of the word "Pythonic" in the Python community. R believes in giving choice to programmers rather than advocating regimented approaches. While this is certainly an issue of personal taste, I think it makes R more closely aligned than Python to the values upheld by the open source community. Three reasons to choose R At the end of the day, programmers should choose the language they feel is most comfortable, provided its utility meets their needs. I like that R syntax is very close to the way I think, which makes it very comfortable for me to use. Consider these three simple, but illustrative, examples. - R indexes from 1, rather than the usual 0. I have been surprised by the severity of reactions to this; one of my colleagues even prefers Python over R for this very reason. But the point of a programming language is to be a middleman between our thoughts and 1s and 0s. If a language is a more effective "middleman" (for example, counting from 1, the same way we do), then what's wrong with that? I'm generally a fan of following convention, except when there's a good enough reason not to. One added benefit of R's approach to indexing is that you can remove elements from a vector by subsetting with negative indices (which requires the language to index from something greater than zero). For example:
For example:> x = 1:5 > print(x) [1] 1 2 3 4 5 > x = x[-3] > print(x) [1] 1 2 4 5 - Base R has four different assignment operators, each with a different ranking in the order of operations. The following four statements all produce the same effect:assign('x', sqrt(pi)) x = sqrt(pi) x <- sqrt(pi) sqrt(pi) -> x assign()function, can explicitly specify in which environment/namespace to store the new variable. Moreover, R has the super-assignment operators <<-and ->>(which parallel leftward and rightward assignment, respectively) that allow a variable to be stored globally, even deep within nested functions or structures. (This can also be accomplished through the assign()function.) - I think R beats every other language when it comes to ease of implementing list comprehension, even though this is typically touted as a Python selling point. One of several list comprehension methods in R is the "apply" family of functions, which provide a feature-rich way to apply functions across vectors or lists (i.e., R's equivalent of C structs). There is also a simpler approach based on R's convention of "recycling" which dictates that even when a function is declared to have only one element of input, an entire vector can be passed to the function anyway, and the function will be evaluated at each of the vector's elements. For example, the factorial() function is defined to take only one element of input, but you can nonetheless use it as:> factorial(1:9) [1] 1 2 6 24 120 720 5040 40320 362880 Although the "apply" functions were originally considered a nuance in R, they inadvertently encouraged R programmers to set up their computations in embarrassingly parallel ways. Consequently, the R community naturally developed libraries for parallel and GPU computing. In these and many other ways, R's embrace of the open source philosophy has made it a niche but growing language whose capabilities rival those of any other high-level interpreted language. Samuel Lurie will be presenting Highlights of R at SCaLE16x this year, March 8-11 in Pasadena, California. To attend and get 50% of your ticket, register using promo code OSDC. 3 Comments, Register or Log in to post a comment. Great article. Thanks for teaching me more about R. I'm glad you enjoyed it. To that add that programs such as jamovi.org and jasp-stats.org are based on R and they are equally good!
https://opensource.com/article/18/3/r-programming-features
CC-MAIN-2021-39
refinedweb
992
51.78
Wikibooks:Deletion policy From Wikibooks, the open-content textbooks collection Administrators have the ability to delete pages from the Wikibooks database. Administrators should use their best judgment in making this decision. Administrative deletion removes not just the present content of a page (which is something that anyone can do--but also something that anyone can restore) but also the page's history. While deleted pages can be restored by administrators, if deletions are made too casually it is easy to lose track. Therefore the decision to remove a page from the database is not to be taken lightly. [edit] Transwiki First of all, if the page looks okay, but it's just on the wrong wiki (e.g. wrong language, wrong type of content), add {{transwiki|<suggested wiki>}} to the top of the page, which will look like: The page will then appear in Category:Modules for transwiki. [edit] Speedy deletions If the page you want deleted is your own user page, user talk page, or any subpages thereof, or a page in the main namespace which falls into one or more of the following parameters: - A page with no meaningful content. Always check the history of the page, as this nonsense may have replaced good content. If this is the case, simply revert to the last good version. - A repost of content previously voted for deletion, where the page was not listed on votes for undeletion and taken through the necessary procedural steps there. - Blatant vandalism. This would be obvious offensive language or completely unrelated content to the Wikibook it is associated with. Make sure that the page that has been vandalized does not have prior history that can simply be reverted, then blank the page and add the deletion tag. - A page which has been transwikied to more appropriate wiki. - A page that is nominated for deletion by the original author with no other contributors. - A page that has been nominated for deletion due to a general reorganization of the book by the contributors. In this situation, please note the location of the relevant discussion that occurred regarding the page cleanup. - A page in a book which clearly does not comply with Wikibooks:What is Wikibooks (if there is some doubt, add the book to VfD). - A redirect page that does not conform with Wikibooks:naming policy if all links to it have been updated. - A redirect where it is unlikely that anyone will inadvertently search for a page under that name. - * Note * When deleting redirect pages, make sure that any links pointing to those redirects have been changed first, including other websites including other Wikimedia projects. ...it is a candidate for speedy deletion. In this case, simply add {{delete|<your personal justification for speedy deletion>}} to the top of the offending page, which will look like: The page will then appear in Category:Candidates for speedy deletion, and will be addressed by an administrator as soon as possible. Administrators may either delete pages that meet the criteria of candidates for speedy deletion without going through the above process, or may list them in the normal manner if they want a second opinion. [edit] Copyright violations If you suspect the page to be a copyright violation from another URL or other source (usually determined by running a Google test), add {{copyvio|<source URL or description of source>}} to the top of the page, which will look like: The page will then appear in Category:Copyright violations. 
The user who posted the suspected copyright violation has one week from this date to prove that they have permission to post the suspected content, otherwise it will be deleted. [edit] Votes for deletion If a page has existed on the system for more than one week, or it has been listed as a candidate for speedy deletion and you do not agree with the reasoning, you may add a new section to the end of Wikibooks:Votes for deletion, using the unwanted page's name as a title, so that other Wikibookians can have a chance to argue for or against the removal of the page. Please include a justification for deletion, and sign/date your justification with four tildes, ~~~~, or similar. In addition, add {{vfd}} to the top of the page, which will look like: The page will then appear in Category:Votes for deletion. At this time, the voting process begins. Every registered Wikibooks user is entitled to one vote, which should consist of either Delete or Keep, optionally followed by a justification for their vote. The vote should be signed/dated as above. If you wish to change your vote, please apply the strikethrough tag <s>old text</s> to your previous vote, before adding a new vote. After one week, if the voters have mostly reached a consensus about what to do, the appropriate action will be taken by an administrator. If not the voting may continue until a consensus is reached. Voting, and reaching consensus should occur according to the decision-making policy. By that policy, voting is used as an aid to decide whether or not the page should be kept - it is never a case that we just count the votes and go with the majority. One thing to keep in mind with a VfD Discussion is that it may be added as a result of out right vandalism or as a part of an edit war between two contributors who may not like each other. In situations like this it would be considered appropriate to remove the {{VfD}} template tag from the article in question, as well as the content from the discussion page, with perhaps a note on the article discussion page that the VfD discussion was terminated because it was started due to vandalism, even if legitimate arguments for or against keeping the article were subsequently posted. If afterwards a serious objection to the Wikibook is still being raised, the VfD discussion can resume. You should be especially sensitive to a VfD discussion about a brand-new Wikibook, particularly by a new contributor to Wikibooks. Since they are new to Wikibooks, they may not completely understand all of the policies here, so it would be better to mentor the new contributor more directly trying to help clean up the new Wikibook than trying to force a VfD discussion... even if the content it is a blatant violation of current procedures like a copyright violation or forking of an existing Wikibook. Unless there are obvious problems that are unlikely to be rectified, give the Wikibook some time to develop before the VfD discussion even takes place, and it would be inappropriate to automatically delete a good-faith effort to produce a Wikibook. If the decision is that the page should be deleted, an administrator will do so, and it will show up on the page Special:Log/Delete. Either way, the section should be moved to Wikibooks:Votes for deletion/Archive for historical reference. [edit] Meaningful content What is considered meaningful content is open to some interpretation, so you'll have to use your best judgment to make this decision. 
This is not intended to be an exhaustive list, but the following are generally considered not to be meaningful content: - Newbie experiments (e.g. "sdhgdf", "Can I really create a page here?") - Vandalism (including linkspamming) (e.g. "f**k you", ""). See also: dealing with vandalism - Very short pages with no definition or context (e.g. "This guy is great!") or that only have an external link. [edit] What to keep, what to delete Consider the following recommendations when deciding whether to list a page for VfD - In general, keep stubs. However, delete stubs that don't even have a decent definition. Also, delete stubs that will never become more than a simple definition. See Find or fix a stub. - In general, keep modules that need heavy editing, and list them on Modules needing attention. However, consider deleting pages that are just utter nonsense - In general, delete pages that simply will never become educational resource modules, for example, modules that represent completely idiosyncratic non-topics ("Teaching 100 monkeys to type the works of Shakespeare"), etc. [edit]:Votes votes and comments if they feel that there is strong evidence that they were not made in good faith. Such "bad faith" votes. [edit] Frequently Asked Questions - Module X is totally biased! What gives? - Take it to modules needing attention or NPOV dispute. You don't need the VfD page for that. - This user should be banned. - Take it to Administrative Assistance. - Where'd my page go?! - See the Special:Log/Delete for pages that have been recently deleted. See Wikibooks:Votes for undeletion if you are concerned that a page may have been wrongly deleted.
http://en.wikibooks.org/wiki/Wikibooks:Deletion_policy
crawl-001
refinedweb
1,460
59.03
Table of Contents When you create a new plugin, you need to tell RKWard about it. So the first thing to do, is to write a .pluginmap file (or modify an existing one). The format of .pluginmap is XML. I will walk you through an example (also of course, be sure you have RKWard configured to load your .pluginmap -- → → ): After reading this chapter, have a look at the rkwarddev package as well. It provides some R functions to create most of RKWard's XML tags for you. <!DOCTYPE rkpluginmap> The doctype is not really interpreted, but set it to "rkpluginmap" anyway. <document base_prefix="" namespace="myplugins" id="mypluginmap"> The base_prefix attribute can be used, if all your plugins reside in a common directory. Basically, then you can omit that directory from the filenames specified below. It safe to leave this at "". As you will see below, all plugins get a unique identifier, id. The namespace is a way to organize those IDs, and make it less likely to create a duplicate identifier accidentally. Internally, basically the namespace and then a “::” gets prepended to all the identifiers you specify in this .pluginmap. In general, if you intend to distribute your plugins in an R package, it is a good idea to use the package name as namespace parameter. Plugins shipped with the official RKWard distribution have namespace="rkward". The id attribute is optional, but specifying an id for your .pluginmap makes it possible for other people to make their .pluginmaps load your .pluginmap, automatically (see the section on dependencies). <components> Components? Are not we talking about plugins? Yes, but in the future, plugins will be no more than a special class of components. What we do here, then, is to register all components/plugins with RKWard. Let's look at an example entry: <component type="standard" id="t_test_two_vars" file="t_test_two_vars.xml" label="Two Variable t-Test" /> First the type attribute: Leave this to "standard" for now. Further types are not yet implemented. The id we have already hinted at. Each component has to be given a unique (in its namespace) identifier. Pick one that is easily recognizable. Avoid spaces and any special characters. Those are not banned, so far, but might have special meanings. With the file attribute, you specify where the description of the actual plugin itself is located. This is relative to the directory the .pluginmap file is in, and the base_prefix above. Finally, give the component a label. This label will be shown wherever the plugin is placed in the menu (or in the future perhaps in other places as well). Typically a .pluginmap file will contain several components, so here are a few more: <component type="standard" id="unimplemented_test" file="means/unimplemented.xml" /> <component type="standard" id="fictional_t_test" file="means/ttests/fictional.xml" label="This is a fictional t-test" /> <component type="standard" id="descriptive" file="descriptive.xml" label="Descriptive Statistics" /> <component type="standard" id="corr_matrix" file="corr_matrix.xml" label="Correlation Matrix" /> <component type="standard" id="simple_anova" file="simple_anova.xml" label="Simple Anova" /> </components> OK, this was the first step. RKWard now knows those plugins exist. But how to invoke them? They need to be placed in a menu hierarchy: <hierarchy> <menu id="analysis" label="Analysis"> Right below the <hierarchy> tag, you start describing, in which <menu> your plugins should go. 
With the above line, you basically say, that your plugin should be in the menu (not necessarily directly there, but in a submenu). The menu is standard in RKWard, so it does not actually have to be created from scratch. However, if it did not exist yet, using the label attribute you would give it its name. Finally, the id once again identifies this <menu>. This is needed, so several .pluginmap files can place their plugins in the same menus. They do this by looking for a <menu> with the given id. If the ID does not yet exist, a new menu will be created. Otherwise the entries will be added to the existing menu. <menu id="means" label="Means"> Basically the same thing here: Now we define a submenu to the menu. It is to be called . <menu id="ttests" label="t-Tests"> And a final level in the menu hierarchy: A submenu of the submenu . <entry component="t_test_two_vars" /> Now, finally, this is the menu we want to place the plugin in. The <entry> tag signals, this actually is the real thing, instead of another submenu. The component attribute refers to the id you gave the plugin/component above. <entry component="fictional_t_test" /> </menu> <entry component="fictional_t_test" /> </menu> <menu id="frequency" label="Frequency" index="2"/> In case you have lost track: This is another submenu to the menu. See the screenshot below. We will skip some of what is not visible, marked with [...]. [...] </menu> <entry component="corr_matrix"/> <entry component="descriptive"/> <entry component="simple_anova"/> </menu> These are the final entries visible in the screenshots below. <menu id="plots" label="Plots"> [...] </menu> Of course you can also place your plugins in menus other than . <menu id="file" label="File"> [...] </menu> Even in standard-menus such as . All you need is the correct id. </hierarchy> </document> That is how to do it. And this screenshot shows the result: Confused? The easiest way to get started is probably taking some of the existing .pluginmap files shipped with the distribution, and modifying them to your needs. Also, if you need help, do not hesitate to write to the development mailing list. By default, all items (entries / submenus) inside a menu will be sorted alphabetically, automatically. In some cases you may want more control. In this case you can group elements as follows: You can define groups inside any menu like this. All elements belonging to the same group will be grouped together: <group id="somegroup"/> If you want the group to be visually separated from other entries, use: <group id="somegroup" separated="true"/> Entries, menus, and groups can be appended to a specified group, using: <entry component="..." group="somegroup"/> In fact, it is also possible to define groups (without separator lines) implicitly: <entry component="first" group="a"/> <entry component="third"/> <entry component="second" group="a"/> Group names are specific to each menu. Group "a" in menu "Data" does not conflict with group "a" in menu "Analysis", for example. The most common use case is defining groups at the top, or at the bottom of a menu. For this, there are pre-defined groups "top" and "bottom" in each menu. Entries within each group are sorted, alphabetically. Groups appear in the order of declaration (unless appended to another group, of course). Menus and entries without group specification logically form a group (""), too.
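Putting the pieces above together, a minimal .pluginmap could look like the following sketch; the component file name and the labels are placeholders, not taken from any shipped plugin:

<!DOCTYPE rkpluginmap>
<document base_prefix="" namespace="myplugins" id="mypluginmap">
  <components>
    <!-- one plugin, described by t_test_two_vars.xml next to this file -->
    <component type="standard" id="t_test_two_vars"
               file="t_test_two_vars.xml" label="Two Variable t-Test" />
  </components>
  <hierarchy>
    <menu id="analysis" label="Analysis">
      <menu id="means" label="Means">
        <!-- "top" is one of the pre-defined groups described above -->
        <entry component="t_test_two_vars" group="top"/>
      </menu>
    </menu>
  </hierarchy>
</document>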
https://api.kde.org/doc/rkwardplugins/pluginmap.html
CC-MAIN-2020-40
refinedweb
1,109
59.19
Hybrid View
Error: namespace is undefined
Designer version tested:
- Designer 1.2.2 Build 48
- Windows 7
- Ext JS 4.x
When I link a class into another class I get a "namespace is undefined" error:
- Create a form panel
- Create a window
- Link the panel into the window
- Export your code and execute your application
- The window shows with the inner panel
- JavaScript error: namespace is undefined
- Insert the alias property into the ui class with the magic word 'widget'
Code:
Ext.define('MyApp.view.ui.MyForm', {
    extend: 'Ext.form.Panel',
    alias: 'widget.myform',
    .....
I tried what you suggested and it did produce alias: 'widget.myform', but it did so in MyApp.view.MyForm and not the base class.
Code:
Ext.define('MyApp.view.MyForm', {
    extend: 'MyApp.view.ui.MyForm',
    alias: 'widget.myform',
    initComponent: function() {
        var me = this;
        me.callParent(arguments);
    }
});
Last edited by Phil.Strong; 20 Dec 2011 at 11:41 AM. Reason: typo
Phil Strong @philstrong
OK Phil, but what happens if you have exported code before linking? External ui files are not re-generated by Designer, so if I link after the first export I lose the alias property.
A fair point. I was scratching my head earlier wondering why we put the alias in the super class and not the base class. I think it should be moved to the base class.
Phil Strong @philstrong
OK! Can't wait for Sencha Designer 2! Good work!
This is a bug by design, or a flaw, in the code generation approach of Designer 1.x. In Designer 2.x we are now taking the approach of using a single class with overrides, so this issue never occurs.
Conran @aconran
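For context, this is roughly how the alias is consumed once it has been generated correctly; only the form classes come from the thread, the window class name below is assumed:

// Because the form class declares alias: 'widget.myform', the linked
// panel can be referenced by xtype when the window is created:
Ext.create('MyApp.view.MyWindow', {      // assumed window class name
    items: [
        { xtype: 'myform' }               // resolved through the widget.* alias
    ]
}).show();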
https://www.sencha.com/forum/showthread.php?162271-Error-namespace-is-undefined&mode=hybrid
CC-MAIN-2015-22
refinedweb
271
68.87
django-smoketest 0.1.3 Django smoketest framework ================ PLEASE NOTE: I'm basically doing README-driven development here, writing documentation for how this code should work before actually implementing it. This notice will go away when django-smoketest is actually implemented and remotely suitable for real-world use. Until then, feel free to offer ideas on the interface, but don't expect to be able to use it (you can look in the "Progress" section to see exactly where I'm smoketest.decorators import slow, rolled_back) @rolled_back def test_foomodel_writes(self): """ make sure we can also write to the database but do not leave any test detritus around. """ f = FooModel.objects.create() @slow def test_something_slow(self): """ this test will not be run in "fast" mode because it uses a lot of resources or otherwise could bog down the production server in bad ways """ # do a bunch of slow stuff # ... self.assertEqual(foo, bar) Now, if you make a `GET` to ``, django-smoketest will go through your code, finding any `smoke` modules, and run the tests you have defined (if you've used unittest or nose, you get the idea), excluding any marked with the `@slow` decorator. `GET`ing `` will include those tests as well. All tests passing will result in a response like:. There is the `@slow` decorator which marks a test as potentially slow, or utilizing a lot of resources. Either way, it lets you have two different levels of smoke tests. Fast tests can be run frequently, eg, from a monitoring script that hits it every five minutes so you can quickly be alerted if something changes in the production environment. The `@slow` tests can then be reserved for only running after a new deploy to check things a little more deeply and have more confidence that everything is functional. The `@rolled_back` decorator will make sure that the test gets. I'm also on the fence about whether this decorator should even exist or if that should be the default behavior for all smoke tests. Should a smoke test ever actually commit a transaction? In your settings, you may define a `SMOKETEST_APPS` variable that lists the applications want to run smoke tests from (instead of looking through all your applications). (do we want a SMOKETEST_SKIP_APPS as well/instead?).: * @slow decorator and view * @rolled_back decorator * capture stdout/stderr * I think it only handles `smoke.py` files or `smoke/__init__.py` and won't yet find subclasses in submodules like `smoke/foo.py`. * report additional info (exception/tracebacks) on errors * support messages on asserts * setUpClass/tearDownClass * extended assert* methods (listed in `smoketest/__init__.py`) * `SMOKETEST_APPS` (and/or `SMOKETEST_SKIP_APPS`) - Author: Anders Pearson - License: BSD - Platform: any - Package Index Owner: thraxil, ctlpypi - DOAP record: django-smoketest-0.1.3.xml
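The example code in the README above was flattened during extraction; a reconstructed sketch of what such a smoke.py module might look like is shown below. The SmokeTest base-class name, the FooModel app, and the assert method names are assumptions based on the surrounding text, not quotes from the package:

# myapp/smoke.py -- hypothetical example mirroring the README's description
from smoketest import SmokeTest
from smoketest.decorators import slow, rolled_back

from myapp.models import FooModel


class FooModelSmokeTest(SmokeTest):
    def test_foomodel_reads(self):
        """just make sure we can read from the database"""
        self.assertTrue(FooModel.objects.all().count() >= 0)

    @rolled_back
    def test_foomodel_writes(self):
        """make sure we can also write to the database but do not
        leave any test detritus around"""
        FooModel.objects.create()

    @slow
    def test_something_slow(self):
        """too heavy to run on every monitoring poll; only runs in the
        slow/full smoke test mode"""
        self.assertEqual(FooModel.objects.count(),
                         FooModel.objects.all().count())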
https://pypi.python.org/pypi/django-smoketest/0.1.3
CC-MAIN-2017-09
refinedweb
457
60.65
Invert sign on import with script
user1659263 Nov 6, 2014 8:35 PM
Dear Gurus,
Both FDMEE 11.1.2.3 and the new scripting language Jython are beautiful. Now please help me with the following:
This script works fine:
def separation(strfield, strrec):
    if strfield == 0:
        return strrec.split(";")[8]
    else:
        x = strfield
        return x
This script, where I want to invert the value of x, does not work. Why, and how do I handle this issue?
def separation(strfield, strrec):
    if strfield == 0:
        return strrec.split(";")[8]
    else:
        x = strfield * -1
        return x
Many thanks.
1. Re: Invert sign on import with script
SH_INT Nov 6, 2014 9:00 PM (in response to user1659263)
What is the value of strfield? Are you certain it is a string that can be implicitly converted to a number? Your first code block doesn't care whether strfield is a string or a numeric value, but your second code block requires a numeric value to successfully perform the arithmetic calculation x = strfield * -1.
2. Re: Invert sign on import with script
Francisco Amores Nov 7, 2014 10:28 AM (in response to user1659263)
Hi, let me try to help by showing you some examples. Firstly, keep in mind that FDMEE passes string parameters to the import script.
In this example you can see that what you basically need is to convert strfield and the 9th field of the record into float. In that way, you will be able to reverse the sign. Note that x = x * -1 is also valid, although the Jythonic way would be x *= -1.
But what would happen if strfield or the 9th field cannot be converted to float? As you can see, it would fail due to a ValueError exception. In Jython, when type conversions like float() are needed, using try/except is recommended. In the following example, if either of the two fields is not a number, the record will be skipped during the import.
I hope that helps.
Regards
3. Re: Invert sign on import with script
user1659263 Nov 8, 2014 1:27 PM (in response to Francisco Amores)
Thank you Francisco for your help. You provided great guidance regarding my issue, which is now solved! I thought that typing was mandatory with Jython. Thanks again.
SebRoux
4. Re: Invert sign on import with script
Francisco Amores Nov 10, 2014 10:53 AM (in response to user1659263)
Cool. Regards
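The worked examples Francisco refers to are embedded as screenshots and did not survive extraction; a sketch along the lines he describes is shown below. The skip-the-record behaviour is only indicated in a comment, because the exact FDMEE mechanism is not shown in the thread:

# Hypothetical reconstruction of the corrected import script
def separation(strfield, strrec):
    try:
        # FDMEE passes strings, so convert before comparing or negating
        if float(strfield) == 0:
            # fall back to the 9th semicolon-delimited field, unchanged
            return strrec.split(";")[8]
        else:
            return float(strfield) * -1   # reverse the sign
    except ValueError:
        # strfield is not numeric; in Francisco's version the record is
        # skipped here (mechanism not reproduced -- pass the value through)
        return strfield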
https://community.oracle.com/thread/3627827
CC-MAIN-2018-05
refinedweb
393
69.31
Details - Type: Improvement - Status: Closed - Priority: Minor - Resolution: Fixed - Affects Version/s: 1.1.0 - - Component/s: deployment - Labels:None - Flags:Patch Description Add a puppet config option to control ip_hostname_check. If deploying on a cloud that does not provide reverse DNS service, the DataNode registration to the NameNode will fail. Exposing this option allows deployments to work around this. Activity - All - Work Log - History - Activity - Transitions Oops I accidentily pushd the last patch to upstream as is. Cory Johns It would be nice if you can propose a new JIRA to implement the changes I suggested above. Thanks! Regarding the if: Congrats, you found a bug! Your assumption is correct: The intention of templating the hdfs-site.xml etc was only to override necessary parameters, leaving out settings which are default. So the design was to have puppet class parameters have the value undef, if the setting has not to be propagated to the output file, or set explicitly, where its value has to be set in the file. If one is using <% if @parameter %> the block is inserted in template if the value is not false. In most occasions that was technically identically to not undef, until you found a parameter defaulting to "false". The correct syntax is <% if @parameter.nil? -%> <% end -%> and the default value in init.pp should be undef. Would you mind issueing a new patch? We have to check other values as well. Thank you very much for your contribution! Removed the if from the patch because it's incorrect. It doesn't seem like we need an explicit check for undefined there since we're providing a default value. Is that not accurate? Are we wanting to allow explicitly setting it to undef to not render that property, and if so, what is the value to that? Other boolean params (e.g., hdfs_webhdfs_enabled) all seem to skip this check. I'm also not familiar enough with Puppet templates to the right syntax for testing for undefined. LGTM +1 No handling of default value os not ok. in init.pp in the argument list the value should have a default value = "true", Please do not set default values in cluster.yaml. if hdfs-site.xml the value should be tested to be different from undef before inserting the sniplet. Same here - I don't see how + $namenode_datanode_registration_ip_hostname_check gets its value from the cluster.pp without a call to hiera New patch based on Olaf Flebbe's feedback in BIGTOP-2459. No bigtop:: namespace, implicit use of Hiera to populate the param, and default value given in cluster.yaml. Ok, hopefully I have understood things correctly this time and the latest patch follows the proper convention. I thought it wasn't needed with the hiera() call pulling from the bigtop:: namespace. Is that incorrect? Cory Johns, I believe you lost the change to the cluster.yaml. Or have I misread the patch? I'm more than happy to change it; I do think the bigtop:: param namespace and explicit hiera() call would be cleaner. Ah, if it works - it is fine then I haven't tested it myself, but I trust you have validated both scenarios (with site.yaml mods and without it), so we can go ahead with it. We were overriding it in site.yaml by specifying the full hadoop::common_hdfs::namenode_datanode_registration_ip_hostname_check property name. It seemed to work, but that doesn't mean it's right. I believe Kevin suggests that we need to be able to parametrize this setting, so a user can control it from site.yaml file. 
Hard-coding the value in the cluster.yaml won't help, as this file isn't supposed to be modified on every deployment. Instead, you can something similar to hadoop::common_hdfs::hadoop_namenode_host: "%{hiera('bigtop::hadoop_head_node')}" In other words, in the cluster.yaml do something like hadoop::common_hdfs::namenode_datanode_registration_ip_hostname_check: "%{hiera('bigtop::hadoop_ip_hostname_check')}" then {[bigtop::hadoop_ip_hostname_check}} could be set in the site.yaml during the deployment, if needed. Providing the default value in the init.pp file makes it always on, unless specified otherwise in the site.yaml. Does it make sense? Also, do you think it would be a bit easier to read, if the variable name is somewhat shorter? Thanks! My understanding (from) was that cluster.yaml was used to build the Hiera data, so your suggestion seems self-referential to me. I could, however, see the argument that the init.pp line should instead be: $namenode_datanode_registration_ip_hostname_check = hiera('hadoop::common_hdfs::namenode_datanode_registration_ip_hostname_check', true), But as it works the way it is now, I'm not at all clear what the "more correct" way to do it is. I'm still pretty new to Puppet. In the cluster.yaml block of your patch, I think you should use the hiera value instead of 'false'. So instead of this: #hadoop::common_hdfs::namenode_datanode_registration_ip_hostname_check: false You would do something like this: #hadoop::common_hdfs::namenode_datanode_registration_ip_hostname_check: %{hiera('hadoop::common_hdfs::namenode_datanode_registration_ip_hostname_check')} I'm not 100% on this (still new to the relationship between hiera, cluster.yaml, and init.pp), so hopefully a more seasoned committer will verify the best practice here. FYI Olaf Flebbe, Cory adjusted this implementation as you suggested in.
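For readers following along, the template-side guard the thread is circling around would look roughly like the sketch below; the template path and the exact Hadoop property spelling are assumptions, not quotes from the patch:

<%# assumed location: modules/hadoop/templates/hdfs-site.xml.erb %>
<% unless @namenode_datanode_registration_ip_hostname_check.nil? -%>
  <property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value><%= @namenode_datanode_registration_ip_hostname_check %></value>
  </property>
<% end -%>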
https://issues.apache.org/jira/browse/BIGTOP-2458
CC-MAIN-2017-43
refinedweb
861
58.79
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4) #include <net_config.h> U16 cgi_func ( U8* env, /* Pointer to input string from a TCPnet script. */ U8* buf, /* Location where to write the HTTP response string. */ U16 buflen, /* Number of bytes in the output buffer. */ U32* pcgi ); /* Pointer to a storage variable. */ The cgi_func function is what the TCPnet script interpreter calls, when interpreting the TCPnet script, to output the dynamic part of the HTTP response. The script interpreter calls cgi_func for each line in the script that begins with the command c. You must customize the cgi_func function so that it can understand and use the input string from the TCPnet script. The argument env is a pointer to the input string that cgi_func uses to create the dynamic response. It is the same string of bytes that is specified in the TCPnet script code using the c command. The argument buf is a pointer to the output buffer where the cgi_func function must write the HTTP response. The argument buflen specifies the length of the output buffer in bytes. The argument pcgi is a pointer to a variable that never gets altered by the HTTP Server. Hence, you can use it to store parameters for successive calls of the cgi_func function. You might use this to store: The cgi_func function is in the HTTP_CGI.c module. The prototype is defined in net_config.h. note The cgi_func function returns the number of bytes written to the output buffer and writes the repeat flag value in the most significant bit of the return value. If the return value's most significant bit is set to 1 (return value or-ed with 0x8000), the TCPnet script interpreter calls the cgi_func function again with the same values for the arguments env, buflen, and pcgi, which holds the same content as previously set. The argument buf is adjusted according to the number of bytes that were written to the output buffer. If the return value's second most significant bit is set to 1 (return value or-ed with 0x4000), the packet optimization is canceled and the current packet is transmitted immediatelly. cgi_process_data, cgi_process_var, http_get_lang, http_get_user_id U16 cgi_func (U8 *env, U8 *buf, U16 buflen, U32 *pcgi) { U16 len = 0; switch (env[0]) { /* Analyze the environment string. It is the script 'c' line starting */ /* at position 2. What you write to the script file is returned here. */ case 'a' : /* Network parameters - file 'network.cgi' */ .. break; case 'b': /* LED control - file 'led.cgi' */ .. break; case 'c': /* TCP status - file 'tcp.cgi' */ .. break; case 'd': /* System password - file 'system.cgi' */ switch (env[2]) { case '1': len = sprintf(buf,&env[4],http_EnAuth ? "Enabled" : "Disabled"); break; case '2': len = sprintf(buf,&env[4],http_auth_passw); break; } break; } return (len); }.
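As an illustration of the repeat flag described above, the hypothetical handler below streams its output across several calls by or-ing 0x8000 into the return value and keeping its position in *pcgi. This sketch is not part of the original example and assumes *pcgi starts at 0 for each script line; a real handler should also respect buflen:

U16 cgi_func (U8 *env, U8 *buf, U16 buflen, U32 *pcgi) {
  U16 len = 0;
  U32 row = *pcgi;                     /* 0 on the first call for this line */

  switch (env[0]) {
    case 'x':                          /* hypothetical script line "c x"    */
      len = sprintf (buf, "Row %d<br>", (int)row);
      if (++row < 4) {
        *pcgi = row;                   /* remember progress for next call   */
        len |= 0x8000;                 /* MSB set: call cgi_func again      */
      }
      break;
  }
  return (len);
}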
http://www.keil.com/support/man/docs/rlarm/rlarm_cgi_func.htm
CC-MAIN-2019-43
refinedweb
461
64.61
Up to [cvs.NetBSD.org] / src / lib / libc / gen Request diff between arbitrary revisions Default branch: MAIN Revision 1.11 / (download) - annotate - [select for diffs], Thu Oct 22 21:50:01 2009 UTC (6 years, 7: +1 -3 lines Diff to previous 1.10 (colored) Remove closes 3 and 4 from my copyright. Revision 1.10 / (download) - annotate - [select for diffs], Mon Dec 26 19:40:14 2005 UTC (10 years, 4.9: +7 -7 lines Diff to previous 1.9 (colored) u_intN_t -> uintN_t Revision 1.9 / (download) - annotate - [select for diffs], Sun Oct 5 17:48:49 2003 UTC (12 years, 7 months ago) by bouyer: +3 -4 lines Diff to previous 1.8 (colored) Remove references to University of California from my copyright notices. Revision 1.8 / (download) - annotate - [select for diffs], Thu Aug 7 16:42:46 2003 UTC (12 years, 9 months ago) by agc Branch: MAIN Changes since 1.7: +2 -6 lines Diff to previous 1.7 (colored) Move UCB-licensed code from 4-clause to 3-clause licence. Patches provided by Joel Baker in PR 22280, verified by myself. Revision 1.7 / (download) - annotate - [select for diffs], Wed Apr 16 13:34:35 2003 UTC (13 years, 1 month ago) by wiz Branch: MAIN Changes since 1.6: +3 -3 lines Diff to previous 1.6 (colored) Use .In header.h instead of .Fd #include \*[Lt]header.h\*[Gt] Much easier to read and write, and supported by groff for ages. Okayed by ross. Revision 1.4.6.3 / (download) - annotate - [select for diffs], Fri Mar 22 20:42:05 2002 UTC (14 years, 2 months ago) by nathanw Branch: nathanw_sa CVS Tags: nathanw_sa_end Changes since 1.4.6.2: +1 -1 lines Diff to previous 1.4.6.2 (colored) to branchpoint 1.4 (colored) next main 1.5 (colored) Catch up to -current. Revision 1.4.6.2 / (download) - annotate - [select for diffs], Fri Mar 8 21:35:03 2002 UTC (14 years, 2 months ago) by nathanw Branch: nathanw_sa Changes since 1.4.6.1: +3 -3 lines Diff to previous 1.4.6.1 (colored) to branchpoint 1.4 (colored) Catch up to -current. Revision 1.6 / (download) - annotate - [select for diffs], Thu Feb 7 07:00:10 2002 UTC (14 years,.5: +3 -3 lines Diff to previous 1.5 (colored) Generate <>& symbolically. Revision 1.4.6.1 / (download) - annotate - [select for diffs], Mon Oct 8 20:18:43 2001 UTC (14 years, 7 months ago) by nathanw Branch: nathanw_sa Changes since 1.4: +4 -5 lines Diff to previous 1.4 (colored) Catch up to -current. Revision 1.5 / (download) - annotate - [select for diffs], Tue Jun 5 12:16:23 2001 UTC (14 years, 11 months ago) by wiz Branch: MAIN Changes since 1.4: +4 -5 lines Diff to previous 1.4 (colored) Uppercase Dt argument, fix typos, remove an empty line. Revision 1.4 / (download) - annotate - [select for diffs], Mon May 15 06:26:42 2000 UTC (16 years ago) by bouyer.3: +11 -12 lines Diff to previous 1.3 (colored) Use the same copyrigth notice everywhere. Revision 1.3 / (download) - annotate - [select for diffs], Wed Mar 24 06:21:29 1999 UTC (17 years, 2.2: +1 -2 lines Diff to previous 1.2 (colored) Remove blank lines. Revision 1.2 / (download) - annotate - [select for diffs], Mon Mar 22 19:44:38 1999 UTC (17 years, 2 months ago) by garbled Branch: MAIN Changes since 1.1: +2 -2 lines Diff to previous 1.1 (colored) Last of the .Os cleanups. .Os is defined in the tmac.doc-common file, so we shouldn't override it with versions in the manpages. Wheee! Revision 1.1 / (download) - annotate - [select for diffs], Fri Jan 15 13:31:22 1999 UTC (17 years, 4 months ago) by bouyer Branch: MAIN Move the bswap functions from libutil to libc (this bups the minor of libc and the major of libutil). 
For little-endian architectures merge the bnswap() assembly versions with nto* and hton* using symbols aliasing. Use symbol renaming for the bswap function in this case to avoid namespace pollution. Declare bswap* in machine/bswap.h, not machine/endian.h. For little-endian machines, common code for inline macros go in machine/byte_swap.h Sync libkern with libc. Adjust #include in kernel sources for machine/bswap.h. This form allows you to request diff's between any two revisions of a file. You may select a symbolic revision name using the selection box or you may type in a numeric name using the type-in text box.
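For readers unfamiliar with the functions this log keeps referring to, here is an illustrative, portable byte-swap routine; it is not NetBSD's implementation, which, as the revision 1.1 message notes, uses assembly versions and symbol aliasing on little-endian machines:

#include <stdint.h>

/* Generic 32-bit byte swap, e.g. my_bswap32(0x11223344) == 0x44332211 */
static inline uint32_t
my_bswap32(uint32_t x)
{
        return ((x & 0x000000ffU) << 24) |
               ((x & 0x0000ff00U) <<  8) |
               ((x & 0x00ff0000U) >>  8) |
               ((x & 0xff000000U) >> 24);
}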
http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/gen/bswap.3
CC-MAIN-2016-22
refinedweb
770
77.03
The MB_ flag controls how the MultiByteToWideChar function treats invalid characters. Some people claim that the following sentences in the documentation are contradictory: - "Starting with Windows Vista, the function does not drop illegal code points if the application does not set the flag." - "Windows XP: If this flag is not set, the function silently drops illegal code points." - "The function fails if MB_is set and an invalid character is encountered in the source string." ERR_ INVALID_ CHARS Actually, the three sentences are talking about different cases. The first two talk about what happens if you omit the flag; the third talks about what happens if you include the flag. Since people seem to like tables, here's a description of the MB_ flag in tabular form: Here's a sample program that illustrates the possibilities: #include <windows.h> #include <ole2.h> #include <windowsx.h> #include <commctrl.h> #include <strsafe.h> #include <uxtheme.h> void MB2WCTest(DWORD flags) { WCHAR szOut[256]; int cch = MultiByteToWideChar(CP_UTF8, flags, "\xC0\x41\x42", 3, szOut, 256); printf("Called with flags %d\n", flags); printf("Return value is %d\n", cch); for (int i = 0; i < cch; i++) { printf("value[%d] = %d\n", i, szOut[i]); } printf("-----\n"); } int __cdecl main(int argc, char **argv) { MB2WCTest(0); MB2WCTest(MB_ERR_INVALID_CHARS); return 0; } If you run this on Windows XP, you get Called with flags 0 Return value is 2 Value[0] = 65 Value[1] = 66 ----- Called with flags 8 Return value is 0 ----- This demonstrates that passing the MB_ flag causes the function to fail, and omitting it causes the invalid character \xC0 to be dropped. If you run this on Windows Vista, you get Called with flags 0 Return value is 3 Value[0] = 65533 Value[1] = 65 Value[2] = 66 ----- Called with flags 8 Return value is 0 ----- This demonstrates again that passing the MB_ flag causes the function to fail, but this time, if you omit the flag, the invalid character \xC0 is converted to U+FFFD, which is REPLACEMENT CHARACTER. (Note that it does not appear to be documented precisely what happens to invalid characters, aside from the fact that they are not dropped. Perhaps code pages other than CP_UTF8 convert them to some other default character.) Now I'm sort of curious as to what prompted the behavior change. This seems like it could be a breaking change for apps that tended to get illegal characters in their inputs and were able to safely ignore them. One might assume that people reading those sentences are, generally speaking, programmers. One might assume that programmers, who after all work with logical devices, could mentally derive that little table as they were reading the text. Alas, one would likely be wrong. Lawyers, on the other hand … ;-) Documentation is overcomplicated with negatives. That's why Raymond now has to spend time on explaining it. Wasn't there some discussion on this in the comments awhile back? Unfortunately the Web 2.0 abuse makes it impossible to use a search engine to search the comments on this blog. The documentation should have Raymond's tables!! It's so much easier to parse than the text. I too am one of the humans who likes tables. [My guess is security. -Raymond] Agreed. I see the same potential as "canonical Unicode forms" abuse as it were in IIS before Win2k. 
If an application do checking on MBCS then convert to Unicode, and the source string contains discardable character that'd make some illegal sequence seems legal, that'll effectively allow bad people write code to bypass the checking. I'll also note that if certain application is written for MBCS and not Unicode, all it's validation would be done in with MBCS data. And if that application passes the data to COM, since it'll transparently do the MBCS to Unicode conversion, it may unknowingly pass inadequately validated data to the COM component. Another reason for not writing non-Unicode applications these days… Codepages can specify a replacement character (most commonly "?"). It's documented as the default replacement character used by WideCharToMultiByte, presumably it's now used by MultiByteToWideChar, too. It could be made clearer
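To make the security point in the last comment concrete, a typical use of the flag is to validate untrusted UTF-8 up front and reject anything that would otherwise be silently dropped or substituted. This helper is illustrative, not from the article:

#include <windows.h>

// Returns true only if the buffer is well-formed UTF-8.
// Note: an empty buffer also yields 0, so handle cb == 0 separately if needed.
bool IsWellFormedUtf8(const char *bytes, int cb)
{
    int cch = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                                  bytes, cb, NULL, 0);
    // On failure, GetLastError() is typically ERROR_NO_UNICODE_TRANSLATION.
    return cch != 0;
}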
https://blogs.msdn.microsoft.com/oldnewthing/20120504-00/?p=7703
CC-MAIN-2018-30
refinedweb
697
62.07
How to use toString Method in Java Tutorial using NetBeans
This tutorial is all about how to use the toString method in Java using NetBeans. The toString method is very useful when it comes to converting values: for example, it lets you return int fields as part of a String, and it can also be used to build a date format. Please follow all the steps below to complete this tutorial.
Then copy and paste the code below:
public class ToStringDemo {
    private int month;
    private int day;
    private int year;

    public ToStringDemo(int m, int d, int y) {
        month = m;
        day = d;   // fixed: the original code assigned y here
        year = y;
        System.out.printf("My birthday is %s\n", this);
    }

    @Override
    public String toString() {
        return String.format("%d/%d/%d", month, day, year);
    }

    public static void main(String[] args) {
        ToStringDemo object = new ToStringDemo(2, 22, 1996);
    }
}
Then you're done. Run your project and see if it works.
About How To Use toString Method
1.) Clear Textfield element in java
2.) How to use Keyword Static in Java using Netbeans IDE
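To see where toString is picked up implicitly, you can add a small driver class like the one below; it simply reuses the ToStringDemo class from the tutorial:

public class ToStringUsage {
    public static void main(String[] args) {
        ToStringDemo birthday = new ToStringDemo(2, 22, 1996);

        // println and string concatenation both call toString() implicitly
        System.out.println(birthday);            // prints 2/22/1996
        String message = "Born on " + birthday;  // also uses toString()
        System.out.println(message);
    }
}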
https://itsourcecode.com/free-projects/java-projects/how-to-use-tostring-method-in-java-tutorial/
CC-MAIN-2021-49
refinedweb
201
72.36
Writing View Employee Functionality Before an audit can occur, we need the ability to view specific employee records. We will view and record audits from the same page, recordaudit.aspx, but separate the implementation. Let's tackle the view functionality first. From the solution explorer, do the following: - Left click the plus (+) sign next to classes. - Double click RecordHistoryData.cs. In this file, we'll write our GetEmployeeView, which is viewable from the project downloads. As you can see from this file, we do the following: - Create a connection object and pass in our configuration settings. - Create a command object and pass in our stored procedure and connection object. - Create a parameter, GUID, and pass in the appropriate data type and size. - Set the GUID's parameter value to the passed in variable from the query string, which came from our home page and add that to the command object's parameters collection. If for any reason you can't open a connection to the database or execute the query, you wrap this inside a try/catch block, which handles exceptions, and do the following: - Open a connection to the database. - Create a data reader object and set it to the command object's execute reader method. - If we can read data from the object, we: - Set appropriate variable values from our reader object. - Otherwise, we catch and throw an exception. Save your file(s). Open Recordaudit.aspx.cs Since you have written the functionality needed to display individual employees, let's add the ability to call it. From the solution explorer, do the following: - Left click the plus (+) sign next to recordaudit.aspx. - Double click recordaudit.aspx.cs. Next, add the region named declarations and methods from the project downloads file. As you can see from the project downloads, we do the following: - Add a namespace for our class, which is RecordHistory.classes. - Create a region named declarations. Inside, we do the following: - Create a new instance of both of our classes, and name the object appropriately. - Inside our page load event, we do the following: - We check the query string to see if it's null. - If it IS NOT, we: - Check for a post back with a negation operator. - If it IS, we: - Show an error message by toggling our placeholder(s). - If it IS NOT a POSTBACK, we: - Create a new instance of a GUID object, and pass in our query string. - If the GUID object IS NOT NULL: - We call LoadData and pass in our GUID object. - In our Load Data method, we: - Pass in our GUID object using a data type of GUID in the method signature. - Call our method GetEmployeeView and pass in our GUID object. - Set a session object, named ID, and set it to the ID variable value from our record history object (rh). Four important notes about our load data method: - Even though we don't show our ID, which is the primary key from our parent table, we'll use this ID when auditing our child table, RecordHistory_Audit. - We view individual employees using a GUID, instead of a primary key. This way, we don't give away our database structure as easily to a hacker, and it also serves as a way to keep hackers or anyone else from easily cycling through primary keys to view other employees without following our interface. - Our post back check ensures when we perform the audit, we don't reload the employee information again on page load. - We use a session object to retain the value of the primary key that's returned from our query. 
Without it, we lose the value on post back, and would be forced to issue another query just for the ID of the record we want. Save your file(s). A Quick Note About Sessions Many times you will read about careful implementations of session objects. The reason for this mainly stems from the fact that session objects are stored on the web server. If you have many session objects being created in an application, and the default time limit is 20 minutes, one begins to understand why their use needs to be carefully examined and their implementation carried out wisely. You other option would be a cookie-based implementation, which resides on the client, and are more widely used in larger applications to limit server resources being used. Writing Audit Functionality In order to record when an update (audit) occurs on an employee, we need the ability to capture specific information in our RecordHistory_Audit table, such as: the record index (primary key) from our parent table, the employee that's being updated, user completing the audit, as well as the date time. Let's write this functionality by first opening RecordHistoryDataAction.cs from the solution by following these steps: - Left click the plus (+) sign next to classes. - Double click RecordHistoryDataAction.cs. With the file open, add the following namespaces: using System.Data; using System.Configuration; using System.Data.SqlClient; using System.Security.Principal; namespace RecordHistory.classes { public class RecordHistoryDataActions { } As you can see, you added four. The first allows us to work with stored procedures in ADO.NET; the second allows us to reference a connection key from your configuration file; the third allows us to connect to SQL Server; and the fourth allows us to record which user audited the record based on their windows login credentials. InsertAuditRecord that will insert the audit to our child table, which can be found from the project downloads file. As you can see from the code archive, you created a method named InsertAuditRecord and did the following: - Created a connection object and passed in your configuration key - Created a command object and passed in your stored procedure and connection object - Created SQL parameters for each of our variables used in the insertion - Set the values of our SQL parameters to the appropriate variable values, including: - On User, we set the value to the current user's name based on the windows identity - On Change date, we set the value to current date time from the server - On Change reason, we set the value to a literal string of our choosing - Added the parameters to the command object's collection If for any reason you can't open a connection to the database or execute the query, you wrap this inside a try/catch block, which handles exceptions, and do the following: - Call the open method of your connection object - Call execute non query from our command object - If an exception occurs, catch it and throw the exception to the screen Lastly, you call the dispose method of your command object and the close method of your connection object to clear resources. Some important notes regarding recording our audit: there are several ways to record user information to meet your application needs. A few things should be obvious however: you don't want the user auditing the information to be able to fake their credentials, change the date or the time, or have access to the database to change what the table is recording from the application. 
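A sketch of what InsertAuditRecord looks like with those essentials taken from the server rather than from user input is shown below; the actual method ships in the project download, so the connection key, stored-procedure name, and parameter/field names here are assumptions:

public void InsertAuditRecord()
{
    // Connection key and procedure name are placeholders
    SqlConnection conn = new SqlConnection(
        ConfigurationManager.ConnectionStrings["RecordHistory"].ConnectionString);
    SqlCommand cmd = new SqlCommand("usp_InsertAuditRecord", conn);
    cmd.CommandType = CommandType.StoredProcedure;

    cmd.Parameters.AddWithValue("@Audit_ID", Audit_ID);   // parent table PK (from session)
    cmd.Parameters.AddWithValue("@FirstName", FirstName);
    cmd.Parameters.AddWithValue("@LastName", LastName);
    // User, date and reason come from the server, not from the form
    cmd.Parameters.AddWithValue("@ChangeUser", WindowsIdentity.GetCurrent().Name);
    cmd.Parameters.AddWithValue("@ChangeDate", DateTime.Now);
    cmd.Parameters.AddWithValue("@ChangeReason", "Employee record updated");

    try
    {
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    catch (Exception)
    {
        throw;
    }
    finally
    {
        cmd.Dispose();
        conn.Close();
    }
}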
What you decide to record after these essentials are covered is entirely up to specifications set forth by business analysts or the product owners. Save your file(s). Open Recordaudit.aspx.cs Since we have written the audit functionality, we simply need to call it from our code-behind file. The functionality is shown from the project downloads file. From the solution, we do the following: - Left click the plus (+) sign next to recordaudit.aspx. - Double click recordaudit.aspx.cs. In our submit click event, we do the following: - Set our rhd object, which was instantiated from RecordHistoryDataAction.cs, to the appropriate server-side form controls. - Set our Audit_IDvariable from our object to our session object, which holds the primary key value from our parent table. This is how we're establishing the primary/foreign key relationship. - Call InsertAuditRecord. - Set our placeholder controls to their appropriate visibility. - Remove our session object's value. It should be noted, we remove the session object's value after each audit has occurred to ensure we don't insert the wrong primary key from our parent table, to our child table. Save your file, and run the project. You should now see a listing of all employees available from our parent table. Clicking on an employee link should take you to our record audit page, showing that employee information. You should be able to change the employee information, click submit, and then see a success message that provides a link back to our home page. You can check the database to ensure the audit was recorded by doing the following: - Open Microsoft SQL Server 2008 from the desktop. From Microsoft SQL Server 2008, do the following: - Left click to select mwd_RecordHistory. - Left click on New Query. Type the following in the query window: use mwd_RecordHistory select * from RecordHistory_Audit You should see the newly inserted record from our previous operation. Summary In this article you learned how to implement an audit history of employee records that we updated. Furthermore, you learned the following: - Restoring a database from a backup. - Creating an ASP.NET application. - Separating data handling operations into the following: - Reading data - Inserting data - Understanding considerations for what information should be stored in an audit, and protecting this information from being tampered - How to work with session objects and understanding when to use them Take the knowledge gained in this article and expand your auditing system to meet any needs you may have. If you have questions, please contact me. Code Download About the AuthorRyan Butler is the founder of Midwest Web Design. His skill sets include HTML, CSS, Flash, JavaScript, ASP.NET, PHP and database technologies. Original: Jan. 25, 2011
http://www.webreference.com/programming/asp_net/audit-history/2.html
CC-MAIN-2015-18
refinedweb
1,635
53.1
Building an AR game with ARKit and SpriteKit
This article is part of an ARKit course.
ARKit is the new Apple framework that integrates device motion tracking, camera capture, and scene processing to build augmented reality (AR) experiences. When using ARKit, you have three options to create your AR world:
- SceneKit, to render 3D overlay content
- SpriteKit, to render 2D overlay content
- Metal, to build your own view for an AR experience
In this tutorial, we're going to explore the basics of ARKit and SpriteKit by building a game, something inspired by Pokemon Go, but with ghosts; check out this video:
Every few seconds, a little ghost appears randomly in the scene and a counter in the bottom-left part of the screen is incremented. When you tap on a ghost, it fades out, playing a sound and decrementing the counter.
The code of this project is hosted on GitHub. Let's start by reviewing what you'll need to develop and run this project.
What you'll need
First of all, ARKit requires an iOS device with an A9 or later processor for a full AR experience. In other words, you'll need an iPhone 6s or better, an iPhone SE, any iPad Pro, or the 2017 iPad.
ARKit is a feature of iOS 11, so you'll need to have this version installed and use Xcode 9 for development. At the time of writing, iOS 11 and Xcode 9 were still in beta, so you needed to enroll in the Apple Developer Program; however, Apple has since released both to the public, so a paid developer account is no longer required. You can find more info about installing iOS 11 beta here and Xcode beta here. In case something changes in a later version, the app of this tutorial was built with Xcode beta 2.
For the game, we'll need images to represent the ghosts and a sound effect to play when one is removed. A great site to find free game assets is OpenGameArt.org. I chose this ghost image and this ghost sound effect, but you can use any other files you want.
Creating the project
Open Xcode 9 and create a new AR app. Enter the project information, choosing Swift as the language and SpriteKit as the content technology. Note that if you sign the app with a free developer account, you can't have more than 3 apps installed on your device.
The first time you install the app on your device, you'll probably be asked to trust the certificate on the device; just follow the instructions. This way, when the app is run, you'll be asked to give permission to use the camera. After that, a new sprite will be added to the scene when you touch the screen, positioned according to the orientation of the camera.
Now that we have set up the project, let's take a look at the code.
How SpriteKit works with ARKit
If you open Main.storyboard, you'll see there's an ARSKView that fills the entire screen. This view renders the live video feed from the device camera as the scene background, placing 2D images (as SpriteKit nodes) in the 3D space (as ARAnchor objects). When you move the device, the view automatically rotates and scales the images (SpriteKit nodes) corresponding to anchors (ARAnchor objects) so that they appear to track the real world seen by the camera. This view is managed by the class ViewController.swift.
First, in the viewDidLoad method, it turns on some debug properties of the view and then creates the SpriteKit scene from the automatically created scene Scene.sks: override func viewDidLoad() { super.viewDidLoad() // Set the view's delegate sceneView.delegate = self // Show statistics such as fps and node count sceneView.showsFPS = true sceneView.showsNodeCount = true // Load the SKScene from 'Scene.sks' if let scene = SKScene(fileNamed: "Scene") { sceneView.presentScene(scene) } } Then, the method viewWillAppear configures the session with the class ARWorldTrackingConfiguration. The session (an ARSession object) manages the motion tracking and image processing required to create an AR experience: override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) // Create a session configuration let configuration = ARWorldTrackingConfiguration() // Run the view's session sceneView.session.run(configuration) } You can configure the session with the ARWorldTrackingConfiguration class to track the device’s movement with six degrees of freedom (6DOF). The three rotation axes: - Roll, the rotation on the X-axis - Pitch, the rotation on the Y-axis - Yaw, the rotation on the Z-axis And three translation: - Surging, moving forward and backward on the X-axis - Swaying, moving left and right on the Y-axis - Heaving, moving up and down on the Z-axis Alternatively, you can also use AROrientationTrackingConfiguration, which provides three degrees of freedom (3DOF) for simple motion tracking in less capable devices and as a fallback in situations where 6DOF tracking is temporarily unavailable. A few lines below, you’ll find the method view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode?. When an anchor is added, this method provides a custom node for that anchor that will be added to the scene. In this case, it returns an SKLabelNode to display the emoji that is presented to the user: func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? { // Create and configure a node for the anchor added to the view's session. let labelNode = SKLabelNode(text: "?") labelNode.horizontalAlignmentMode = .center labelNode.verticalAlignmentMode = .center return labelNode; } But when is this anchor created? It is done in the file Scene.swift, the class that manages the Sprite scene ( Scene.sks), specifically, in this method: override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) { guard let sceneView = self.view as? ARSKView else { return } // Create anchor using the camera's current position if let currentFrame = sceneView) sceneView.session.add(anchor: anchor) } } As you can read in the comments, it creates an anchor using the camera’s current position, then it creates a matrix to position the anchor 0.2 meters in front of the camera and add it to the scene. An ARAnchor uses a 4×4 matrix represents the combined position, rotation or orientation, and scale of an object in three-dimensional space. In the 3D programming world, matrices are used to represent graphical transformations like translation, scaling, rotation, and projection. Through matrix multiplication, multiple transformations can be concatenated into a single transformation matrix. Here’s a good post about the math behind transforms. Also, the Core Animation Programming Guide has a section about manipulating layers in three dimensions where you can find matrix configurations for some common transformations. 
Back to the code, we start with an identity matrix ( matrix_identity_float4x4): 1.0 0.0 0.0 0.0 // This row represents X 0.0 1.0 0.0 0.0 // This row represents Y 0.0 0.0 1.0 0.0 // This row represents Z 0.0 0.0 0.0 1.0 // This row represents W If you’re wondering what is W: If w == 1, then the vector (x, y, z, 1) is a position in space. If w == 0, then the vector (x, y, z, 0) is a direction. Then, the Z-axis of column number 3 is modified with the value -0.2 to indicate a translation in that axis (a negative z value places an object in front of the camera). If you print the value of the translation matrix at this point, you’ll see it’s printed as an array of vectors, where each vector represents a column. [ [1.0, 0.0, 0.0, 0.0 ], [0.0, 1.0, 0.0, 0.0 ], [0.0, 0.0, 1.0, 0.0 ], [0.0, 0.0, -0.2, 1.0 ] ] Probably it’s easier to see it this way: 0 1 2 3 // Column number 1.0 0.0 0.0 0.0 // This row represents X 0.0 1.0 0.0 0.0 // This row represents Y 0.0 0.0 1.0 -0.2 // This row represents Z 0.0 0.0 0.0 1.0 // This row represents W Then, this matrix is mutliplied by the transformation matrix of the camera’s current frame to get the final matrix that will be used to position the new anchor. For example, assumming the following camera’s transform matrix (as an array of columns): [ [ 0.103152, -0.757742, 0.644349, 0.0 ], [ 0.991736, 0.0286687, -0.12505, 0.0 ], [ 0.0762833, 0.651924, 0.754438, 0.0 ], [ 0.0, 0.0, 0.0, 1.0 ] ] The result of the multiplication will be: [ [0.103152, -0.757742, 0.644349, 0.0 ], [0.991736, 0.0286687, -0.12505, 0.0 ], [0.0762833, 0.651924, 0.754438, 0.0 ], [-0.0152567, -0.130385, -0.150888, 1.0 ] ] Here’s more information about how to multiply matrices and here’s a matrix multiplication calculator. Now that you understand how the sample works, let’s modify it to make our game. Building the SpriteKit scene In the file Scene.swift, let’s add the following properties: class Scene: SKScene { let ghostsLabel = SKLabelNode(text: "Ghosts") let numberOfGhostsLabel = SKLabelNode(text: "0") var creationTime : TimeInterval = 0 var ghostCount = 0 { didSet { self.numberOfGhostsLabel.text = "\(ghostCount)" } } ... } We’re adding two labels, one that represents the number of ghosts present in the scene, a time interval to control the creating of the ghosts, and the ghost counter, with a property observer to update the label whenever its value changes. Next up, download the sound that will be played when the ghost is removed and drag it to the project: And add the following line to the class: let killSound = SKAction.playSoundFileNamed("ghost", waitForCompletion: false) We’ll call this action later to play the sound. In the method didMove, let’s add the labels to the scene: override func didMove(to view: SKView) { ghostsLabel.fontSize = 20 ghostsLabel.fontName = "DevanagariSangamMN-Bold" ghostsLabel.color = .white ghostsLabel.position = CGPoint(x: 40, y: 50) addChild(ghostsLabel) numberOfGhostsLabel.fontSize = 30 numberOfGhostsLabel.fontName = "DevanagariSangamMN-Bold" numberOfGhostsLabel.color = .white numberOfGhostsLabel.position = CGPoint(x: 40, y: 10) addChild(numberOfGhostsLabel) } You can use a site like iOS Fonts to visually choose the font for the labels. The position coordinates represent the bottom-left section of the screen (the code to make this happen will be explained later). 
I chose to place them in that section of the screen to avoid orientation issues because the size of the scene changes with the orientation, however, the coordinates remain the same, which can cause the labels to appear out of the screen or in odd positions (which can be fixed by overriding the didChangeSize method or by using UILabels instead of SKLabelNodes). Now, to create the ghosts at a defined time interval, we’ll need some sort of timer. The update method, which is called before a frame is rendered (in average 60 times per second), can help us with this: override func update(_ currentTime: TimeInterval) { // Called before each frame is rendered if currentTime > creationTime { createGhostAnchor() creationTime = currentTime + TimeInterval(randomFloat(min: 3.0, max: 6.0)) } } The argument currentTime represents the current time in the app, so if this is greater than the time represented by creationTime, a new ghost anchor will be created and creationTime will be incremented by a random amount of seconds, in this case, between 3 and 6. Here’s the definition of randomFloat: func randomFloat(min: Float, max: Float) -> Float { return (Float(arc4random()) / 0xFFFFFFFF) * (max - min) + min } For the createGhostAnchor method, we need to get the scene view: func createGhostAnchor(){ guard let sceneView = self.view as? ARSKView else { return } } Then, since the functions we’re going to use work with radians, let’s define 360 degrees in radians: func createGhostAnchor(){ ... let _360degrees = 2.0 * Float.pi } Now, to place the ghost in a random position, let’s create one random rotation matrix on the X-axis and one on the Y-axis: func createGhostAnchor(){ ... let rotateX = simd_float4x4(SCNMatrix4MakeRotation(_360degrees * randomFloat(min: 0.0, max: 1.0), 1, 0, 0)) let rotateY = simd_float4x4(SCNMatrix4MakeRotation(_360degrees * randomFloat(min: 0.0, max: 1.0), 0, 1, 0)) } Luckily, we don’t have to build the rotation matrix manually, there are functions that returns a matrix describing a rotation, translation, or scale transformation. In this case, SCNMatrix4MakeRotation returns a matrix describing a rotation transformation. The first parameter represents the angle of rotation, in radians. The expression _360degrees * randomFloat(min: 0.0, max: 1.0) gives a random angle from 0 to 360 degrees. The rest of SCNMatrix4MakeRotation’s parameters represent the X, Y, and Z-components of the rotation axis respectively, that’s why we’re passing 1 as the parameter that corresponds to X in the first call and 1 as the parameter that corresponds to Y in the second call. The result of SCNMatrix4MakeRotation is converted to a 4×4 matrix using the simd_float4x4 struct. We can combine both rotation matrices with a multiplication operation: func createGhostAnchor(){ ... let rotation = simd_mul(rotateX, rotateY) } Then, we create a translation matrix in the Z-axis with a random value between -1 and -2 meters: func createGhostAnchor(){ ... var translation = matrix_identity_float4x4 translation.columns.3.z = -1 - randomFloat(min: 0.0, max: 1.0) } Combine the rotation and translation matrices: func createGhostAnchor(){ ... let transform = simd_mul(rotation, translation) } Create and add the anchor to the session: func createGhostAnchor(){ ... let anchor = ARAnchor(transform: transform) sceneView.session.add(anchor: anchor) } And increment the ghost counter: func createGhostAnchor(){ ... ghostCount += 1 } Now the only piece of code that is missing is the one executed when the user touches a ghost to remove it. 
Override the touchesBegan method to get the touch object first: override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) { guard let touch = touches.first else { return } } Then get the location of the touch in the AR scene: override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) { ... let location = touch.location(in: self) } Get the nodes at that location: override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) { ... let hit = nodes(at: location) } Get the first node (if any) and check if the node represents a ghost (remember that labels are also a node): override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) { ... if let node = hit.first { if node.name == "ghost" { } } } If that’s the case, group the fade-out and sound actions, create an action sequence, execute it, and decrement the ghost counter: override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) { ... if let node = hit.first { if node.name == "ghost" { let fadeOut = SKAction.fadeOut(withDuration: 0.5) let remove = SKAction.removeFromParent() // Group the fade out and sound actions let groupKillingActions = SKAction.group([fadeOut, killSound]) // Create an action sequence let sequenceAction = SKAction.sequence([groupKillingActions, remove]) // Excecute the actions node.run(sequenceAction) // Update the counter ghostCount -= 1 } } } And our scene is done, now let’s work on the view controller of ARSKView. Building the view controller In viewDidLoad, instead of loading the scene Xcode created for us, let’s create our scene in this way: override func viewDidLoad() { ... let scene = Scene(size: sceneView.bounds.size) scene.scaleMode = .resizeFill sceneView.presentScene(scene) } This will ensure our scene fills the entire view and therefore, the entire screen (remember that the ARSKView defined in Main.storyboard fills the entire screen). This will also help to position the game labels in the bottom-left section of the screen, with the position coordinates defined in the scene. Now it’s time to include the ghost images. In my case, the image format was SVG so I converted to PNG and for simplicity, only added the first 6 ghosts of the image, creating the 2X and 3X versions (I didn’t see the point on creating the 1X version since devices using this resolution probably won’t be able to run the app anyway). Drag the image to Assets.xcassets: Notice the number at the end of the image’s names – this will help us to randomly choose an image to create the SpriteKit node. Replace the code in view(_ view: ARSKView, nodeFor anchor: ARAnchor) with this: func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? { let ghostId = randomInt(min: 1, max: 6) let node = SKSpriteNode(imageNamed: "ghost\(ghostId)") node.name = "ghost" return node } We give all the nodes the same name ghost, so we can identify them when it’s time to remove them. Of course, don’t forget the randomInt function: func randomInt(min: Int, max: Int) -> Int { return min + Int(arc4random_uniform(UInt32(max - min + 1))) } And we’re done! Let’s test it. Testing the app Run the app on a real device, give permissions to the camera, and start searching for the ghosts in all directions: A new ghost should appear every 3 to 6 seconds, the counter should be updated and a sound should play every time you hit a ghost. Try to take the counter to zero! Conclusion There are two great things about ARKit. 
One is that with a few lines of code we can create amazing AR apps; the second is that we can leverage our knowledge of SpriteKit and SceneKit. ARKit actually has a small number of classes; it's more about learning how to use the aforementioned frameworks and adapting them a little to create an AR experience. You can extend this app by adding game rules, introducing bonus points, or changing the images and sound. Also, using Pusher, you could add multiplayer features by syncing the state of the game. Remember that you can find the Xcode project in this GitHub repository. July 25, 2017
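A quick cross-check of the anchor-placement math from the walkthrough earlier (the camera transform multiplied by a 0.2 m translation along -Z): the same column-major multiplication can be reproduced outside of Swift. The sketch below is illustrative only and is not part of the tutorial's code; the camera values are the example ones quoted above.

import numpy as np

# The matrices above are written as arrays of *columns* (simd convention),
# so build them with column_stack to get the row-major layout numpy expects.
camera = np.column_stack([
    [0.103152, -0.757742, 0.644349, 0.0],
    [0.991736, 0.0286687, -0.12505, 0.0],
    [0.0762833, 0.651924, 0.754438, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

translation = np.identity(4)
translation[2, 3] = -0.2  # 0.2 meters in front of the camera (negative Z)

anchor_transform = camera @ translation

# The last column holds the anchor's position, matching the result shown earlier.
print(anchor_transform[:, 3])  # approx. [-0.0153, -0.1304, -0.1509, 1.0]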
https://blog.pusher.com/building-ar-game-arkit-spritekit/?learn-arkit
CC-MAIN-2021-21
refinedweb
2,945
54.12
On Sun, Aug 14, 2011 at 17:03, Jochen Keil <jochen.keil at gmail.com> wrote: > > It's because your function is doing stuff instead of listening for X > > events. If you want to go off and do something else, forkIO a thread > I've already tried forkIO, xfork, seq, par, etc. all with more or less > seq and par won't do anything useful here (seq has nothing to do with parallelism and par isn't designed for this kind of usage). Also, xmonad doesn't use OS threads, and it's just occurred to me that there's no way to hook the X event loop into GHC's thread scheduler, so forkIO won't actually be useful anyway. :( xfork spawns a subprocess, which would then need to send X events to the main event loop which you would handle in the handleEventHook. This is also how an independent thread would need to communicate. > the same result: the program's window will be mapped only after the > function returns. > ...this happens with forkIO as well? I'm tempted to say you didn't use it properly; "after the function returns" isn't one of the behaviors I'd expect unless you're trying to synchronize with the thread as well. > >. > That's *why* event passing is how it should be solved. As to the how: sendEvent and the handleEventHook. > timeout <- io $ waitForEvent d 1000000 > *sigh* This is a good way to make xmonad stop processing events. You must *not* do this if you expect xmonad to be usable while waiting. (This is why X.A.Submap doesn't try to handle timeouts.) Instead, you need to have the event loop manage it. Use ExtensibleState to store the keymap state and start time; the handleEventHook recognizes a key, checks the ExtensibleState to see if it's useful, and if so acts on it and returns All False to prevent xmonad's default handler from also acting on the key. Acting on it may involve updating the state to point to a new submap, or performing some xmonad action. You should also forkIO a timeout thread which invokes (delay) (see Control.Concurrent) and then sendEvent to send a timeout event which is also processed by the handleEventHook. (Remember to clear the ExtensibleState as well as invoking releaseKeys.) > -- this is oversimplistic: you need to make sure the MyTimeoutEvent corresponds > -- to the current vmap and not an earlier one, by storing some kind of id in both. > -- You can't simply pass the vmap because it's going to take a trip through the X11 > -- server and there's no guarantee the same pointer comes back. > handleEventHook (SendEvent {ev_type = e}) > | e == MyTimeoutEvent = releaseKeys >> ES.put () >> return (All False) > handleEventHook (KeyPress {ev_key = k}) = do > vmap <- ES.get -- state of vi keymap > -- keyDecision goes here, more or less > -- if you update the keymap, delete the > > -- the initial key binding then places the vi keymap in ES and spawns a timeout > -- thread, allowing the handleEventHook to do the rest. -- brandon s allbery allbery.b at gmail.com wandering unix systems administrator (available) (412) 475-9364 vm/sms -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
http://www.haskell.org/pipermail/xmonad/2011-August/011643.html
CC-MAIN-2014-42
refinedweb
530
72.16
! WARNING: Some people weren't able to get the FPS library to work. If this is the case, you can try downloading an older version of the Arduino IDE here.. Update (2/2/19): Thanks to JustinO60, there is a design outfitted for the new FPS that can be downloaded here (Main, Cover). Grand Prize in the Sensors Contest Grand Prize in the Epilog Challenge VI Participated in the Battery Powered Contest 358 Discussions 10 days ago How do I integrate it with my fridge door..and please the video on how you did it thank you Question 4 weeks ago Hi, excellent your explain and project. I need to make a project and i would like you a question. It posible to have differents output for each fingerprint code? for example For finger 1(Person Walter) (code 25) output => drive a motor for 10 seconds and stop For finger 9 (Person Smith) (code 55) output => drive a motor for 5 seconds and stop For.... 10, 12, 14 loops. What would yhe right the right path for resolve this situation? Thanks Answer 4 weeks ago Hey Daidesigns, That’d be easily possible by adding more code after the line that says “access granted”, then use the id variable. Hope this helped! 3 months ago 100$ says you can't make it for ATmega32 and post full source code for me to copy. Tip 5 months ago I created CAD files for the new FPS's from Sparkfun while attempting to pursue this project. Attached Below Reply 5 months ago Thanks for the great addition! I'll mention your comment in the Instrucbtable! Question 1 year ago on Step 6 The Blink example and the Enroll example will not verify on my IDE. Answer 1 year ago I believe you need to upload it as an Arduino Uno. Also, have you properly installed the library? Google searching how to install an Arduino or see the README of the FPS library page Answer 1 year ago So I have already successfully loaded the main programs on the LCD and the ATtiny85. I guess that means I don't really need to mess with the others. Answer 1 year ago the error message I am getting for the enroll example is: Error compiling for board Duemilanove or Diecimila Arduino: 1.8.5 (Windows 7), Board: "Arduino Duemilanove or Diecimila, ATmega328P" C:\Users\user11\Downloads\F24WRJCHW0FUO3Z\F24WRJCHW0FUO3Z.ino:10:25: fatal error: FPS_GT511C3.h: No such file or directory #include "FPS_GT511C3.h" ^ compilation terminated. exit status 1 Error compiling for board Arduino Duemilanove or Diecimila. This report would have more information with "Show verbose output during compilation" option enabled in File -> Preferences. Answer 1 year ago The FPS library I used is sadly not updated, so you may have to downgrade your IDE. What is the error you're getting? Question 1 year ago So I have already successfully loaded the main programs on the LCD and the ATtiny85. I guess that means I don't really need to mess with the others. Question 1 year ago on Step 7 I have tested the LCD with my Arduino Uno board and it works nicely. I have downloaded the files for the AT Tiny85 but can not load them onto the chip. I keep getting this error message from IDE: Invalid library found in C:\Users\user11\Documents\Arduino\libraries\ArduinoProjectHandbook_CodeandLibraries: Answer 1 year ago I just tried this on my Arduino 1.6.12 IDE and it verified. Make sure you go to Tools>Board>ATtiny then Tools>Processor>ATtiny85. Let me know if you're still having trouble! Question 1 year ago RE: my question on step 7 from 4 weeks ago - I am now able to, and have loaded the ATtiny85, and have loaded the Final Code for the LCD unit. 
So its not a hardware problem on my end - I suspect a software/sketch problem?? Question 1 year ago on Step 12 the important note in step 1 says to add a voltage divider. Is there a schematic that shows this? in the schematic, D 10 on the LCD is connected to pin 1 on the FPS. the important note says to connect D 10 to pin 2 on the FPS. I understand the words of the note, but if I could see the connection lines of a schematic I would understand the connections better. Answer 1 year ago sorry for the horrible drawing, but I hope this is of help. Answer 1 year ago thanks 4 the reply - your drawing is perfectly fine. you are showing the connection to FPS pin 1. the instructable says FPS pin two (which is connected to D11 with a 560 Ω resistor) is the note in the instructable incorrect? Answer 1 year ago sorry no, I drew it wrong. The instructible is correct I assume, using FPS pin 2. Answer 1 year ago again - thanks for the reply - but now its more complicated and difficult to understand. there is already resistors connected to FPS 2. your sketch was nice - but I need to see a complete corrected schematic diagram in order to understand this.
https://www.instructables.com/id/DIY-Fingerprint-Scanning-Garage-Door-Opener/
CC-MAIN-2019-30
refinedweb
860
72.97
RDF::RDFa::Generator::HTML::Pretty::Note - a note about something Often you'll want to create your own subclass of this as the basic notes are pretty limited (plain text only). $note = RDF::RDFa::Generator::HTML::Pretty::Note->new($subject, $text) $subject is an RDF::Trine::Node (though probably not a Literal!) indicating the subject of the note. $text is the plain text content of the note. $note->is_relevent_to($node) $node is an RDF::Trine::Node. Checks if the subject of $note is $node. Alias: is_relelvant_to. $note->node($namespace, $tagname) Gets an XML::LibXML::Element representing the note. $namespace and $tagname are used to create the new element. If an unexpected namespace or tagname is supplied, may die. Expected namespace is ''. Expected tagname is any XHTML tag that can contain text nodes. Please report any bugs to. RDF::RDFa::Generator, RDF::RDFa::Linter..
http://search.cpan.org/~tobyink/RDF-RDFa-Generator-0.102/lib/RDF/RDFa/Generator/HTML/Pretty/Note.pm
CC-MAIN-2016-44
refinedweb
143
53.17
This is a follow-up to the post about deploying R models using web services. Within 1 hour I was able to take an existing R function, publish it to the Domino Data Lab web service AND write a simple Python script to call the service. Domino makes it super simple. The R code I used was the Sales Price calculator I previously wrote about on Rpubs. I stripped out any calling code so the R script uploaded to Domino was just a function. The next step was to create the API endpoint that calls the R function. This is as simple as pasting in the name of the R script along with the function name. With the API endpoint published, it's just a case of consuming it. Domino provides sample code in lots of different languages. I chose Python (I'm currently learning Python). This is a very simple example.

import unirest
import json
import yaml

business_name = "Nelsons"
rating = 6
turnover = 20000
competitor_intensity = 5
months_trading = 12
product = "Maloja FadriM Multisport Jacket"
quantity = 20

response = unirest.post("",
    headers={
        "X-Domino-Api-Key": "Enter your private API key here",
        "Content-Type": "application/json"
    },
    params=json.dumps({
        "parameters": [business_name, rating, turnover, competitor_intensity, months_trading, product, quantity]
    })
)

response_data = yaml.load(response.raw_body)
print (response_data['result'])

The beauty of this – the code can be called from anywhere. My account with Domino is on the free instance, but paid instances can be spun up with huge amounts of RAM. There are so many options around now that help with the analytical workflow. Of course, data preparation is still time consuming, but we're seeing tools and packages to make it more efficient. How about taking a legacy spreadsheet? Take the processing out to R and then publish it for consumption by a web app using Shiny or Domino. In my past I've created many Excel apps. I always wanted a way to publish them to save me from "file/open/refresh/email". We have different options today.
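The unirest library used above is no longer maintained; a roughly equivalent call can be made with the more common requests package. The snippet below is an illustrative sketch only — the endpoint URL is hypothetical, and the header and payload format are assumed to match the Domino example above, including a JSON response with a 'result' key.

import json
import requests

# Hypothetical endpoint URL; substitute the one Domino gives you for your project.
url = "https://app.dominodatalab.com/MYUSERNAME/MYPROJECT/endpoint"

headers = {
    "X-Domino-Api-Key": "Enter your private API key here",
    "Content-Type": "application/json",
}
payload = {
    "parameters": ["Nelsons", 6, 20000, 5, 12, "Maloja FadriM Multisport Jacket", 20]
}

response = requests.post(url, headers=headers, data=json.dumps(payload), timeout=30)
response.raise_for_status()

# Assumes the endpoint returns JSON; fall back to parsing response.text otherwise.
print(response.json()["result"])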
http://leehbi.com/2015/06/consuming-r-through-api-endpoints-with-domino-data-lab/
CC-MAIN-2017-26
refinedweb
334
66.03
Let’s say we’ve loaded a bunch of posts with loadAll: match "index.html" $ do route idRoute compile $ do myPosts <- loadAll "posts/*" ... We can iterate over these posts in a template file if we expose them with listField: match "index.html" $ do route idRoute compile $ do myPosts <- loadAll "posts/*" let indexContext = listField "allPostsWithExtraField" defaultContext (return myPosts) <> defaultContext ... In a template, we can do: $for(allPostsWithExtraField)$ The title of one of the posts: $title$ $endfor$ Now, let’s say we want to have access to another field inside that loop, perhaps one that is computed from some existing metadata of the item: $for(allPostsWithExtraField)$ The title of one of the posts: $title$ And here's the URL without its file extension: $url-plain$ $endfor$ Obviously, we don’t want to manually introduce a metadata field for this. But how do we “add” another field? I struggled with this, so I cheated and asked Jasper Van der Jeugt, the creator of Hakyll, on the Hakyll Google Group. Jasper provided a complete working solution: First we define a function which does what you want: import System.FilePath (dropExtension) urlPlainField :: Context a urlPlainField = field "url-plain" $ \item -> do mbFilePath <- getRoute (itemIdentifier item) case mbFilePath of Nothing -> return "???" Just filePath -> return $ toUrl $ dropExtension filePath And then we add it to some Contextthat we define for the use in the list: postCtx :: Context String postCtx = urlPlainField `mappend` defaultContext Now, the following should enable you to use the new $url-plain$: let ctx = listField "how-do-i-posts" postCtx (return howDoIPosts) In our example, this solution looks like the following: match "index.html" $ do route idRoute compile $ do myPosts <- loadAll "posts/*" let postContext = urlPlainField <> defaultContext let indexContext = listField "allPostsWithExtraField" postContext (return myPosts) <> defaultContext ... So, we provided listField with a context that includes urlPlainField. Checking a proposed solution is always easier than finding it yourself, and I wanted to know why this is the solution. So, into the source code we go! In Hakyll.Web.Template.Context, we find: -------------------------------------------------------------------------------- listField :: String -> Context a -> Compiler [Item a] -> Context b listField key c xs = listFieldWith key c (const xs) -------------------------------------------------------------------------------- listFieldWith :: String -> Context a -> (Item b -> Compiler [Item a]) -> Context b listFieldWith key c f = field' key $ fmap (ListField c) . f So, listField is a wrapper for listFieldWith. listFieldWith does the following: "allPostsWithExtraField") ListField, hence it being used as the constructor a bit further in the function. fis first applied to some incoming Item b, and yields a Compiler [Item a], which we will call is. fmapover that Compiler [Item a]and apply the ListFieldconstructor to it. The constructor already has capplied to it (which is the context we passed! In our case, postContext). ListField c isvalue is then applied to field' key, producing a Context. Ok, so now the context is being carried around somewhere. But we haven’t actually done anything with it yet. So, we’ll need to find out where a ListField is pattern matched and deconstructed. That happens to be in Hakyll.Web.Template, in a where clause in the applyTemplate' function: ... 
applyElem (For e b s) = applyExpr e >>= \cf -> case cf of StringField str -> fail $ "Hakyll.Web.Template.applyTemplateWith: expected ListField but " ++ "got StringField for expr " ++ show e ++ ", namely " ++ str ListField c xs -> do sep <- maybe (return "") go s bs <- mapM (applyTemplate' b c) xs return $ intercalate sep bs ... We’re interested in what is going to be done with c, which is our context. As you can see, it’s used in a call to applyTemplate' that is mapped over all the items (called xs) inside our ListField. So, what happens, is: myPosts <- loadAll "posts/*", which yields us an [Item a]. ListFieldwith the listFieldfunction. Item a. applyElem, the template b(which is just everything between $for$and $endfor$, the body), our context cand an Item a, are all passed to applyTemplate'. And how can we then use something from the Item to generate some new value? Let’s take the definition of urlPlainField again for that: urlPlainField :: Context a urlPlainField = field "url-plain" $ \item -> do mbFilePath <- getRoute (itemIdentifier item) case mbFilePath of Nothing -> return "???" Just filePath -> return $ toUrl $ dropExtension filePath Each Context always gets access to the item that was passed to applyTemplate'. So, we can just get the Identifier of the item with itemIdentifier. And with an Identifier, we can access: getMetadata. toFilePath. identifierVersion. getRoute. saveSnapshot <snapshotName>. loadSnapshot <snapshotName>.
https://beerendlauwers.be/posts/2016-04-29-hakyll-adding-to-loadall.html
CC-MAIN-2022-21
refinedweb
727
61.87
Architecture of Convolutional Neural Networks (CNNs) demystified Introduction I will start with a confession – there was a time when I didn’t really understand deep learning. I would look at the research papers and articles on the topic and feel like it is a very complex topic. I tried understanding Neural networks and their various types, but it still looked difficult. Then one day, I decided to take one step at a time. I decided to start with basics and build on them. I decided that I will break down the steps applied in these techniques and do the steps (and calculations) manually, until I understand how they work. It was time taking and intense effort – but the results were phenomenal. Now, I can not only understand the spectrum of deep learning, I can visualize things and come up with better ways because my fundamentals are clear. It is one thing to apply neural networks mindlessly and it is other to understand what is going on and how are things happening at the back. Neural Networks from Scratch In this article I am going to discuss the architecture behind Convolutional Neural Networks, which are designed to address image recognition and classification problems. I am assuming that you have a basic understanding of how a neural network works. If you’re not sure of your understanding I would request you to go through this article before you read on. Table of Contents: - How does a machine look at an image? - How do we help a neural network to identify images? - Defining a Convolutional neural network - Convolution Layer - Pooling Layer - Output Layer - Putting it all together - Using CNN to classify images 1. How does a machine look at an image? Human brain is a very powerful machine. We see (capture) multiple images every second and process them without realizing how the processing is done. But, that is not the case with machines. The first step in image processing is to understand, how to represent an image so that the machine can read it? In simple terms, every image is an arrangement of dots (a pixel) arranged in a special order. If you change the order or color of a pixel, the image would change as well. Let us take an example. Let us say, you wanted to store and read an image with a number 4 written on it. The machine will basically break this image into a matrix of pixels and store the color code for each pixel at the representative location. In the representation below – number 1 is white and 256 is the darkest shade of green color (I have constrained the example to have only one color for simplicity). Once you have stored the images in this format, the next challenge is to have our neural network understand the arrangement and the pattern. 2. How do we help a neural network to identify images? A number is formed by having pixels arranged in a certain fashion. Let’s say we try to use a fully connected network to identify it? What does it do? A fully connected network would take this image as an array by flattening it and considering pixel values as features to predict the number in image. Definitely it’s tough for the network to understand what’s happening underneath. It’s impossible even for a human to identify that this is a representation of number 4. We have lost the spatial arrangement of pixels completely. What can we possibly do? Let’s try to extract features from the original image such that the spatial arrangement is preserved. Case 1: Here we have used a weight to multiply the initial pixel values. It does get easier for the naked eye to identify that this is a 4. 
But again to send this image to a fully connected network, we would have to flatten it. We are unable to preserve the spatial arrangement of the image. Case 2: Now we can see that flattening the image destroys its arrangement completely. we need to devise a way to send images to a network without flattening them and retaining its spatial arrangement. We need to send 2D/3D arrangement of pixel values. Let’s try taking two pixel values of the image at a time rather than taking just one. This would give the network a very good insight as to how does the adjacent pixel look like. Now that we’re taking two pixels at a time, we shall take two weight values too. I hope you noted that the image now became a 3 column arrangement from a 4 column arrangement initially. The image got smaller since we’re now moving two pixels at a time (pixels are getting shared in each movement). We made the image smaller and we can still understand that it’s a 4 to quite a great extent. Also, an important fact to realise is that we we’re taking two consecutive horizontal pixels, therefore only horizontal arrangement is considered here. This is one way to extract features from an image. We’re able to see the left and middle part well, however the right side is not so clear. This is because of the following two problems- - The left and right corners of the image is multiplied by the weights just once. - The left part is still retained since the weight value is high while the right part is getting slightly lost due to low weight value. Now we have two problems, we shall have two solutions to solve them as well. Case 3: The problem encountered is that the left and right corners of the image is getting passed by the weight just once. What we need to do is we need the network to consider the corners also like other pixels. We have a simple solution to solve this. Put zeros along the sides of the weight movement. You can see that by adding the zeroes the information from the corners is retained. The size of the image is higher too. This can be used in cases where we don’t want the image size to reduce. Case 4: The problem we’re trying to address here is that a smaller weight value in the right side corner is reducing the pixel value thereby making it tough for us to recognize. What we can do is, we take multiple weight values in a single turn and put them together. A weight value of (1,0.3) gave us an output of the form while a weight value of the form (0.1,5) would give us an output of the form A combined version of these two images would give us a very clear picture. Therefore what we did was simply use multiple weights rather than just one to retain more information about the image. The final output would be a combined version of the above two images. Case 5: Till now we have used the weights which were trying to take horizontal pixels together. But in most cases we need to preserve the spatial arrangement in both horizontal and vertical direction. We can take the weight as a 2D matrix which takes pixels together in both horizontal and vertical direction. Also, keep in mind that since we have taken both horizontal and vertical movement of weights, the output is one pixel lower in both horizontal and vertical direction. Special thanks to Jeremy Howard for the inspiring me to create these visuals. So what did we do? What we did above was that we were trying to extract features from an image by using the spatial arrangement of the images. 
To understand an image, it's extremely important for a network to understand how the pixels are arranged. What we did above is exactly what a convolutional neural network does. We can take the input image, define a weight matrix, and the input is convolved to extract specific features from the image without losing the information about its spatial arrangement. Another great benefit of this approach is that it reduces the number of parameters from the image. As you saw above, the convolved images had fewer pixels as compared to the original image. This dramatically reduces the number of parameters we need to train for the network. 3. Defining a Convolutional Neural Network We need three basic components to define a basic convolutional network. - The convolutional layer - The Pooling layer [optional] - The output layer Let's see each of these in a little more detail. 2.1 The Convolution Layer In this layer, what happens is exactly what we saw in case 5 above. Suppose we have an image of size 6*6. We define a weight matrix which extracts certain features from the image. We have initialized the weight as a 3*3 matrix. This weight shall now run across the image such that all the pixels are covered at least once, to give a convolved output. The value 429 above is obtained by adding the values obtained by element-wise multiplication of the weight matrix and the highlighted 3*3 part of the input image. The 6*6 image is now converted into a 4*4 image. Think of the weight matrix like a paint brush painting a wall. The brush first paints the wall horizontally and then comes down and paints the next row horizontally. Pixel values are used again when the weight matrix moves along the image. This basically enables parameter sharing in a convolutional neural network. Let's see how this looks in a real image. The weight matrix behaves like a filter, extracting particular information from the original image matrix. One weight combination might be extracting edges, another one might extract a particular color, while yet another might just blur the unwanted noise. The weights are learnt such that the loss function is minimized, similar to an MLP. Therefore weights are learnt to extract features from the original image which help the network in correct prediction. When we have multiple convolutional layers, the initial layers extract more generic features, while as the network gets deeper, the features extracted by the weight matrices are more and more complex and more suited to the problem at hand. The concept of stride and padding As we saw above, the filter, or the weight matrix, was moving across the entire image one pixel at a time. We can define this as a hyperparameter: how we would want the weight matrix to move across the image. If the weight matrix moves 1 pixel at a time, we call it a stride of 1. Let's see how a stride of 2 would look. As you can see, the size of the image keeps on reducing as we increase the stride value. Padding the input image with zeros across it solves this problem for us. We can also add more than one layer of zeros around the image in case of higher stride values. We can see how the initial shape of the image is retained after we padded the image with a zero. This is known as same padding, since the output image has the same size as the input (as opposed to valid padding, where no zeros are added and only the valid pixels of the input image are considered). The middle 4*4 pixels would be the same.
Here we have retained more information from the borders and have also preserved the size of the image. Multiple filters and the activation map One thing to keep in mind is that the depth dimension of the weight would be same as the depth dimension of the input image. The weight extends to the entire depth of the input image. Therefore, convolution with a single weight matrix would result into a convolved output with a single depth dimension. In most cases instead of a single filter(weight matrix), we have multiple filters of the same dimensions applied together. The output from the each filter is stacked together forming the depth dimension of the convolved image. Suppose we have an input image of size 32*32*3. And we apply 10 filters of size 5*5*3 with valid padding. The output would have the dimensions as 28*28*10. You can visualize it as – This activation map is the output of the convolution layer. 2.2 The Pooling Layer Sometimes when the images are too large, we would need to reduce the number of trainable parameters. It is then desired to periodically introduce pooling layers between subsequent convolution layers. Pooling is done for the sole purpose of reducing the spatial size of the image. Pooling is done independently on each depth dimension, therefore the depth of the image remains unchanged. The most common form of pooling layer generally applied is the max pooling. Here we have taken stride as 2, while pooling size also as 2. The max operation is applied to each depth dimension of the convolved output. As you can see, the 4*4 convolved output has become 2*2 after the max pooling operation. Let’s see how max pooling looks on a real image. As you can see I have taken convoluted image and have applied max pooling on it. The max pooled image still retains the information that it’s a car on a street. If you look carefully, the dimensions if the image have been halved. This helps to reduce the parameters to a great extent. Similarly other forms of pooling can also be applied like average pooling or the L2 norm pooling. Output dimensions It might be getting a little confusing for you to understand the input and output dimensions at the end of each convolution layer. I decided to take these few lines to make you capable of identifying the output dimensions. Three hyperparameter would control the size of output volume. - The number of filters – the depth of the output volume will be equal to the number of filter applied. Remember how we had stacked the output from each filter to form an activation map. The depth of the activation map will be equal to the number of filters. - Stride – When we have a stride of one we move across and down a single pixel. With higher stride values, we move large number of pixels at a time and hence produce smaller output volumes. - Zero padding – This helps us to preserve the size of the input image. If a single zero padding is added, a single stride filter movement would retain the size of the original image. We can apply a simple formula to calculate the output dimensions. The spatial size of the output image can be calculated as( [W-F+2P]/S)+1. Here, W is the input volume size, F is the size of the filter, P is the number of padding applied and S is the number of strides. Suppose we have an input image of size 32*32*3, we apply 10 filters of size 3*3*3, with single stride and no zero padding. Here W=32, F=3, P=0 and S=1. The output depth will be equal to the number of filters applied i.e. 10. The size of the output volume will be ([32-3+0]/1)+1 = 30. 
Therefore the output volume will be 30*30*10. 2.3 The Output layer After multiple layers of convolution and padding, we would need the output in the form of a class. The convolution and pooling layers would only be able to extract features and reduce the number of parameters from the original images. However, to generate the final output we need to apply a fully connected layer to generate an output equal to the number of classes we need. It becomes tough to reach that number with just the convolution layers. Convolution layers generate 3D activation maps while we just need the output as whether or not an image belongs to a particular class. The output layer has a loss function like categorical cross-entropy, to compute the error in prediction. Once the forward pass is complete the backpropagation begins to update the weight and biases for error and loss reduction. 3. Putting it all together – How does the entire network look like? CNN as you can now see is composed of various convolutional and pooling layers. Let’s see how the network looks like. - We pass an input image to the first convolutional layer. The convoluted output is obtained as an activation map. The filters applied in the convolution layer extract relevant features from the input image to pass further. - Each filter shall give a different feature to aid the correct class prediction. In case we need to retain the size of the image, we use same padding(zero padding), otherwise valid padding is used since it helps to reduce the number of features. - Pooling layers are then added to further reduce the number of parameters - Several convolution and pooling layers are added before the prediction is made. Convolutional layer help in extracting features. As we go deeper in the network more specific features are extracted as compared to a shallow network where the features extracted are more generic. - The output layer in a CNN as mentioned previously is a fully connected layer, where the input from the other layers is flattened and sent so as the transform the output into the number of classes as desired by the network. - The output is then generated through the output layer and is compared to the output layer for error generation. A loss function is defined in the fully connected output layer to compute the mean square loss. The gradient of error is then calculated. - The error is then backpropagated to update the filter(weights) and bias values. - One training cycle is completed in a single forward and backward pass. 4. Using CNN to classify images in KERAS Let’s try taking an example where we input several images of cats and dogs and we try to classify these images into their respective animal category using a CNN. This is a classic problem of image recognition and classification. What the machine needs to do is it needs to see the image and understand by the various features as to whether its a cat or a dog. The features can be like extracting the edges, or extracting the whiskers of a cat etc. The convolutional layer would extract these features. Let’s take a hand on the data set. These are the examples of some of the images in the dataset. We would first need to resize these images to get them all in the same shape. This is something we would generally need to do while handling images, since while capturing images, it would be impossible to capture all images of the same size. 
For simplicity of your understanding I have just used a single convolution layer and a single pooling layer, which generally doesn’t happen when we’re trying to make predictions. Dataset used can be downloaded from here. #import various packages import os import numpy as np import pandas as pd import scipy import sklearn import keras from keras.models import Sequential import cv2 from skimage import io %matplotlib inline #Defining the File Path cat=os.listdir("/mnt/hdd/datasets/dogs_cats/train/cat") dog=os.listdir("/mnt/hdd/datasets/dogs_cats/train/dog") filepath="/mnt/hdd/datasets/dogs_cats/train/cat/" filepath2="/mnt/hdd/datasets/dogs_cats/train/dog/" images=[] label = [] for i in cat: image = scipy.misc.imread(filepath+i) images.append(image) label.append(0) #for cat images for i in dog: image = scipy.misc.imread(filepath2+i) images.append(image) label.append(1) #for dog images #resizing all the images for i in range(0,23000): images[i]=cv2.resize(images[i],(300,300)) #converting images to arrays images=np.array(images) label=np.array(label) # Defining the hyperparameters filters=10 filtersize=(5,5) epochs =5 batchsize=128 input_shape=(300,300,3) #Converting the target variable to the required size from keras.utils.np_utils import to_categorical label = to_categorical(label) #Defining the model model = Sequential() model.add(keras.layers.InputLayer(input_shape=input_shape)) model.add(keras.layers.convolutional.Conv2D(filters, filtersize, strides=(1, 1), padding='valid', data_format="channels_last", activation='relu')) model.add(keras.layers.MaxPooling2D(pool_size=(2, 2))) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(units=2, input_dim=50,activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(images, label, epochs=epochs, batch_size=batchsize,validation_split=0.3) model.summary() In this model, I have only used a single convolution and Pooling layer and the trainable parameters are 219,801. Wonder how many would I have had if I had used an MLP in this case. You can reduce the number of parameters by further by adding more convolution and pooling layers. The more convolution layers we add the features extracted would be more specific and intricate. Projects Now, its time to take the plunge and actually play with some other real datasets. So are you ready to take on the challenge? Accelerate your deep learning journey with the following Practice Problems: End Notes I hope through this article I was able to provide you an intuition into convolutional neural networks. I did not go into the complex mathematics of CNN. In case you’re fond of understanding the same – stay tuned, there’s much more lined up for you. Try building your own CNN network to understand how it operates and makes predictions on images. Let me know your findings and approach using the comments section. Leave a Reply Your email address will not be published. Required fields are marked *
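Before the full listing, here is a small helper (illustrative only, not part of the original tutorial) that applies the output-size formula from the "Output dimensions" section, ([W-F+2P]/S)+1, so you can sanity-check layer shapes such as the 28*28*10 and 30*30*10 examples above. The full script follows.

def conv_output_size(w, f, p, s, n_filters):
    """Spatial size and depth of a conv layer output: ([W - F + 2P] / S) + 1."""
    spatial = (w - f + 2 * p) // s + 1
    return spatial, spatial, n_filters

# 32*32*3 input, ten 5*5*3 filters, stride 1, no padding -> (28, 28, 10)
print(conv_output_size(32, 5, 0, 1, 10))

# 32*32*3 input, ten 3*3*3 filters, stride 1, no padding -> (30, 30, 10)
print(conv_output_size(32, 3, 0, 1, 10))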
https://www.analyticsvidhya.com/blog/2017/06/architecture-of-convolutional-neural-networks-simplified-demystified/?utm_source=blog&utm_source=tensorflow-2-tutorial-deep-learning
CC-MAIN-2022-40
refinedweb
3,630
63.7
Python Programming, news on the Voidspace Python Projects and all things techie. mock 0.7.2 released There's a new minor release of mock, version 0.7.2 with two bugfixes in it. - (download) - (documentation) - (repo and issue tracker) mock is a Python library for simple mocking and patching (replacing objects with mocks during test runs). mock is designed for use with unittest, based on the "action -> assertion" pattern rather than "record -> replay". The full changelog for this release is: - BUGFIX: instances of list subclasses can now be used as mock specs - BUGFIX: MagicMock equality / inequality protocol methods changed to use the default equality / inequality. This is done through a side_effect on the mocks used for __eq__ / __ne__ The most important change is the second one, which fixes an oddity with the way equality comparisons with MagicMock work(ed). With the MagicMock class a lot of the useful python protocol methods (magic methods) are hooked up and preconfigured to either return a useful value or are themselves MagicMocks. __eq__ and __ne__ are allowed to return arbitrary objects and so were setup as mocks where you could configure the behaviour (through side_effect) or the return value (through return_value) yourself. Here's how it works in mock 0.7.1: >>> from mock import MagicMock >>> m = MagicMock() >>> m == 3 <mock.Mock object at 0x58c770> >>> m.__eq__.call_count 1 >>> m.__eq__.return_value = False >>> m == 3 False >>> m.__eq__.call_count 2 The issue with this, as you can see above, is that MagicMock() == anything returns a mock object, which by default has a boolean value of True. This has the following effect: >>> m = MagicMock() >>> if m == 3: ... print 'Uhm...' ... Uhm... Unfortunately this is how unittest.TestCase.assertEqual (and all sorts of other code) is implemented. This means that by default MagicMock would pass an assertEqual test against any object. This made it hard to write useful asserts with MagicMock. The change is that MagicMock now has __eq__ and __ne__ setup with side_effect functions that implement the default equality / inequality behaviour, based on identity. You can still customise the behaviour in the same way as before if you want. With mock 0.7.2: >>> from mock import MagicMock >>> m = MagicMock() >>> m == 3 False >>> m.__eq__.call_count 1 >>> m.__eq__.return_value = True >>> m == 3 True >>> m.__eq__.call_count 2 I've also been working on the next major release of mock, which will be 0.8. There'll be an alpha shortly, which will be by no means feature complete but will give you a chance to try out (and find bugs with / complain about) some of the major new features. Just to get your appetite whetted, here is the changelog (so far). It will require a blog entry to explain the features, and the documentation is not yet updated, but some of these are pretty cool: - patch and patch.object now create a MagicMock instead of a Mock by default - Implemented auto-speccing (recursive, lazy speccing of mocks with mocked signatures for functions/methods). Use the autospec argument to patch - Added the create_autospec function for manually creating 'auto-specced' mocks - The patchers (patch, patch.object and patch.dict), plus Mock and MagicMock, take arbitrary keyword arguments for configuration - New mock method configure_mock for setting attributes and return values / side effects on the mock and its attributes - -. 
Note that vars(Mock()) can still be used to get all instance attributes and dir(type(Mock()) will still return all the other attributes (irrespective of FILTER_DIR) - Added the Mock API (assert_called_with etc) to functions created by mocksignature - Private attributes _name, _methods, '_children', _wraps and _parent (etc) renamed to reduce likelihood of clash with user attributes. - Removal of deprecated patch_object Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2011-05-30 21:11:33 | | Categories: Python, Projects Tags: mock, release Using patch.dict to mock imports I had an email from a mock user asking if I could add a patch_import to mock that would patch __import__ in a namespace to replace the result of an import with a Mock. It's an interesting question, with a couple of caveats: - Don't patch __import__. If you must monkey around with imports use a PEP 302 loader. - Wanting to mock importing inexorably means you're doing dynamic imports, most probably local imports inside a function. This the first time). That aside there is a way to use mock to affect the results of an import, and it has nothing to do with patching 'fooble' module. >>> from mock import patch, Mock >>> import sys >>> mock = Mock() >>> with patch.dict('sys.modules', {'fooble': mock}): ... import fooble ... fooble.blob() ... <mock.Mock object at 0x519b50> >>> assert 'fooble' not in sys.modules >>> mock.blob.assert_called_once_with() As you can see the import fooble succeeds, but on exit there is no 'fo patcher start and stop methods, works around this by taking a reference to sys.modules inside the test rather than at import time. (Using patch.dict as a decorator takes a reference to sys.modules at import time, it doesn't do the patching until the test is executed though.) This is an intriguing bug in nosetests, so I may see if I can reproduce and diagnose it. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2011-05-29 20:50:36 | | Categories: Python, Hacking, Projects Tags: mock, import, patch Archives This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License. Counter...
http://www.voidspace.org.uk/python/weblog/arch_d7_2011_05_28.shtml?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+voidspace+%28The+Voidspace+Techie+Blog%29
CC-MAIN-2015-35
refinedweb
912
64.41
Type: Posts; User: rliq Can anyone see a way that I can calculate layerOutputs[layerIndex] using a LINQ expression? layerOutputs[layerIndex] = 0.0; for (int layerInput = 0; layerInput <... 1) Rectangle or Square? 2) Are all Rectangles/Squares the same size? 3) Are the circles always the same size? It's hard to read code that is not indented. Also, you've not actually told us what your problems is. People here will help you, but you need to be more specific. You can't just say "My... Also remember that if: index = 3 then index++ / 5 is three fifths and Not sure how bullet proof this is, but it works for your sample. Probably best if you also include a flag to say whether you are inside or outside a tag. Then you can report an error (if desirable)... Can you post the code here, showing how you read the text from the file. Then we will be able to see what internal structure you have used and better answer your question. In general, all drawing code should be placed in the forms Paint event handler. This is automatically called when part of your application needs to be drawn. I tend to use 'this' when referring to member functions that I did not write, i.e. those in a class that mine is derived from. Having said that, I'm always arguing with myself over this standard. ... You need to draw all of the lines every time in your Paint handler. I just wrote this code, it may help you make some progress. using System; using System.Collections.Generic; using... If you write each block of data to a different temp file each time, then the whole of its contents is the string that was written. Then append the whole of that text to the end of another file and... I think you need to post a sample of your code. Saying that "you tried it the same way as you did in the other form", is meaningless, when we don't know what you did in the other form. string autocode = string.Empty; I guess you may already have seen the link, showing an implementation of the "Thread Safe" Dictionary class. That is correct, I just read the Dictionary<> documentation on MSDN. Guess I was just lucky that I did not create two duplicate keys... Ultimately though, only testing will determine which way... I just wrote this as an experiment... using System; using System.Collections.Generic; using System.ComponentModel; using System.Threading; namespace CodeGuruTest yep.. should be.... string autocode = new string(); string autcode; foreach () { autcode = ... } // use autocode here Can you explain the process a little more... You have 10 threads 'updating'. Is this adding, removing, updating? Is 1 removing and 9 adding or 9 removing and 1 adding etc...? 1) How are you drawing the graph? 2) In which part of your code are you drawing the graph? Post some of your code here. You have not said what your issue is? What is the question that you want answering? 9 out of 10 times I believe the answer to this would be Dictionary<,>. (Memory access faster than Disk access) A database may be faster, if there are millions of items, the database is on a... Your server program could write each incoming block of text to a new file. Eg: 1.txt, 2.txt, 3.txt (or even get the system to generate a temp file name for you, so you don't have to keep a counter).... Nice one! I'm not sure that you can add null items to a ComboBox...? You can use the: // Create a Graphics object for the Control. Graphics g = pictureBox.CreateGraphics(); to get the Graphics object for the PictureBox control. Then use the members of the...
http://forums.codeguru.com/search.php?s=213b3a41d31939494d322e5cf035d960&searchid=9125611
CC-MAIN-2016-30
refinedweb
635
77.13
RDebug is a useful utility for checking a log of specific messages generated by the running code. This feature is helpful when the breakpoint/step debug tricks' using space has been limited. To use RDebug, include the header file, e32debug.h. (In 2nd Edition SDK, RDebug class declared in E32SVR.H) #include <e32debug.h> Then, add the following to any place in your code: // code before log RDebug::Print(_L("### Log %d %08x"), 5, 0xABCDEF12); // code after log The output debug message is now viewable with a the DebugView tool. This trick is especially useful for tasks with special requirements. The RDebug::Print parameter format is easy because it follows the C printf format. The one that most have trouble with is %S to print a descriptor. It expects a pointer to a descriptor, so you must use the & operator if you are printing a TBuf, for example. // Print a HBufC RDebug::Print( _L("Test string: %S"), hbuf ); // Print a TBuf RDebug::Print( _L("Test string: %S"), &tbuf ); // useful macro, LINE , evaluates to the current line number in RDebug::Print(_L("Debug on line %d"), __LINE__); The output of RDebug in emulator is written to: For S60 3rd Edition, there are two options in the \epoc32\data\epoc.ini file to enable or disable RDebug output, that is: The following code shows the content of epoc.ini with LogToFile and LogToDebugger enabled: LogToFile 1 LogToDebugger 1 Note: If changing the epoc.ini is something that you feel comfortable with, then try it. You can also control the same settings through the emulator's window menu. Select Tools -> Preferences. The logging options can be found in the General tab. Viewing RDebug output in the Carbide.c++ IDE can be done by enabling "View process output". To enable it, right click project name and select Debug as | Debug.... Click the Debugger tab and enable "View process output" (see picture below). After the "View process output" option has been enabled, debug the project. To display the debug messages, click Open Console and select the Debug Messages menu. The picture below shows the output of RDebug in the Carbide.c++ IDE. Carbide.c++ is an Eclipse based IDE. The advantage of this is that many existing Eclipse plug-ins can be easily integrated with Carbide.c++. One such example is the Eclipse Logfile Viewer plug-in, a tool that can dynamically load a log file, parse it according to user defined rules, and then display it in real time with customised formatting. Plug-in configuration example: Viewing the RDebug messages: Here you can see how the plug-in can be used to highlight the log messages sent by your application, recognised as starting with the keyword RDebug. RDebug::Print( _L("RDebug Hello") ); It is also possible to view debug output without any IDE or debugger attached. This can speed up launch times for the emulator (for example, if you are using Just-in-time debugging as described in How to debug with emulator on the fly). For this, you can use several tools that show Windows debug strings, such as DebugView from Microsoft. DebugView also has additional features such as highlighting or excluding strings with a particular pattern. One thing to keep in mind is that the debug output can occasionally come from other processes in your system, not just the Symbian emulator, so with a tool like DebugView you can capture other strings not related to the application you are debugging. Again, filtering can be very helpful here. epocwind.out is a normal text file which is appended by the emulator so you can open it with any text editor. 
To be able to see the log prints as they come, you can use the tail freeware program. It is a port of the unix tail program that prints the "tail" of a file. Create a bat file as follows: tail -f %temp%\epocwind.out This opens a dos-prompt to show the log prints as they come. It can be closed by pressing Ctrl-C.
http://wiki.forum.nokia.com/index.php/How_to_use_RDebug
crawl-002
refinedweb
673
63.8
Ticket #8133 (closed Bugs: fixed) multiprecision, failed gcd tests (test_cpp_int.cpp) Description I modified test_cpp_int.cpp to generate random numbers with a lot of ones or zeros. I modified T generate_random(unsigned bits_wanted): ... T val = 0; for(unsigned i = 0; i < terms_needed; ++i) { val *= (gen.max)(); switch (gen() % 5) { case 0: val += gen(); break; case 1: val += 1; break; case 2: val += (gen.max)() - 2; break; case 3: val += (gen.max)() - 1; break; } } val %= max_val; return (val == 0)? val : 1; } This caused some tests related to gcd computation to fail. Here is output (I removed some parts of it with results of operations) - Attachments Change History comment:2 Changed 4 years ago by Stepan Podoskin <stepik-777@…> Yes, you are right, it should be (val == 0) ? 1 : val; This line is there because tests will fail with division by zero otherwise. I'm testing it on 32-bit Windows with GCC 4.7.2 and MSVC 2010. I'm using boost 1.53.0, not the latest trunk. Following program outputs 1 1 1 3 when compiled with GCC and 3 3 1 3 and when complied with MSVC. #include <iostream> #include <boost/multiprecision/cpp_int.hpp> using boost::multiprecision::cpp_int; using boost::multiprecision::gcd; int main() { cpp_int a("0xffffffee00000095fffffd0000000a8fffffe4e10000348bffffb1a100005ae3ffffade700003955ffffe19900000bd2fffffcdb00000076fffffffffffffffd"); // correct gcd is 1 std::cout << gcd(a, 4294967295) << '\n'; std::cout << gcd(4294967295, a) << '\n'; std::cout << gcd(a, cpp_int("4294967295")) << '\n'; std::cout << gcd(cpp_int("4294967295"), a) << '\n'; return 0; } comment:3 Changed 4 years ago by johnmaddock Thanks, reproduced. It's a bug in the subtraction routine for subtracting an unsigned int - wrongly uses a carry when subtracting ~static_cast<unsigned>(0). Testing a fix now. comment:4 Changed 4 years ago by johnmaddock - Status changed from new to closed - Resolution set to fixed Thanks for the report - However I'm having trouble reproducing here, note that your random generator code above either returns 0 or 1, I assume the last line should read: ? However, even then neither random numbers generated as above, nor the specific test values printed out in your log trigger errors for me. What compiler/platform is this? I'm testing on Win32 VC10 here, but I can switch to 64-bit Linux if that's the thing I'm missing. BTW the report you outputted, contains line numbers that don't match up to anything in test_cpp_int.cpp - I guess because you modified that file?
https://svn.boost.org/trac/boost/ticket/8133
CC-MAIN-2016-44
refinedweb
401
66.13
browser shows page that I don't have. I am just flummoxed. I started the web app with just a single view in my views.py, with one line to see if everything was working, namely: from django.http import HttpResponse def home(request): return HttpResponse('Hello, World!') So a few days later I return, delete that code and paste a different one: def home(request): return render(request, 'home.html', name='home') and now that home.html page is located in a directory called templates, and has just an HTML table, and yes, I have modified settings.py to indicate where that templates directory is: 'DIRS': [ os.path.join(BASE_DIR, 'templates') ], Well, when I go to the webpage, I am still seeing the hello world, which is code that does not exist anymore. Of course, it has nothing to do with cookies or browser cache, because apart from having deleted all browsing data, I tried different computers and different browsers; I even fired up Tor, which I had never used before. I am still seeing the hello world. This is incredible.
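For reference, a minimal wiring that matches the setup described in this question might look like the following sketch (illustrative only; the project layout is an assumption, and note that Django's render() takes the request, a template name, and an optional context dict, so it accepts no name= keyword, which belongs to the URL pattern instead):

# views.py -- minimal view for the setup described above (the template name and
# the TEMPLATES 'DIRS' entry are taken from the question itself)
from django.shortcuts import render

def home(request):
    # render(request, template_name, context=None, ...) -- there is no name= argument
    return render(request, 'home.html')

# urls.py (sketch) -- the route name goes here instead:
# from django.urls import path
# from . import views
# urlpatterns = [path('', views.home, name='home')]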
https://www.digitalocean.com/community/questions/browser-shows-page-that-i-dont-have
CC-MAIN-2020-40
refinedweb
216
66.23
I've been teaching myself some OpenGL using SFML for creating windows/handling inputs, etc. My main.cpp started getting a bit unwieldy so I decided to start splitting my code up. I created a 4X_vertex.h and a 4X_vertex.cpp (4X is the name of the project) and moved the relevant functions and structs out of my main and into these files. However, when I compile, I get the error vari I am new to lisp and am writing a few simple programs to get more familiar with it. One of the things I am doing is writing a recursive and iterative version of a factorial method. However, I have come across a problem and can't seem to solve it. I saw a similar error at a solution wa Right now, my project has two classes and a main. Since the two classes inherit from each other, they are both using forward declarations. In the first object, right underneath the #include statement, I initialize two enums, before the class definition. I can use both enums just fine inside that class. However, if I try to use those enums in the other class, which inherits from the first one, I #include <stdio.h>const int str[1000] = {0};int main(void){ printf("arr is %d", str[0]); return 0;} Has the following output: [-exercises/adam/stack2]:size a.out text data bss dec hex filename 5133 272 24 5429 1535 a.out Whereas: I was asked a question in an interview, weather we can access a publically declared variable which is been declared in aspx.cs or ascx.cs page in aspx or ascx page respectively. I have seen this type of error everywhere and, although I have looked at the answers, none seem to help. I get the following error with the following piece of code: error: 'A' has not been declared B.h: #include "A.h"class B{ public: static bool doX(A *a);}; A.h: include "B. Hi everyone — what's wrong with this code? package mainimport "fmt"// fibonacci is a function that returns// a function that returns an int.func fibonacci() func() int { prev := 0 curr := 1 return func() int { temp := curr curr := curr + prev prev := temp return curr I am compiling the C++ code using Android native library NDK but I am getting the following errors while trying to include the g729a.h file in g729_jni.cpp: ERRORS: Compile++ arm : g729_jni <= g729_jni.cpp/usr/src/android-ndk-r8/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-g++ -MMD -MP -MF ./obj/local/armeabi/objs/g72 I have written a program in which i use C++ stl set. There is a struct event from which the set is being constructed, and its corresponding binary predicate.. struct comp to define the ordering between them in the set. set struct event binary predicate struct comp The code portion looks as follows: struct event{int s;int f;int w;set I am creating a very simple calculator. To save the first user inputed float, i save it as a string in the addition action as to save it when they click on the addition button. Then later I call upon it again to add it to the second user inputed float. However when i call upon it again it gives the error: Use of undefined identifier num1. The same thing ha float num1
http://bighow.org/tags/declared/1
CC-MAIN-2017-39
refinedweb
572
65.83
Details - Type: New Feature - Status: Resolved - Priority: Major - Resolution: Later - Affects Version/s: None - Fix Version/s: None - Component/s: datanode, hdfs-client, libhdfs, namenode - Labels:None Description We would like to be able to create certain files on certain storage device classes (e.g. spinning media, solid state devices, RAM disk, non-volatile memory). HDFS-2832 enables heterogeneous storage at the DataNode, so the NameNode can gain awareness of what different storage options are available in the pool and where they are located, but no API is provided for clients or block placement plugins to perform device aware block placement. We would like to propose a set of extensions that also have broad applicability to use cases where storage device affinity is important: - Add an enum of generic storage device classes, borrowing from current taxonomy of the storage industry - Augment DataNode volume metadata in storage reports with this enum - Extend the namespace so pluggable block policies can be specified on a directory and storage device class can be tracked in the Inode. Perhaps this could be a larger discussion on adding support for extended attributes in the HDFS namespace. The Inode should track both the storage device class hint and the current actual storage device class. FileStatus should expose this information (or xattrs in general) to clients. - Extend the pluggable block policy framework so policies can also consider, and specify, affinity for a particular storage device class - Extend the file creation API to accept a storage device class affinity hint. Such a hint can be supplied directly as a parameter, or, if we are considering extended attribute support, then instead as one of a set of xattrs. The hint would be stored in the namespace and also used by the client to indicate to the NameNode/block placement policy/DataNode constraints on block placement. Furthermore, if xattrs or device storage class affinity hints are associated with directories, then the NameNode should provide the storage device affinity hint to the client in the create API response, so the client can provide the appropriate hint to DataNodes when writing new blocks. - The list of candidate DataNodes for new blocks supplied by the NameNode to clients should be weighted/sorted by availability of the desired storage device class. - Block replication should consider storage device affinity hints. If a client move()s a file from a location under a path with affinity hint X to under a path with affinity hint Y, then all blocks currently residing on media X should be eventually replicated onto media Y with the then excess replicas on media X deleted. - Introduce the concept of degraded path: a path can be degraded if a block placement policy is forced to abandon a constraint in order to persist the block, when there may not be available space on the desired device class, or to maintain the minimum necessary replication factor. This concept is distinct from the corrupt path, where one or more blocks are missing. Paths in degraded state should be periodically reevaluated for re-replication. - The FSShell should be extended with commands for changing the storage device class hint for a directory or file. - Clients like DistCP which compare metadata should be extended to be aware of the storage device class hint. For DistCP specifically, there should be an option to ignore the storage device class hints, enabled by default. 
Suggested semantics: - The default storage device class should be the null class, or simply the “default class”, for all cases where a hint is not available. This should be configurable. hdfs-defaults.xml could provide the default as spinning media. - A storage device class hint should be provided (and is necessary) only when the default is not sufficient. - For backwards compatibility, any FSImage or edit log entry lacking a storage device class hint is interpreted as having affinity for the null class. - All blocks for a given file share the same storage device class. If the replication factor for this file is increased the replicas should all be placed on the same storage device class. - If one or more blocks for a given file cannot be placed on the required device class, then the file is marked as degraded. Files in degraded state should be periodically reevaluated for re-replication. - A directory and path can only have one storage device affinity hint. If the file inode specifies a hint, this is used, otherwise we walk up the path until a hint is found and use that one, otherwise the default storage class is used. Issue Links - is related to - - - HBASE-6572 Tiered HFile storage - Open - relates to HBASE-6572 Tiered HFile storage - Open Activity - All - Work Log - History - Activity - Transitions We should be careful to add as little complexity as possible while enabling the core feature here... Adding extended attributes is a well discussed idea, probably the right way to go, but it adds RAM pressure on the NN and needs to be thought out carefully. I believe there is already a JIRA on that? One way to reduce complexity and RAM pressure would be to only support placement hints on directories and have them apply only to files in that immediate directory. That should limit meta-data cost and address HBase and other use cases. That said, tying namespace data to the blocks, where replication policy is applied is a little complicated and deserves discussion. Something sanjay, suresh and I have been discussing. Maybe they can jump in with their thoughts. It would be particularly interesting if people could use both flash and hard disks in the same cluster. Perhaps the flash could be used for HBase-backed storage, and the hard disks for everything else, for example. That is certainly a use case we are looking at. More specifically, migration of the blocks for a given column accessed in a read-mostly random access manner to the most suitable available storage device class for that type of workload. That would be the first aim of HBASE-6572. I feel like we might also want to enable automatic migration between tiers, at least for some files. I suppose this could also be done outside HDFS, with a daemon that looks at file access times (atimes) and attaches the correct xattrs. Our thinking is block placement and replication policy plug points could be extended or introduced so it’s not necessary to deploy and manage an additional set of daemons, but that is only one possible implementation option. One way to reduce complexity and RAM pressure would be to only support placement hints on directories and have them apply only to files in that immediate directory. That should limit meta-data cost and address HBase and other use cases. On minimizing RAM pressure then the thing to do here might be to allow for hints on a directory to apply to all descendants. Otherwise if we have N directories under one parent then we would need N hints instead of 1. 
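To make the directory-level inheritance rule above concrete, a rough sketch of the lookup (plain Python pseudocode, not from any patch, with hypothetical data structures) could read:

# Resolve a storage-class hint: check the file itself, then walk up the
# ancestor directories, nearest first, and fall back to the default class.
DEFAULT_CLASS = "default"

def resolve_storage_class(path, hints):
    """hints maps an absolute file or directory path to a storage class."""
    parts = path.rstrip("/").split("/")
    for i in range(len(parts), 0, -1):
        candidate = "/".join(parts[:i]) or "/"
        if candidate in hints:
            return hints[candidate]
    return DEFAULT_CLASS

hints = {"/hbase/storefiles": "ssd", "/hbase/wal": "disk"}
print(resolve_storage_class("/hbase/storefiles/region1/file0", hints))  # -> ssd
print(resolve_storage_class("/user/alice/data.txt", hints))             # -> default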
If the proposal for storage device class hints will be generalized/incorporated into an extended attributes facility, then this may be an interesting discussion. In the case of least Linux, Windows NT+, and *BSD, xattrs are arbitrary name/value pairs associated only with a single file or directory object, and a query on a given file or directory returns only the xattrs found in its inode (or equivalent). However since namespace storage in HDFS is at a premium, it may make sense to introduce a bit that signals the xattr should be inherited by all descendants. We can add extended attributes in a way that imposes zero overhead for users who don't make use of them, by creating another subclass (or subclasses) of INode. Inherited xattrs (that apply to all descendants) is also a reasonable idea. The complexity of lookup is why I'd suggested only the immediately containing directory. You don't want to have to walk the tree to see what policy applies. Checking exactly one directory would be a lot simpler. In terms of storage, you would see a big win by only requiring one attribute per dir vs per file & again you would have only a single place to look, so less code. In terms of storage, you would see a big win by only requiring one attribute per dir vs per file & again you would have only a single place to look, so less code. Can you clarify if you mean only ONE attribute per directory or file, or if you mean that one (or more) attributes apply only to the one directory or file they are associated with? xattrs can also serve as a building block in the future towards implementing ACLs, something I had researched a few months ago. Existing file systems have used numerous strategies for implementing xattrs, with different space/time trade-offs. I've found this document to be useful: The document primarily focuses on ACLs, but there is a section titled Extended Attributes, which goes into some detail about implementation on various file systems. Some of the points I find most interesting are: - It's common to use blocks to store xattrs, but in a distributed file system, I expect we wouldn't want to incur the extra latency of a block read to retrieve xattrs. - ext3 employs a flyweight-style pattern, so that multiple inodes with identical sets of xattrs can share the same copy, even if they are not in a parent-child relationship. In practice, this is likely to save a lot of space, because the number of distinct sets of xattrs is likely to be much lower than the number of inodes. - XFS initially stores xattrs on the inode, which has a statically allocated size. Once the number of xattrs grows too large to fit on the inode, XFS then promotes xattrs through a series of more and more sophisticated external data structures to handle the growth. It can effectively support very large numbers of xattrs on a single inode. Project management question: do we think it makes sense to spin out a separate xattrs jira and move discussion there? It would serve as a pre-requisite for storage policies and also ACLs. I think the xattrs feature itself is sufficiently complex to warrant its own round of design and implementation. (In fact, this jira has already spent more time discussing xattrs design than storage policies.) I would like to see a clear real use case to be identified first, in addition to generic design goals. HBASE-6572 can be it, but it needs more details, IMO. Without it a lot of technical discussions will see no end. It lets us think about different features at a concrete context. 
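The ext3-style sharing mentioned above is easy to picture: if many inodes carry identical attribute sets, interning one canonical copy bounds memory by the number of distinct sets rather than the number of inodes. A tiny illustrative sketch (plain Python, not HDFS code):

# Flyweight-style sharing of identical xattr sets.
class XattrTable:
    def __init__(self):
        self._interned = {}  # frozenset of (name, value) pairs -> shared tuple

    def intern(self, xattrs):
        """Return a canonical shared object for this attribute set."""
        key = frozenset(xattrs.items())
        return self._interned.setdefault(key, tuple(sorted(xattrs.items())))

table = XattrTable()
a = table.intern({"storage.class": "ssd"})
b = table.intern({"storage.class": "ssd"})
print(a is b)  # True: both inodes would reference the same shared copy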
Without a concrete use case to run through, some design decisions are very hard to make. I would like to see a clear real use case to be identified first, in addition to generic design goals. HBASE-6572 can be it, but it needs more details, IMO Sure, if there is consensus on the general direction (introduce xattrs, use xattrs for storing device class hints, give block placement plugins xattr / device hint awareness) then we can put together a strawman implementation of HBASE-6572 that follows that consensus, with patches provided here and/or on HDFS-2006 later, for the bits that reach down into HDFS, for further consideration and discussion at that time. I think the general direction here is good, modulo details on how xattrs will work exactly, such that the new capabilities will be applicable to many use cases beyond HBASE-6572. The granularity of where the policies can be placed is an important consideration. Having it at a directory level can make it much easier to manage, even though a per-file level appears to give more flexibility. HDFS Snapshots, quotas are at a dir level. BTW Many file systems allow certain policies at a volume level. If HDFS supported multiple volumes within a NN then I would have put quotas and snapshots at a volume level, but still propose that storage policies should be at a directory level because one may want slightly finer granularity than a volume (I want HBASE journal and HBase store files to have different storage policies even though they reside in the same volume). The granularity of where the policies can be placed is an important consideration. Having it at a directory level can make it much easier to manage, even though a per-file level appears to give more flexibility. Sure. Do you have an opinion on if directory level policies should be inherited from parents (or not)? I want HBASE journal and HBase store files to have different storage policies even though they reside in the same volume. Makes sense. I would like to see intermediate persisted output from chained MR jobs have different storage policies than the final output. I think the Extended Attributes is orthogonal to this issue. Extended file attributes is a file system feature that enables users to associate computer files with metadata not interpreted by the filesystem This implies that xattr should not be interpreted just stored by the system, while in this case you create a whole framework of block placement inside the file system. What is the difference between this issue and HDFS-2832? Are you adding api to specify file placement, is it an extension to HDFS-2832? Sorry for hitting save button too early Given poorly toned comments on HDFS-2832and on public forums like twitter, I am asking why this is not a dupe of HDFS-2832? Suresh, With this issue I had hoped to engage on discussions about tiered storage concerns, specifically, and volunteered some of our thoughts for consideration. Today the scope of HDFS-2832 was widened to duplicate this issue. Since the issues are linked, that was not necessary. This is not an event that has happened in a vacuum. Several individuals were disappointed to see this, and were discussing it in a non-ASF forum. Your response here to those conversations, a suggestion to shut down at least this aspect of our attempt to engage the Hadoop community, highlights why those conversations took place. Today the scope of HDFS-2832was widened to duplicate this issue. Since the issues are linked, that was not necessary. I disagree. 
Here is the brief comment I had posted on that jira - - Support for heterogeneous storages: - DN could support along with disks, other types of storage such as flash etc. - Suitable storage can be chosen based on client preference such as need for random reads etc. - Block report scaling: instead of a single monolithic block report, a smaller block report per storage becomes possible. This is important with the growth in disk capacity and number of disks per datanode. - Better granularity of storage failure handling: - DN could just indicate loss of storage and namenode can handle it better since it knows the list of blocks belonging to a storage. - DN could locally handle storage failures or provide decommissioning of a storage by marking a storage as ReadOnly. - Hot pluggability of disks/storages: adding and deleting a storage to a node is simplified. - Other flexibility: includes future enhancements to balance storages with in a datanode, balancing the load (number of transceivers) per storage etc and better block placement strategies. It has brief mentions of the following, that is duplicated in this jira: - Client preference for writing to storages - automatically means that block placement must consider storage type etc. - Support for different storage types in datanode and block reports based on that. - Awareness of those storage types at the namenode (not for just block placement with various other benefits) - Affinity of replicas to a storage type. Certainly you have elaborated along these points and more implementation details. Does not mean it is a different jira. Thanks for thinking about this, Andrew. This will be a nice feature to have in the future. It would be particularly interesting if people could use both flash and hard disks in the same cluster. Perhaps the flash could be used for HBase-backed storage, and the hard disks for everything else, for example. The xattr idea sounds like the right way to go for when you know what tier you want to put something in. I feel like we might also want to enable automatic migration between tiers, at least for some files. I suppose this could also be done outside HDFS, with a daemon that looks at file access times (atimes) and attaches the correct xattrs. However, traditional hierarchical storage management (HSM) systems integrate this into the filesystem itself, so we may want to consider this. This would also allow us to consider other features like compressing infrequently-used data.
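As a back-of-the-envelope illustration of the external-daemon idea in that last comment (scan access times, tag cold files, and let replication move the blocks), the control loop could be as simple as the sketch below; the helper names are hypothetical and this is not HDFS API code:

import time

COLD_AFTER = 30 * 24 * 3600  # demote files untouched for roughly 30 days

def migrate_cold_files(files, set_hint):
    """files: iterable of (path, atime) pairs; set_hint: callback that tags a path."""
    now = time.time()
    for path, atime in files:
        if now - atime > COLD_AFTER:
            # A real daemon would set an xattr / storage-class hint here and
            # let the block placement and replication machinery do the move.
            set_hint(path, "archive")

hints = {}
sample = [("/logs/2012/part-0001", time.time() - 90 * 24 * 3600),
          ("/logs/today/part-0001", time.time())]
migrate_cold_files(sample, lambda p, c: hints.setdefault(p, c))
print(hints)  # {'/logs/2012/part-0001': 'archive'}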
https://issues.apache.org/jira/browse/HDFS-4672
CC-MAIN-2015-32
refinedweb
2,794
58.21
This is your resource to discuss support topics with your peers, and learn from each other. 02-19-2012 11:34 AM Hi All, I managed to get the symbian qt-components working on the playbook, they are symbian style but look good on the pb in my opinion. To use them just add the standard import to your qml file as you would for symbian: import com.nokia.symbian 1.1 Then you need to add the qt-components imports folder (attached) to the qml folder in your project directory, I have attached the imports folder with the components and plugins that I prebuilt for the playbook proc. Assuming that you add the qml folder to the bar package in the pro file like this: -e qml qml \ You just need to put the imports folder into the qml folder and you will be good to go. That's it, enjoy the components on the PlayBook!! Cheers, Jon 02-19-2012 11:15 PM Hi jheron, Were you able to get the qt library built on Windows? I have the NDK + simulator working, just need to figure out how to get the qt library built so I can start using it... 02-19-2012 11:28 PM Hi i68040, To be able to use QT in QNX Momentics on windows, I compiled it on linux (I used a virtual machine) and I copied the STAGE folder to windows. After that, I changed the settings of a project to use Qt libs. Hope it helps. 02-20-2012 10:03 AM Sorry 168040, I dropped windows over a year ago. If you don't want to install linux you can just run it from a cd (), then compile the libs from there. The components I attached above are already compiled and ready to go on the pb but likely wont work in the sim as they are built for the armle-v7. I remember seeing a configure.exe in the BB Qt port. Did you try to configure them from windows with that? I am not sure if your familiar with building Qt libs or not but you need to configure them with the configure utility before you 'make' them. Perhaps this link will help? Good luck! Jon 02-20-2012 02:00 PM Here is a couple examples of the beauty ui's you can build using Qt's qml with the symbian components. These are screen shots I took right off the playbook today... 02-20-2012 03:09 PM - edited 02-20-2012 08:46 PM aaahhh...it's nice! I'll look into that. Edit: I tried to use the symbian components on the playbook but I get this error : "com.nokia.symbian" is not installed The imports folder is present in the qml folder like you said. I'm not sure how to fix that. Any idea? 02-20-2012 08:55 PM I use the Qt SDK to design and code the app, the nokia simulator works well for quick testing I find too... Qt creator will complain about the components not being installed if you have not installed the Symbian components from the updater but they should work fine once deployed to the device, what's giving you the error, the playbook or the sim? If your using creator I would recommend installing the components from the update tool, then you can use the designer without any complaints. The above ui's were done by my bro, Keats Pascoe, in about 1 month he hammered out 3 quality apps with Qt Quick, he is a savant! lol Hang on and I will upload my initial components test code... Cheers, Jon 02-20-2012 09:21 PM - edited 02-20-2012 09:22 PM Here is the test code I ran to get them working, a signed bar file is included. Its just a component 'Button' but obviously you can test any of the symbian components you want... Cheers, Jon 02-20-2012 09:35 PM - edited 02-20-2012 09:38 PM Jon - thank for adobting this components. it's really looks great! 02-20-2012 10:22 PM @Azazello, thanks ! I didn't thought of this env. variable.
http://supportforums.blackberry.com/t5/Native-Development/QML-symbian-qt-components-for-PlayBook/m-p/1575537
CC-MAIN-2014-23
refinedweb
706
79.09
In this tutorial, you will learn how to create a server-side Blazor application that interacts with an external web API using HttpClientFactory. Later in the series, you will add IdentityServer4 authentication to protect the API and authorize the client web app. In Part 1, you will create a public Web API, and you will learn the right way to interact with it from a server-side Blazor app. In the next tutorial, you will protect the API using IdentityServer4 and learn how to authorize your Blazor app using an access token. Though this tutorial is written for server-side Blazor applications, the techniques can also be used in other ASP.NET web apps, including both MVC and Razor Pages projects. Create the Shared Models Project Start by creating a shared library that will contain the models to be used in the solution. The models contained in the shared library will be referenced by both the API and the Blazor web application frontend, so this is a good place to start! Launch Visual Studio and create a New Project. Select Class Library (.NET Standard). A .NET Standard class library can be added as a reference to all .NET Core web applications (MVC, Razor Pages, and Blazor), and it can also be included as a reference in a Xamarin project if you ever decide to create a mobile frontend. In this series, you will create a contact management application. For Solution Name, type BlazorContacts. Enter BlazorContacts.Shared for the Project Name, and click Create to scaffold the shared library from template. In the Solution Explorer pane, right-click the BlazorContacts.Shared project and select Add > New Folder. Name the folder Models. This directory will contain all the shared models you will need in your application For this application, you will need a model that contains information about individual contacts. Right-click the Models folder, and select Add > Class. Name the class Contact.cs. This will generate a file that should look like the following: using System; using System.Collections.Generic; using System.Text; namespace BlazorContacts.Shared.Models { class Contact { } } The Contact model should be public, so you can access it from outside the class. It should also include a unique identifier for the contact, and, for this example, the contact’s name and phone number. You may choose to split the Name property into two properties, one for FirstName and one for LastName, to provide for more robust sorting and filtering in your end product. You could also add more fields, such as Address, EmailAddress, and Location. namespace BlazorContacts.Shared.Models { public class Contact { public long Id { get; set; } public string Name { get; set; } public string PhoneNumber { get; set; } } } There is a useful Nuget package for annotating data models that I recommend. Using it will make data validation much easier on both the backend database as well as the frontend UI. It can be used to define required fields, length constraints, valid characters, etc. Add the package to your project. Install-Package System.ComponentModel.Annotations Then include the package in your class library. By also including a JSON serialization library, you can ensure the public properties in your model are linked to the correct JSON property produced by the API. using System.ComponentModel.DataAnnotations; using System.Text.Json.Serialization; I am using the new System.Text.Json namespace instead of NewtonSoft. 
With this package, your annotated Models class might look like the following: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Text.Json.Serialization; namespace BlazorContacts.Shared.Models { public class Contact { [Key] [JsonPropertyName("id")] public long Id { get; set; } [Required] [JsonPropertyName("name")] public string Name { get; set; } [Required] [DisplayName("Phone Number")] [JsonPropertyName("phonenumber")] public string PhoneNumber { get; set; } } } The [Key] attribute can be used when attaching the model to a database. It is not actually necessary in this case, because a property with name Id will automatically be used as the key in a database entry. The [Required] attribute denotes a required field, and DisplayName allows you to denote a human readable name for a field. The DisplayName will be shown in the case of error messages, for example. Create the Web API Now, it is time to create the Web API which will be used to add, delete, and fetch contacts. With the BlazorContacts solution open, add a New Project , and select ASP.NET Core Web Application. Name the project BlazorContacts.API , and click Create. On the next page, select the API project template. Leave the Authentication setting as No Authentication. Later, you will configure IdentityServer4 to grant API access to your Blazor frontend. Click Create , and wait for the API project template to scaffold. In the Solution Explorer pane of your newly created API project, right click the BlazorContacts.API project and select Add > Reference. In the Reference Manager, add BlazorContacts.Shared as a reference for your API project. This will allow you to reference the Contact model you just created. Next, right-click the Controllers directory of the API project and select Add > Controller. In the Add New Scaffolded Item dialog, choose API Controller – Empty and click Add. In the next dialog, name the controller ContactsController. This will scaffold a blank API controller class called ContactsController.cs that looks like the following: using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc; namespace BlazorSecure.API.Controllers { [Route("api/[controller]")] [ApiController] public class ContactsController : ControllerBase { } } Add a using directive to include the BlazorContacts.Shared.Models namespace. using BlazorContacts.Shared.Models; For demonstration purposes, this API will simply store the contacts in memory. In production applications, you could use a database. Declare a collection of contacts, and add a method to populate the collection. private static readonly List<Contact> contacts = GenerateContacts(5); private static List<Contact> GenerateContacts(int number) { return Enumerable.Range(1, number).Select(index => new Contact { Id = index, Name = $"First{index} Last{index}", PhoneNumber = $"+1 555 987{index}", }).ToList(); } The GenerateContacts() method will generate and return a list of a given number of unique contacts. You could also generate contacts one by one and add them to the contacts variable. Again, this is all just sample data stored in memory, so do whatever you’d like. contacts.Add(new Contact { Id = 1, Name="First1 Last1", PhoneNumber="+1 555 123 9871" }); Next, add public methods for interacting with the API. Below are sample methods. 
// GET: api/contacts [HttpGet] public ActionResult<List<Contact>> GetAllContacts() { return contacts; } // GET: api/contacts/5 [HttpGet("{id}")] [ProducesResponseType(StatusCodes.Status404NotFound)] public ActionResult<Contact> GetContactById(int id) { var contact = contacts.FirstOrDefault((p) => p.Id == id); if (contact == null) return NotFound(); return contact; } // POST: api/contacts [HttpPost] public void AddContact([FromBody] Contact contact) { contacts.Add(contact); } // PUT: api/contacts/5 [HttpPut("{id}")] public void EditContact(int id, [FromBody] Contact contact) { int index = contacts.FindIndex((p) => p.Id == id); if(index != -1) contacts[index] = contact; } // DELETE: api/contacts/5 [HttpDelete("{id}")] public void Delete(int id) { int index = contacts.FindIndex((p) => p.Id == id); if (index != -1) contacts.RemoveAt(index); } There are two GET methods, one for fetching all contacts and one for getting individual contacts by ID. Next, there is a POST method for adding a new contact. The PUT method can be used to update an existing contact, and the DELETE method will delete a contact by ID. One final note regarding the BlazorContacts.API project: I am using the following launch settings (Properties > launchSettings.json) to start the project on. This address will be used when configuring the Identity Server authentication and when setting the base api address for the web frontend. { "iisSettings": { "windowsAuthentication": false, "anonymousAuthentication": true, "iisExpress": { "applicationUrl": "", "sslPort": 0 } }, "$schema": "", "profiles": { "IIS Express": { "commandName": "IISExpress", "launchUrl": "api/contacts", "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" } }, "BlazorContacts.API": { "commandName": "Project", "launchUrl": "api/contacts", "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" }, "applicationUrl": "" } } } Create the Blazor Web App Now it is time to create the web interface to interact with the API. Create a New Project within the BlazorContacts solution. Select Blazor App and name the project BlazorContacts.Web. On the next page, select Blazor Server App. First, from the new BlazorContacts.Web project, Add > Reference that points to the BlazorContacts.Shared project, just like you did when creating the API. Then add a new folder called Services to the BlazorContacts.Web project. Right-click the new folder and Add > New Class called ApiService.cs.. using BlazorContacts.Shared.Models; using System.Collections.Generic; using System.Net.Http; using System.Text.Json; using System.Threading.Tasks; namespace BlazorContacts.Web.Services { public class ApiService { public HttpClient _httpClient; public ApiService(HttpClient client) { _httpClient = client; } public async Task<List<Contact>> GetContactsAsync() { var response = await _httpClient.GetAsync("api/contacts"); response.EnsureSuccessStatusCode(); using var responseContent = await response.Content.ReadAsStreamAsync(); return await JsonSerializer.DeserializeAsync<List<Contact>>(responseContent); } public async Task<Contact> GetContactByIdAsync(int id) { var response = await _httpClient.GetAsync($"api/contacts/{id}"); response.EnsureSuccessStatusCode(); using var responseContent = await response.Content.ReadAsStreamAsync(); return await JsonSerializer.DeserializeAsync<Contact>(responseContent); } } } In this example, I have written two methods for the ApiService class. One will return a List<Contact> and the other will return an individual Contact by its unique ID property. 
You could also just use a single method that returns a string and perform the deserialization in your page controller, instead. return await response.Content.ReadAsStringAsync(); Now that the service is created, you must register the HttpClientFactory interface. Still in BlazorContacts.Web , open Startup.cs and locate the public void ConfigureServices(IServiceCollection services) method and register the ApiService typed client. services.AddHttpClient<Services.ApiService>(client => { client.BaseAddress = new Uri(""); }); Go ahead and assign BaseAddress to the address where your web API is located, as shown above. The path argument in the the GetApiAsync() method of the ApiService class will append to this base address. Passing “api/contacts” to the method, for example, will fetch the result from which is configured in the API’s ContactsController.cs file to return a list of all contacts. Create the Blazor Page All that is left is to create Blazor pages that will interact with the API, through the ApiService. Create a new Razor component in the Pages folder called Contacts.Razor. To make this page appear at the /contacts route, add a page directive to the top of the page. @page "/contacts" Next, inject the HttpClientFactory interface you previously registered, and import the namespace of the shared contact model. @inject Services.ApiService apiService @using BlazorContacts.Shared.Models You can use the service in your page code as follows. In this case, I am overriding the OnInitialized method of the Blazor component so I can use the List<Contact> in the web interface. List<Contact> contacts; protected override async Task OnInitializedAsync() { contacts = await apiService.GetContactsAsync(); } A complete Blazor page that relies on apiService.GetContactsAsync() may look like the following. @page "/contacts" @inject Services.ApiService apiService @using BlazorContacts.Shared.Models <h3>Contacts</h3> @if (contacts == null) { <p><em>Loading...</em></p> } else { <table class="table"> <thead> <tr> <th>ID</th> <th>Name</th> <th>Phone Number</th> </tr> </thead> <tbody> @foreach (var contact in contacts) { <tr> <td>@contact.Id</td> <td>@contact.Name</td> <td>@contact.PhoneNumber</td> </tr> } </tbody> </table> } @code { List<Contact> contacts; protected override async Task OnInitializedAsync() { contacts = await apiService.GetContactsAsync(); } } To use the GetContactByIdAsync() method, simply pass the id value of the contact whose information you want to retrieve to the method. Contact contact3 = await apiService.GetContactByIdAsync(3); Putting it all together To test your Blazor solution, you will need to launch the Web API and the Blazor App at the same time. To do this, you must configure multiple projects to startup when you launch the debugger. Right click the BlazorContacts solution in the Solution Explorer. Select Set StartUp Projects. In the dialog that opens, choose Multiple startup projects and set the action for BlazorContacts.API and BlazorContacts.Web to Start. Now, when you start debugging, the API will run on and the web server will run on another port. I have configured mine to run on, for example. When you navigate to the /contacts page, the server-side Blazor frontend will fetch data from the external API using techniques that will properly dispose HttpClient and not result in socket exhaustion. Source code for this project is available on GitHub. In the coming tutorials, we will build off this application by adding IdentityServer4 protection to the API. 
4 thoughts on “Blazor, HttpClientFactory, and Web API” Hi, pretty nice. I would like to separate the webapi project from the blazor project. Try to follow your step by step example. But i’am in trouble with starting all together in VS2019. If i set the class-library as a startup, it says “a project with output type class library cannot be started directly”. Yepp, but if i change the start-project to “whatever, webapi or blazor-web” it didn’t start together. Try to compare everything with your github project, but can’t figure out the difference. Any hint would be very fine You won’t set the Shared project as a startup project, but you do need to add it as a reference to each of the projects that depend on it, such as the webapi and Blazor projects. You’ll then use multiple startup projects, setting both the webapi and Blazor projects to Start. I hope that helps! If you get stuck, let me know Thx alot again! Now it works, but anyway can’t figure out why it is not working at first hand. I’ve to create a new Solution (Blazor) add the two existing Projects, did the referencing … and it works. I think the first solution was kinda broken. So if someone has the same problem: delete the solution, create a new and add the projects again … in Creating the Blazor App… .” I cant see where this is being done only injecting in IHttpClient…. Can you help?
https://wellsb.com/csharp/aspnet/blazor-httpclientfactory-and-web-api/
CC-MAIN-2020-16
refinedweb
2,309
50.84
The Mappedin React Native SDK lets you render the maps of your venue, designed using Mappedin CMS, inside a React Native application. The SDK is a TypeScript package and can be downloaded from NPM. You can find the Mappedin React Native demo application on Github. In this section, we'll help you set up a React Native development environment. The demo code and these guides are written in TypeScript but can be adapted to a JavaScript project as well. For more information about the basic concepts of the SDK, you can review the Mappedin Web SDK Overview. Further pieces of this guide give you more details about how to set up the SDK and customize it as per your liking. The Mappedin React Native SDK uses the react-native-webview package to display the map. React Native project setup If you are starting a new project or want to follow along with the guides in an empty project, follow these steps. - Set up your environment and start a new React Native project following the official instructions: npx react-native init GettingStarted --template react-native-template-typescript - Run yarn start to start the Metro bundler. - Then run (in a separate terminal) yarn run ios to make sure everything works at this point. This will run the React Native starter project in an iOS simulator. - To install the Mappedin React Native SDK, run yarn add @mappedin/react-native-sdk react-native-webview. Make sure you build the project after installing dependencies with cd ios && pod install or npx pod-install, followed by launching the app in the iPhone simulator with yarn run ios. Rendering the map With just a few lines of code we can display the demo venue. In this case it is wrapped and centered inside the React Native component SafeAreaView. Replace the content of App.tsx in a fresh React Native project with the following or create a similar component in your own project. These guides use TypeScript; however, most of the code works as is or with small modifications (removal of types) in a JavaScript project as well. import React from 'react'; import { SafeAreaView } from 'react-native'; import { MiMapView } from '@mappedin/react-native-sdk'; export const App = () => { return ( <SafeAreaView style={{flex: 1}}> <MiMapView style={{ flex: 1 }} key="mappedin" options={{ clientId: '<MAPPEDIN_CLIENT_ID>', clientSecret: '<MAPPEDIN_CLIENT_SECRET>', venue: 'mappedin-demo-mall', perspective: 'Website', }} /> </SafeAreaView> ); };
https://developer.mappedin.com/react-native-sdk/v4/getting-started/
CC-MAIN-2022-33
refinedweb
383
51.68
Created on 2013-08-26 17:23 by aisaac, last changed 2016-10-14 05:20 by python-dev. This issue is now closed. The need for weighted random choices is so common that it is addressed as a "common task" in the docs: This enhancement request is to add an optional argument to random.choice, which must be a sequence of non-negative numbers (the weights) having the same length as the main argument. +1. I've found myself in need of this feature often enough to wonder why it's not part of the stdlib. Agreed with the feature request. The itertools dance won't be easy to understand, for many people. I realize its probably quite early to begin putting a patch together, but here's some preliminary code for anyone interested. It builds off of the "common task" example in the docs and adds in validation for the weights list. There are a few design decisions I'd like to hash out. In particular: - Should negative weights cause a ValueError to be raised, or should they be converted to 0s? - Should passing a list full of zeros as the weights arg raise a ValueError or be treated as if no weights arg was passed? [Madison May] > - Should negative weights cause a ValueError to be raised, or should they be converted to 0s? > - Should passing a list full of zeros as the weights arg raise a ValueError or be treated as if no weights arg was passed? Both those seem like clear error conditions to me, though I think it would be fine if the second condition produced a ZeroDivisionError rather than a ValueError. I'm not 100% sold on the feature request. For one thing, the direct implementation is going to be inefficient for repeated sampling, building the table of cumulative sums each time random.choice is called. A more efficient approach for many use-cases would do the precomputation once, returning some kind of 'distribution' object from which samples can be generated. (Walker's aliasing method is one route for doing this efficiently, though there are others.) I agree that this is a commonly needed and commonly requested operation; I'm just not convinced either that an efficient implementation fits well into the random module, or that it makes sense to add an inefficient implementation. [Mark Dickinson] > Both those seem like clear error conditions to me, though I think it would be fine if the second condition produced a ZeroDivisionError rather than a ValueError. Yeah, in hindsight it makes sense that both of those conditions should raise errors. After all: "Explicit is better than implicit". As far as optimization goes, could we potentially use functools.lru_cache to cache the cumulative distribution produced by the weights argument and optimize repeated sampling? Without @lru_cache: >>> timeit.timeit("x = choice(list(range(100)), list(range(100)))", setup="from random import choice", number=100000) 36.7109281539997 With @lru_cache(max=128): >>> timeit.timeit("x = choice(list(range(100)), list(range(100)))", setup="from random import choice", number=100000) 6.6788657720007905 Of course it's a contrived example, but you get the idea. Walker's aliasing method looks intriguing. I'll have to give it a closer look. I agree that an efficient implementation would be preferable but would feel out of place in random because of the return type. I still believe a relatively inefficient addition to random.choice would be valuable, though. +1 for the overall idea. I'll take a detailed look at the patch when I get a chance. 
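For readers following the thread, here is a sketch of the kind of validating helper under discussion (this is not the attached patch, just an illustration of the two design questions: negative weights and an all-zero weight list both rejected):

import bisect
import itertools
import random

def weighted_choice(values, weights):
    """Pick one element of values with probability proportional to weights."""
    if len(values) != len(weights):
        raise ValueError("values and weights must have the same length")
    if any(w < 0 for w in weights):
        raise ValueError("weights must be non-negative")
    cumulative = list(itertools.accumulate(weights))
    if not cumulative or cumulative[-1] == 0:
        raise ValueError("at least one weight must be positive")
    x = random.random() * cumulative[-1]
    return values[bisect.bisect(cumulative, x)]

print(weighted_choice(["red", "green", "blue"], [5, 1, 0]))  # mostly red, never blue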
The sticking point is going to be that we don't want to recompute the cumulative weights for every call to weighted_choice. So there should probably be two functions: cw = make_cumulate_weights(weight_list) x = choice(choice_list, cw) This is similar to what was done with string.maketrans() and str.translate(). > A more efficient approach for many use-cases would do the precomputation once, returning some kind of 'distribution' object from which samples can be generated. I like the idea about adding a family of distribution generators. They should check input parameters and make a precomputation and then generate infinite sequence of specially distributed random numbers. [Raymond Hettinger] > The sticking point is going to be that we don't want to recompute the > cumulative weights for every call to weighted_choice. > So there should probably be two functions: > cw = make_cumulate_weights(weight_list) > x = choice(choice_list, cw) That's pretty much how I broke things up when I decided to test out optimization with lru_cache. That version of the patch is now attached. [Serhiy Storchaka] > I like the idea about adding a family of distribution generators. > They should check input parameters and make a precomputation and then > generate infinite sequence of specially distributed random numbers. Would these distribution generators be implemented internally (see attached patch) or publicly exposed? > Would these distribution generators be implemented internally (see attached patch) or publicly exposed? See issue18900. Even if this proposition will be rejected I think we should publicly expose weighted choice_generator(). A generator or a builder which returns function are only ways how efficiently implement this feature. Use lru_cache isn't good because several choice generators can be used in a program and because it left large data in a cache long time after it was used. > Use lru_cache isn't good because several choice generators can be used in a program and because it left large data in a cache long time after it was used. Yeah, I just did a quick search of the stdlib and only found one instance of lru_cache in use -- another sign that lru_cache is a bad choice. > I like the idea about adding a family of distribution generators Let's stay focused on the OP's feature request for a weighted version of choice(). For the most part, it's not a good idea to "just add" a family of anything to the standard library. We wait for user requests and use cases to guide the design and error on the side of less, rather than more. This helps avoid bloat. Also, it would be a good idea to start something like this as a third-party to module to let it iterate and mature before deciding whether there was sufficient user uptake to warrant inclusion in the standard library. For the current request, we should also do some research on existing solutions in other languages. This isn't new territory. What do R, SciPy, Fortran, Matlab or other statistical packages already do? Their experiences can be used to inform our design. Alan Kay's big criticism of Python developers is that they have a strong propensity invent from scratch rather than taking advantage of the mountain of work done by the developers who came before them. > What do R, SciPy, Fortran, Matlab or other statistical packages already do? Numpy avoids recalculating the cumulative distribution by introducing a 'size' argument to numpy.random.choice(). The cumulative distribution is calculated once, then 'size' random choices are generated and returned. 
Their overall implementation is quite similar to the method suggested in the python docs. >>> choices, weights = zip(*weighted_choices) >>> cumdist = list(itertools.accumulate(weights)) >>> x = random.random() * cumdist[-1] >>> choices[bisect.bisect(cumdist, x)] The addition of a 'size' argument to random.choice() has already been discussed (and rejected) in Issue18414, but this was on the grounds that the standard idiom for generating a list of random choices ([random.choice(seq) for i in range(k)]) is obvious and efficient. Honestly, I think adding weights to any of the random functions are trivial enough to implement as is. Just because something becomes a common task does not mean it ought to be added to the stdlib. Anyway, from a user point of view, I think it'd be useful to be able to send a sequence to a function that'll weight the sequence for use by random. Just ran across a great blog post on the topic of weighted random generation from Eli Bendersky for anyone interested: The proposed patch add two methods to the Random class and two module level functions: weighted_choice() and weighted_choice_generator(). weighted_choice(data) accepts either mapping or sequence and returns a key or index x with probability which is proportional to data[x]. If you need several elements with same distribution, use weighted_choice_generator(data) which returns an iterator which produces random keys or indices of the data. It is more faster than calling weighted_choice(data) repeatedly and is more flexible than generating a list of random values at specified size (as in NumPy). Should this really be implemented using the cumulative distribution and binary search algorithm? Vose's Alias Method has the same initialization and memory usage cost (O(n)), but is constant time to generate each sample. An excellent tutorial is here: Thank you Neil. It is interesting. Vose's alias method has followed disadvantages (in comparison with the roulette wheel selection proposed above): 1. It operates with probabilities and uses floats, therefore it can be a little less accurate. 2. It consumes two random number (an integer and a float) for generating one sample. It can be fixed however (in the cost of additional precision lost). 3. While it has same time and memory O(n) cost for initialization, it has larger multiplication, Vose's alias method requires several times larger time and memory for initialization. 4. It requires more memory in process of generating samples. However it has an advantage. It really has constant time cost to generate each sample. Here are some benchmark results. "Roulette Wheel" is proposed above implementation. "Roulette Wheel 2" is its modification with normalized cumulative sums. It has twice more initialization time, but 1.5-2x faster generates each sample. "Vose's Alias" is an implementation of Vose's alias method directly translated from Java. "Vose's Alias 2" is optimized implementation which uses Python specific. Second column is a size of distribution, third column is initialization time (in milliseconds), fourth column is time to generate each sample (in microseconds), fifth column is a number of generated samples after which this method will overtake "Roulette Wheel" (including initialization time). 
Roulette Wheel 10 0.059 7.165 0 Roulette Wheel 2 10 0.076 4.105 5 Vose's Alias 10 0.129 13.206 - Vose's Alias 2 10 0.105 6.501 69 Roulette Wheel 100 0.128 8.651 0 Roulette Wheel 2 100 0.198 4.630 17 Vose's Alias 100 0.691 12.839 - Vose's Alias 2 100 0.441 6.547 148 Roulette Wheel 1000 0.719 10.949 0 Roulette Wheel 2 1000 1.458 5.177 128 Vose's Alias 1000 6.614 13.052 - Vose's Alias 2 1000 3.704 6.531 675 Roulette Wheel 10000 7.495 13.249 0 Roulette Wheel 2 10000 14.961 6.051 1037 Vose's Alias 10000 69.937 13.830 - Vose's Alias 2 10000 37.017 6.746 4539 Roulette Wheel 100000 73.988 16.180 0 Roulette Wheel 2 100000 148.176 8.182 9275 Vose's Alias 100000 690.099 13.808 259716 Vose's Alias 2 100000 391.367 7.095 34932 Roulette Wheel 1000000 743.415 19.493 0 Roulette Wheel 2 1000000 1505.409 8.930 72138 Vose's Alias 1000000 7017.669 13.798 1101673 Vose's Alias 2 1000000 4044.746 7.152 267507 As you can see Vose's alias method has very large initialization time. Non-optimized version will never overtake "Roulette Wheel" with small distributions (<100000), and even optimized version will never overtake "Roulette Wheel" with small distributions (<100000). Only with very large distributions Vose's alias method has an advantage (when you needs very larger number of samples). Because for generating only one sample we need a method with fastest initialization we need "Roulette Wheel" implementation. And because large distributions are rare, I think there is no need in alternative implementation. In worst case for generating 1000000 samples from 1000000-elements distribution the difference between "Roulette Wheel" and "Vose's Alias 2" is a difference between 20 and 11 seconds. Serhiy, from a technical standpoint, your latest patch looks like a solid solution. From an module design standpoint we still have a few options to think through, though. What if random.weighted_choice_generator was moved to random.choice_generator and refactored to take an array of weights as an optional argument? Likewise, random.weighted_choice could still be implemented with an optional arg to random.choice. Here's the pros and cons of each implementation as I see them. Implementation: weighted_choice_generator + weighted_choice Pros: Distinct functions help indicate that weighted_choice should be used in a different manner than choice -- [weighted_choice(x) for _ in range(n)] isn't efficient. Can take Mapping or Sequence as argument. Has a single parameter Cons: Key, not value, is returned Requires two new functions Dissimilar to random.choice Long function name (weighted_choice_generator) Implementation: choice_generator + optional arg to choice Pros: Builds on existing code layout Value returned directly Only a single new function required More compact function name Cons: Difficult to support Mappings Two args required for choice_generator and random.choice Users may use [choice(x, weights) for _ in range(n)] expecting efficient results I think Storchaka's solution is more transparent and I agree with him on the point that the choice generator should be exposed. > I think Storchaka's solution is more transparent and I agree with him on the point that the choice generator should be exposed. Valid point -- transparency should be priority #1 Most existing implementation produce just index. That is why weighted_choice() accepts singular weights list and returns index. On the other hand, I think working with mapping will be wished feature too (especially because Counter is in stdlib). 
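Stepping back to the algorithms being benchmarked: for readers who have not seen it, a compact, unoptimized transcription of Vose's alias method follows (illustrative only, not the attachment from this issue). It needs O(n) setup, after which each draw costs one random index plus one random float.

import random

class AliasSampler:
    """Vose's alias method: O(n) setup, O(1) per sample (sketch)."""
    def __init__(self, weights):
        n = len(weights)
        total = sum(weights)
        prob = [w * n / total for w in weights]
        self.alias = [0] * n
        self.threshold = [0.0] * n
        small = [i for i, p in enumerate(prob) if p < 1.0]
        large = [i for i, p in enumerate(prob) if p >= 1.0]
        while small and large:
            s, l = small.pop(), large.pop()
            self.threshold[s] = prob[s]
            self.alias[s] = l
            prob[l] -= 1.0 - prob[s]
            (small if prob[l] < 1.0 else large).append(l)
        for i in small + large:      # leftovers equal 1.0 up to rounding error
            self.threshold[i] = 1.0
        self.n = n

    def sample(self):
        i = random.randrange(self.n)
        return i if random.random() < self.threshold[i] else self.alias[i]

sampler = AliasSampler([1, 1, 8])
counts = [0, 0, 0]
for _ in range(10000):
    counts[sampler.sample()] += 1
print(counts)  # roughly [1000, 1000, 8000]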
Indexable sequences and mappings are similar. In both cases weighted_choice() returns a value which can be used as an index/key into the input argument. If you need to choose an element from some sequence, just use seq[weighted_choice(weights)]. Actually weighted_choice() has no common code with choice() and has quite different use cases. They should be dissimilar as far as possible. Perhaps we should even avoid the "choice" part in function names (are there any ideas?) to accent this. You have me convinced, Serhiy. I see the value in making the two functions distinct. For naming purposes, perhaps weighted_index() would be more descriptive. Closed issue 22048 as a duplicate of this one. Raymond, what is your opinion? I don't want to speak for Raymond, but the proposed API looks good, and it seems "Roulette Wheel 2" should be the implementation choice given its characteristics (simple, reasonably good and balanced performance). "Roulette Wheel 2" has twice the initialization time of "Roulette Wheel", but then generates every new item twice as fast. It is possible to implement a hybrid generator, which yields the first item using "Roulette Wheel", and then rescales cumulative_dist and continues with "Roulette Wheel 2". It will be as fast as "Roulette Wheel" for generating only one item and as fast as "Roulette Wheel 2" for generating multiple items. The setup cost of RW2 should always be within a small constant multiplier of RW's, so I'm not sure it's worth the hassle to complicate things. But it's your patch :) The non-generator weighted_choice() function is intended to produce exactly one item. This is a use case for such an optimization. Updated patch. Synchronized with tip and added optimizations. I'm averse to adding the generator magic and the level of complexity in this patch. Please aim for the approach I outlined above (one function to build cumulative weights and another function to choose the value). Since this isn't a new problem, please take a look at how other languages and packages have solved the problem. Other languages have no such handy feature as generators. NumPy provides the size parameter to all functions and generates a bunch of random numbers at a time. This doesn't look pythonic (the Python stdlib prefers iterators). I believe a generator is the most Pythonic and most handy solution to this issue in Python. And it is efficient enough. I agree with Serhiy. There is nothing "magic" about generators in Python. Also, the concept of an infinite stream of random numbers (or random whatevers) is perfectly common (/dev/urandom being an obvious example); it is not a notion we are inventing. By contrast, the two-function approach only makes things clumsier for people since they have to remember to combine them. When I get a chance, I'll work up an approach that is consistent with the rest of the module in terms of implementation, restartability, and API. I) Reopen this idea but removing the generator from weighted choice. The entire function of weighted choice. I removed the generator and replaced it by adding an optional argument to specify an amount by which you want to call this function. Thanks for the patch. Can I get you to fill out a contributor agreement? What is wrong with generators? Hello rhettinger. I filled out the form, thanks for letting me know about it. Is there anything else I have to do? Hey serhiy.storchaka There were several things "wrong" with the previous implementation in my opinion. 1st they tried to add too much.
Which, if allowed, would clutter up the random library if every function had both its implementation as well as an accompanying generator. The other problem being that both were attempted to be made callable from the public API. I would prefer the generator, if present, to be hidden, and it would also have to be more sophisticated to be able to check if it was being called with new input. 2nd, by adding the generator to the public API of the random library it makes it far more confusing and obfuscates the true purpose of this function anyways, which is to get a weighted choice. So basically there is nothing wrong with generators, but they don't necessarily belong here, so I removed it to try to get back to the core principles of what the function should be doing, by making it simpler. I disagree. My patch adds two functions because they serve two different purposes. weighted_choice() returns one random value, like other functions in the random module. weighted_choice_generator() provides a more efficient way to generate random values, since the startup cost is more significant than for other random value generators. Generators are widely used in Python, especially in Python 3. If they are considered confusing, we should deprecate the builtins map(), filter(), zip() and the itertools module in the first place. Your function, Steven, returns a list containing one random value by default. It does not match the interface of other functions in the random module. It matches the interface of the NumPy random module. In Python you need two separate functions, one that returns a single value, and another that returns a list of values. But returning an iterator and generating values on demand is preferable in Python 3. Generators are more flexible. With weighted_choice_generator() it is easy to get the result of your function: list(islice(weighted_choice_generator(data), amount)). But generating a dynamic amount of values with your interface is impossible. Raymond, if you now have free time, could you please make a review of weighted_choice_generator_2.patch? Hey serhiy.storchaka I can edit the code to output just one value if called with simply a list and then return a list of values if called with the optional amount parameter. My code also needs to check that amount >= 1. My code was mostly just to restart this discussion as I personally like the idea of the function for weighted choice and would like it to be standard in the random library. I have no qualms with adding both weighted_choice and weighted_choice_generator but my concern is mostly that you are asking too much and it won't go through by trying to add two functions at the same time. The other thing is that I believe that weighted_choice could suffice with just one function call. My last concern is that generators are different from the other functions in random.py. Whereas they are more intuitive and accepted in the builtins like map and zip etc., there aren't any other functions in the random library that return that type of object when called. They instead return a numerical result. Those are my concerns and hence why I rewrote the code. A user can use map(), filter(), zip() without knowing anything about generators. In most cases those functions will do their magic and provide a finite number of outputs. The weighted_choice_generator on the other hand isn't as easy to use. If the user wants 5 values from it, they need to know about `take()` from itertools or call `next()`. I still like Serhiy's implementation more.
A function that returns a list instead of the item is unnatural and doesn't fit with the rest of the module. I think there's need to be some discussion about use cases. What do users actually want? Maybe post this on the ideas list. Okay. I reuploaded the file. The spacing on the if amount < 1 was off. Hopefully its fixed now. > One to make it return a single number if amount == 1 and the other to check that the amount > 1. I think that's a dangerous API. Any code making a call to "weighted_choice(..., amount=n)" for variable n now has to be prepared to deal with two possible result types. It would be easy to introduce buggy code that fails in the corner case n = 1. > One to make it return a single number if amount == 1 and the other to check that the amount > 1. Suggestion: if you want to go that way, return a single number if `amount` is not provided (so make the default value for `amount` None rather than 1). If `amount=1` is explicitly given, a list containing one item should be returned. I also think there's no reason to raise an exception when `amount = 0`: just return an empty list. For comparison, here's NumPy's "uniform" generator, which generates a scalar if the "size" parameter is not given, and an array if "size" is given, even if it's 1. >>> np.random.uniform() 0.4964992470265117 >>> np.random.uniform(size=1) array([ 0.64817717]) >>> np.random.uniform(size=0) array([], dtype=float64) > Suggestion: if you want to go that way, return a single number if `amount` is not provided (so make the default value for `amount` None rather than 1). If `amount=1` is explicitly given, a list containing one item should be returned. +1 Re-implemented with suggested improvements taken into account. Thanks @mark.dickinson and @pitrou for the suggestions. I also removed the redundant "fast path" portion for this code since it doesn't deal with generators anyways. Let me know additional thoughts about it. Left in a line of code that was supposed to be removed. Fixed. Raymond, do you have a time for this issue? Raymond, any chance to get weighted random choices generator in 3.6? Less than month is left to feature code freeze. FWIW, I have four full days set aside for the upcoming pre-feature release sprint which is dedicated to taking time to thoughtfully evaluate pending feature requests. In the meantime, I'm contacting Alan Downey for a consultation for the best API for this. As mentioned previously, the generator version isn't compatible with the design of the rest of the module that allows streams to have their state saved and restored at arbitrary points in the sequence. One API would be to create a list all at once (like random.sample does). Another would be to have two steps (like str.maketrans and str.translate). Ideally, the API should integrate neatly with collections.Counter as a possible input for the weighting. Hopefully, Alan can also comment on the relative frequency of small integer weightings versus the general case (the former benefits from a design using random.choice() applied to Counter.elements() and the latter benefits from a design with accumulate() and bisect()). Note, this is a low priority feature (no real demonstrated need, there is already a recipe for it in the docs, and once the best API have been determined, the code is so simple that any of us could implement it in only a few minutes). Latest draft patch attached (w/o tests or docs). Incorporates consultation from Alan Downey and Jake Vanderplas. 
* Population and weights are separate arguments (like numpy.random.choice() and sample() in R). Matches the way data would arrive in Pandas. Easily extracted from a Counter or dict using keys() and values(). Suitable for applications that sample the population multiple times but using different weights. See and
* Excludes a replacement=False option. That use case necessarily has integer weights and may be better suited to the existing random.sample() rather than trying to recompute a CDF on every iteration as we would have to in this function.
* Allows cumulative_weights to be submitted instead of individual weights. This supports use cases where the CDF already exists (as in the ThinkBayes examples) and where we want to periodically reuse the same CDF for repeated samples of the same population -- this occurs in resampling applications, Gibbs sampling, and Monte Carlo Markov Chain applications. Per Jake, "MCMC/Gibbs Sampling approaches generally boil down to a simple weighted coin toss at each step" and "It's definitely common to do aggregation of multiple samples, e.g. to compute sample statistics"
* The API allows the weights to be integers, fractions, decimals, or floats. Likewise, the population and weights can be any Sequence. Population elements need not be hashable.
* Returning a list means that we don't have to save state in mid-stream (that is why we can't use a generator). A list feeds nicely into Counters, mean, median, stdev, etc. for summary statistics. Returning a list parallels what random.sample() does, keeping the module internally consistent.
* Default uniform weighting falls back to random.choice() which would be more efficient than bisecting.
* Bisecting tends to beat other approaches in the general case. See
* Incorporates error checks for len(population)==len(cum_weights) and for conflicting specification of both weights and cumulative weights.

The API is not perfect and there are some aspects that give me heartburn.
1) Not saving the computed CDF is a waste and forces the user to pre-build the CDF if they want to save it for later use (the API could return both the selections and the CDF but that would be awkward and atypical).
2) For the common case of having small integer weights on a small population, the bisecting approach is slower than using random.choice on a population expanded to include the selections multiple times in proportion to their weights (that said, short of passing in a flag, there is no cheap easy way for this function to detect that case and give it a fast path).
3) Outputting a list is inefficient if all you're doing with the result is summarizing it with a Counter, histogram tool, mean, median, or stdev.
4) There is no cheap way to check to see if the user supplied cum_weights is sorted or if the weights contain negative values.

I've gone through the patch -- looks good to me.

New changeset a5856153d942 by Raymond Hettinger in branch 'default': Issue #18844: Add random.weighted_choices()

Thanks Davin.

1. Returning a list instead of an iterator looks unpythonic to me. Values are generated sequentially; there is no advantage to returning a list.
2. An implementation lacks optimizations used in my patch.
3. The documentation still contains a recipe for weighted choice. It is incompatible with the new function. Using a generator doesn't prevent state from being saved and restored.

New changeset 39a4be5e003d by Raymond Hettinger in branch '3.6': Issue #18844: Make the number of selections a keyword-only argument for random.choices().
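For illustration, here is a rough sketch (not the committed CPython patch) of how the accumulate()-plus-bisect design described above fits together; the function name choices_sketch and the exact error messages are editorial assumptions:

from bisect import bisect
from itertools import accumulate
import random

def choices_sketch(population, weights=None, *, cum_weights=None, k=1):
    """Return a k-length list of elements chosen from population with replacement.

    Relative `weights` or precomputed `cum_weights` may be given; with
    neither, selection is uniform (falls back to random.choice()).
    """
    if cum_weights is None:
        if weights is None:
            return [random.choice(population) for _ in range(k)]
        cum_weights = list(accumulate(weights))       # build the CDF
    elif weights is not None:
        raise TypeError('Cannot specify both weights and cumulative weights')
    if len(cum_weights) != len(population):
        raise ValueError('The number of weights does not match the population')
    total = cum_weights[-1]
    # One uniform draw per sample, located in the CDF by binary search.
    return [population[bisect(cum_weights, random.random() * total)]
            for _ in range(k)]

# Example: a heavily biased coin.
# print(choices_sketch(['heads', 'tails'], [0.9, 0.1], k=10))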
Equidistributed examples: choices(c.execute('SELECT name FROM Employees').fetchall(), k=20) choices(['hearts', 'diamonds', 'spades', 'clubs'], k=5) choices(list(product(card_facevalues, suits)), k=5) Weighted selection examples: Counter(choices(['red', 'black', 'green'], [18, 18, 2], k=3800)) # american roulette Counter(choices(['hit', 'miss'], [5, 1], k=600)) # russian roulette choices(fetch('employees'), fetch('years_of_service'), k=100) # tenure weighted choices(cohort, map(cancer_risk, map(risk_factors, cohort)), k=50) # risk weighted Star unpacking example: transpose = lambda s: zip(*s) craps = [(2, 1), (3, 2), (4, 3), (5, 4), (6, 5), (7, 6), (8, 5), (9, 4), (10, 3), (11, 2), (12, 1)] print(choices(*transpose(craps), k=10)) Comparative APIs from other languages: ################################################################### # Flipping a biased coin from collections import Counter from random import choices print(Counter(choices(range(2), [0.9, 0.1], k=1000))) ################################################################### # Bootstrapping 'From a small statistical sample infer a 90% confidence interval for the mean' # from statistics import mean from random import choices data = 1, 2, 4, 4, 10 means = sorted(mean(choices(data, k=5)) for i in range(20)) print('The sample mean of {:.1f} has a 90% confidence interval from {:.1f} to {:.1f}'.format( mean(data), means[1], means[-2])) New changeset 433cff92d565 by Raymond Hettinger in branch '3.6': Issue #18844: Fix-up examples for random.choices(). Remove over-specified test. New changeset d4e715e725ef by Raymond Hettinger in branch '3.6': Issue #18844: Add more tests
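One pattern the cumulative-weights option above is meant to support is building the CDF once and reusing it across many draws (resampling, Gibbs sampling, and similar loops). A small illustrative example using the final random.choices() API; the variable names are made up for the example:

from itertools import accumulate
from random import choices

population = ['red', 'black', 'green']
cum_weights = list(accumulate([18, 18, 2]))   # compute the CDF once: [18, 36, 38]

# Reuse the same CDF for many independent batches of draws.
batches = [choices(population, cum_weights=cum_weights, k=100) for _ in range(10)]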
http://bugs.python.org/issue18844
CC-MAIN-2016-44
refinedweb
4,909
56.76
public class ArrayConversion {
    public static void main(String[] args) {
        int i,j,flag=0;
        int count=10;
        int ar[][]=new int[2][3];
        int arCopy[]=new int[6];
        for(i=0;i<2;i++){
            for(j=0;j<3;j++){
                ar[i][j]=count;
                count++;
            }
        }
        for(i=0;i<2;i++){
            for(j=0;j<3;j++){
                System.out.print(ar[i][j]);
                System.out.print(" ");
            }
            System.out.println(" ");
        }
        for(i=0;i<2;i++){
            for(j=0;j<3;j++){
                arCopy[flag]=ar[i][j];
                flag++;
            }
        }
        System.out.println(" ");
        for(i=0;i<arCopy.length;i++){
            System.out.print(arCopy[i]);
            System.out.print(" ");
        }
    }
}

Try to fill in the blanks.

public class Foo {
    public static int[] twoDimToOneDim(int[][] twoDim) {
        int[] oneDim = new int[numberOfElements(twoDim)];
        // Loop through the 2d array and place each element in the 1d array
        return oneDim;
    }

    private static int numberOfElements(int[][] twoDim) {
        int number = 0;
        // Loop through the 2d array and count how many elements it contains.
        return number;
    }

    public static void main(String[] args) {
        int[][] twoDim = {{1,2,3,4},{5,6},{7,8,9}};
        int[] oneDim = twoDimToOneDim(twoDim);
        System.out.println(java.util.Arrays.toString(oneDim));
    }
}

thanks alot... well i will format it accordingly....

Sorry, I really can't read that. How will you be able to write code if you can't properly spell words and use normal punctuation?

but is this code is what we can say as conversion of 2D array 1D array...???????i mean the conecpt of my code is alrite ??? it conversion only .na and not jus copying??? iam really sorry.. i have already been warned here...- a new sentence starts with a capital (A,B,C, ... , Z); iam trying to improve writting... well it;s all because of chatting ,,, but will try and improve more.. sorry once again!!

> Now i was asking that .. the code i have written or you have given .. is what we call converison right??? it is not just copying but conversion ... right???????i mean if some one asks for a 2d to 1d array conversion then this code is what we should give them????

I did not look too closely at your code. It's hard to follow and it will only work for arrays of 2*3. Every time you're changing the size of the 2D array, you will have to modify your algorithm: this is not good!

> and can you please give me the full code.. i mean how to loop thorugh and count variable and how to assign them to oneDim array.. if you can provide will really help thanks alot!!!

Why don't you try it yourself?

> Now i was asking that .. the code i have written or you have given .. is what we call converison right??? it is not just copying but conversion ... right???????i mean if some one asks for a 2d to 1d array conversion then this code is what we should give them????

If someone asks you that, you should ask him what he actually wants. It depends on whether he calls it conversion.

Object[][] x = ...
for (... i < x.length ...) {
    // ...
    for (... j < x[i].length ...) {
        // inserting of x[i][j] here
    }
}

> I just wanted to confirm about 2d and 1d arrays.

It's "better" to write a more general method for handling this. What if your requirements change and the 2D array expands into something larger?

Regarding trying myself iam a bit confused about looping through and assigning variables as we already know how many elements we have in our 2d array then why count number?
> Also i am not getting how to loop and put 2d array contents in 1d array, i mean do i have to put in 2 loops, then what should be the limits for them i mean .lenght method cannont be called then wat should be the value for both the loops for the ending condition of loops. Hope iam clear.

Ok, here's a little demo to loop through a 2D array:

int[][] twoDim = {{1,2,3,4},{5,6},{7,8,9}};
int oneDimIndex = 0;
for(int i = 0; i < twoDim.length; i++) {
    int[] row = twoDim[i];
    for(int j = 0; j < row.length; j++) {
        System.out.println("twoDim["+i+"]["+j+"] = "+row[j]+ ", should be at oneDim["+oneDimIndex+"]");
        oneDimIndex++;
    }
}

Try to incorporate it into what I have posted earlier. But first run this in a single main-method and understand how it works.

> PS-Thanks alot for your advice,i wil surely try to improve my writing even more.

Thank you for your understanding. It really is much better like this! Good luck.

> thanks alot prometheuzz and CeciNEstPasUnProgrammeur , i got it working and also understood how it works, well i havent read the array class yet thats why was getting problem but now it;s all fine.

You're welcome.

> Thanks again and just one question when we declare a 2D array suppose twoDim and then call the twoDim.length, then wat value is assigned to the lenght, the rows value or the cloumn??

They're not really called rows and columns, but it's easy to remember them like that.
twoDim.length -> rows
twoDim[i].length -> number of elements in a row at index 'i' (columns), it can be different per row
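For completeness, here is one way the blanks in the earlier Foo skeleton could be filled in; this is an editorial sketch, not code posted in the original thread:

public class Foo {
    public static int[] twoDimToOneDim(int[][] twoDim) {
        int[] oneDim = new int[numberOfElements(twoDim)];
        int index = 0;
        // Loop through the 2d array and place each element in the 1d array.
        for (int i = 0; i < twoDim.length; i++) {
            for (int j = 0; j < twoDim[i].length; j++) {
                oneDim[index++] = twoDim[i][j];
            }
        }
        return oneDim;
    }

    private static int numberOfElements(int[][] twoDim) {
        int number = 0;
        // Loop through the 2d array and count how many elements it contains.
        for (int i = 0; i < twoDim.length; i++) {
            number += twoDim[i].length;
        }
        return number;
    }

    public static void main(String[] args) {
        int[][] twoDim = {{1,2,3,4},{5,6},{7,8,9}};
        int[] oneDim = twoDimToOneDim(twoDim);
        System.out.println(java.util.Arrays.toString(oneDim)); // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}

Note that the inner loop bound is twoDim[i].length, so the method works even when the rows have different lengths.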
https://community.oracle.com/thread/1211221?tstart=89880
CC-MAIN-2014-15
refinedweb
886
76.01
package org.netbeans.modules.tasklist.usertasks.model;

import junit.framework.Test;

import org.netbeans.junit.NbTestCase;
import org.netbeans.junit.NbTestSuite;

/**
 * Tests for Duration.
 *
 * @author tl
 */
public class DurationTest extends NbTestCase {
    public DurationTest (String name) {
        super (name);
    }

    public static Test suite () {
        return new NbTestSuite(TaskListTest.class);
    }

    public void testDummy() throws Exception {
        Duration d = new Duration(8 * 60, 8 * 60, 7, true);
        assertTrue(d.equals(new Duration(0, 1, 0, 0)));
        d = new Duration(60, 8 * 60, 5, true);
        assertTrue(d.equals(new Duration(0, 0, 1, 0)));
    }
}
http://kickjava.com/src/org/netbeans/modules/tasklist/usertasks/model/DurationTest.java.htm
CC-MAIN-2018-09
refinedweb
125
52.66
SYNOPSIS
#include <sys/sendfile.h>
ssize_t sendfile(int out_fd, int in_fd, off_t *offset, size_t count);

DESCRIPTION
in_fd must correspond to a file which supports mmap(2)-like operations (i.e., it cannot be a socket), and out_fd must refer to a socket. Applications may wish to fall back to read(2)/write(2) in the case where sendfile() fails with EINVAL or ENOSYS.

RETURN VALUE
If the transfer was successful, the number of bytes written to out_fd is returned. On error, -1 is returned, and errno is set appropriately.

ERRORS.

VERSIONS
sendfile() is a new feature in Linux 2.2. The include file <sys/sendfile.h> is present since glibc 2.1.
minimize the number of packets and to tune performance. In Linux 2.4 and earlier, out_fd could refer to a regular file, and sendfile() changed the current offset of that file.

SEE ALSO
mmap(2), open(2), socket(2), splice(2)

COLOPHON
This page is part of release 3.23 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
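As a usage illustration (not part of the man page itself), a regular file can be streamed to a connected socket roughly as follows; the helper name send_file and its error handling are editorial assumptions:

/* Sketch: copy a regular file to a connected socket with sendfile(2). */
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

/* sockfd is assumed to be an already-connected socket; returns 0 on success. */
int send_file(int sockfd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;

    struct stat st;
    if (fstat(fd, &st) == -1) {
        close(fd);
        return -1;
    }

    off_t offset = 0;                       /* updated by sendfile() */
    while (offset < st.st_size) {
        ssize_t sent = sendfile(sockfd, fd, &offset, st.st_size - offset);
        if (sent == -1) {
            perror("sendfile");             /* on EINVAL/ENOSYS, fall back to read/write */
            close(fd);
            return -1;
        }
    }
    close(fd);
    return 0;
}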
http://www.linux-directory.com/man2/sendfile64.shtml
crawl-003
refinedweb
169
68.16
Windows Runtime delegates and object lifetime in C# and other GC languages Raymond In C# and other GC languages such as JavaScript, delegates (most typically used as event handlers) capture strong references to objects in their closures. This means that you can create reference cycles that are beyond the ability of the GC to collect. using Windows.Devices.Enumeration; class Circular { DeviceWatcher watcher; public Circular() { watcher = DeviceInformation.CreateWatcher(); watcher.Added += OnDeviceAdded; } void OnDeviceAdded(DeviceWatcher sender, DeviceInformation info) { ... } } The Circular class contains a reference to a DeviceWatcher, which in turn contains a reference (via the delegate) back to the Circular. This circular reference will never be collected because one of the participants is a DeviceWatcher, which is beyond the knowledge of the garbage collector. From the garbage collector’s point of view, the system looks like this: The garbage collector has full knowledge of the green boxes “delegate” and “Circular” because they are CLR objects. The garbage collector does not know about the dotted-line boxes because they are external objects beyond the scope of the CLR. What the garbage collector knows is that there is an outstanding reference to the delegate from some unknown external source, and it knows that that delegate has a reference to the Circular object, and it knows that the Circular object has a reference to some external object that goes by the name of DeviceWatcher. but it has no insight into what the DeviceWatcher object may have references to, because the DeviceWatcher is not a CLR object. It has no idea that the DeviceWatcher was in fact the question mark the whole time.¹ To avoid a memory leak, you will have to break this circular reference. Ideally, there is some natural place to do this cleanup. For example, if you are a Page, you can clean up in your OnNavigatedFrom method, or in response to the Unloaded event. Less ideally, you could add a cleanup method, possibly codified in the IDisposable pattern. There is a special case: The XAML framework has a secret deal with the CLR, whereby XAML shares more detailed information about the references it holds. This information makes it possible for the CLR to break certain categories of circular references that are commonly-encounted in XAML code. For example, this circular reference can be detected by the CLR with the assistance of information provided by the XAML framework: <!-- XAML --> <Page x:Name="AwesomePage" ...> ... <Button x:Name="SomeNamedButton" ... > ... </Page> // C# code-behind partial class AwesomePage : Page { AwesomePage() { InitializeComponent(); SomeNamedButton.Click += SomeNamedButton_Click; } void SomeNamedButton_Click(object sender, RoutedEventArgs e) { ... } } There is a circular reference here between the AwesomePage and the SomeNamedButton, but the extra information provided by the XAML framework gives the CLR enough information to recognize the cycle and collect it when it becomes garbage. ¹ “It was the question mark all along” sounds like the spoiler to a bad M. Night Shyamalan movie. Do you mean that Circular and DeviceWatcher will never be cleaned up, even when there are no external references to it? I always thought that the GC would look at the root objects of the application and clean up every object which can not be referenced by a root object or a local variable (at least in the classic .net framework). 
That way it should be able to remove circular references which can’t be referenced by “alive” objects. One of them might be pinned (by virtue of having underlying native resources) I agree, there is some critical piece of information missing from this article. The DeviceWatcher is not a CLR object, so the GC doesn’t have insight into it. The GC doesn’t know what objects DeviceWatcher has references to. I updated the article to clarify.
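As an illustrative sketch (not from the original article), one way to break the cycle described above is to unhook the event handler once the object is done with the watcher, for example via IDisposable; the Dispose() body here is an assumption about how such cleanup might be structured:

// Hypothetical cleanup for the Circular class: removing the handler drops the
// delegate's reference back to `this`, so the external DeviceWatcher no longer
// keeps the Circular object reachable.
using System;
using Windows.Devices.Enumeration;

class Circular : IDisposable
{
    DeviceWatcher watcher;

    public Circular()
    {
        watcher = DeviceInformation.CreateWatcher();
        watcher.Added += OnDeviceAdded;
    }

    void OnDeviceAdded(DeviceWatcher sender, DeviceInformation info) { /* ... */ }

    public void Dispose()
    {
        if (watcher != null)
        {
            // If the watcher was started, it should also be stopped here.
            watcher.Added -= OnDeviceAdded;
            watcher = null;
        }
    }
}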
https://devblogs.microsoft.com/oldnewthing/20190522-00/?p=102511
CC-MAIN-2021-10
refinedweb
619
51.78
DID Working Group F2F, 1st day — Minutes Date: 2020-01-29 See also the Agenda and the IRC Log Attendees Present: Manu Sporny, Samuel Smith, Brent Zundel, Daniel Burnett, Tobias Looker, Ganesh Annan, Michael Jones, Drummond Reed, Joe Andrieu, Kenneth Ebert, Yancy Ribbens, Yoshiaki Fukami, Christopher Allen, David Ezell, Kaliya Young, Oliver Terbu, Justin Richer, Ivan Herman, Markus Sabadello, Jonathan Holt, Eugeniu Rusu, Phil Archer, Amy Guy, Charles Cunningham, Joachim Lohkamp Regrets: Guests: Eric Welton, Oskar van Deventer, Carsten Stöcker, Michael Shea, Juan Caballero Chair: Daniel Burnett, Brent Zundel Scribe(s): Manu Sporny, Brent Zundel, Daniel Burnett, Joe Andrieu, Kenneth Ebert, Yancy Ribbens Content: - 1. Welcome - 2. Level Setting - 3. security - 4. DIDs and IoT - 5. Encodings - 6. Different Encodings: Model Incompatibilities - 7. Abstract Data Modeling Options - 8. DID Doc Extensibility via Registries - 9. DID Doc Extensibility using JSON-LD - 10. continued Extensibility discussion - 11. Metadata Ivan Herman: Meeting slides: 1. Welcome Manu Sporny: brent is going over logistics. Brent Zundel: We are in Spaces, Microsoft Schiphol (thank you Microsoft!) … Dial in information has been sent out to the mailing list. … We need to get dinner organized tonight. About 20 people raised their hands. Christopher will plan dinner. Daniel Burnett: We will get a plan together for dinner by noon-ish. Brent Zundel: IPR Policy - if you are a DID WG member, we look forward to your contributions. If you are observing, please refrain from making substantive contributions. Eric Welton: I promise not to be substantive. laughing Daniel Burnett: You can provide input on requirements, goals, etc… but not how to solve the problem. Brent Zundel: Today’s agenda is here – … We have introductions, thank you to Microsoft for the space, food, etc. … We start the day with level setting, security issues, DID and IoT. Then a break. … We will then talk about encodings and data models. Lunch. Then extensibility, break, then metadata and an open slot at the end of the day. … we can talk about issues/PRs in open slots in agenda. … This is a good time to learn how to scribe, please volunteer. … Put your name on one of these boxes on Agenda if you want to scribe. Don’t scribe when you’re presenting. … The scribing of the meeting is happening in IRC - on #did channel. Daniel Burnett: Everyone needs to be on IRC. Brent Zundel: If you have something to say, queue, present+, etc. … We are going to introduce ourselves - your name, who you work for, single sentence why you’re here, what is your favorite programming language and why? Daniel Burnett: Hi Dan Burnett, my favorite language is Oz - created for teaching, started with small primitives, then built from there - language can do just about anything you can think of, with a small set of primitives. Example on screen shows how easy it is to do multithreading. Brent Zundel: Hi Brent Zundel, I work for Evernym. I’m here because I love DIDs, and I love the power of Self Sovereign Identity… my favorite language is ML - small functional programming language from Haskell. In grad school, I had to write a parser generator … everyone else was using C++, had to print out code because professor was a sadist… everyone else had 30-40 pages of code, my ML program was a page and a half. Kenneth Ebert: Hi Ken Ebert, from Sovrin Foundation. I’m here because of focus on privacy. Favorite programming language is used for Karel the robot, great beginning programming language. 
Tobias Looker: Hi Tobias Looker, from Mattr. Working on decentralized identifier tech - excited about new wave of technology for this standard. My favorite language is TypeScript or C# - from a syntax perspective, good static/dynamic balance. Yancy Ribbens: Hi I’m Yancy Ribbens, I work for no one. I took a compilers class, had to use ML - favorite language is Rust recently - new for me, like type safety and reinvention - how to write memory safe code. Yoshiaki Fukami: Hi Yoshiaki Fukami, working for Data Trading Alliance, an industrial consortium in Japan in Japan. I’m working for European Japanese reference architecture, identifiers are very important. I’m not a programmer, very difficult to choose. Christopher Allen: Hi Christopher, I used to be with Blockstream, now with Blockstream Commons. I also do RWoT. I like highly constrained environments, most fun experience is implementing Forth in assembler - 12 words in assembler, rest is made by composing base words. I’d like to do that, but in principles of typed language, believe ML is one of them. Michael Jones: Hi Mike Jones at Microsoft. Worked on a lot of different standards in identity space… OAuth, JWT, OpenID Connect - bringing some of that expertise to bear here. Want to make DIDs useful/simple while making things decentralized. I liked Bliss - DEC system DEC10, DEC 20, VAX - Bliss unofficially was an acronym for “Bill’s Language for Implementing Systems Software” - gave you full control over bits, bytes, pointers… but in a more typesafe and principled way. I write C# these days mostly because it’s easy. Drummond Reed: Hi I’m Drummond Reed at Evernym, helped get this DID thing going. Not a programmer, except in ABNF, love ABNF. I have written a few things in Basic. Sam Smith: Hi Sam Smith, worked in a lot of different things. Here on behalf of ConsenSys. Happy to be here. Also, worked in space for a while in IoT. Favorite programming language is Python - started using Python in version 1 in production. Carsten Stöcker Hi Carsten from Spherity, physicist. I like DIDs because unknown actors will meet each other to establish trust using DIDs/VCs. I do a lot of C / C++, lots of simulation. Oskar van Deventer: Hi Oskar from TNO, Dutch Research Institution. I’m technical coordinator for SSI arena focus on interoperability, integration, getting whole ecosystem running. I’m here to liaise with all of you. I like Java because I’m an organized guy. Joe Andrieu: I’m Joe Andrieu, I’m here on behalf of Legendary Requirements. I do requirements engineering. I like lisp, like LPC (written in Amsterdam). Michael Jones: I’d like to retract my favorite programming language, now xml2rfc. :) Manu Sporny: laughing Manu Sporny: Hi, I’m Manu, CEO of Digital Bazaar. One of the editors of the DID Spec, my favorite programming language is Javascript because it started messay and dirty , but turned into something that everyone uses. Drummond Reed: Javascript = “the people’s language” = true Michael Shea: Hi Michael Shea, independent, former programmer many years ago. C and C++ were my favorites - also leading task force for SSI and IoT for Sovrin Foundation. Seeing DIDs and IoT and some activity on mailing list - why I decided it was useful to join. Oskar van Deventer: The European project eSSIF-Lab (essif-lab.eu) funds start-ups and SMEs to build useful SSI stuff: open-source components, generic SSI functionality and services, and sector-specific SSI applications. Email: [email protected] Markus Sabadello: Hi Markus Sabadello, founder of Danube Tech. 
I’m here because everything I’m doing has to do w/ DIDs - foundational for Web. My favorite programming language is Java. Drummond Reed: DIDs = “new foundation for the Web itself” = true Eric Welton: Hi Eric Welton, I’m here because I like to write about what’s going on here. I like custom programming languages, fit for purpose. Oliver Terbu: Hi Oliver Terbu, with ConsenSys - favorite programming language i C, C++, and now like TypeScript - like implementations for ReactNative, run on mobile backend. I’m here because DIDs are important, foundational for universal identity Kaliya Young: Hi Kaliya, named IdentityWoman - mainly because only woman in the room (and here we are again). I also work w/ Wireline, interested in DIDs in existing world, less interested in how Web uses DIDs - PKI seems to be a focus, curiosity that I bring in terms of that - haven’t registered that much. I’ve been stewarding and leading in this community - thought there might be contentious things in this group, would like to moderate and help get us to other side. Drummond Reed: Yeah Identitywoman! Manu Sporny: This is important work. Drummond Reed: +1 to Kaliya helping facilitate any embroglios Kaliya Young: I don’t have a favorite programming language - if programming languages are about how you manifest into the world and create, as a facilitator - for me, Open Space Technology is something powerful that we’ve used in this community, that would be my favorite. Ganesh Annan: Hi Ganesh Annan, from Digital Bazaar. I like how DIDs affect security and privacy considerations. Javascript is strongest, but really like Python - it’s followed me throughout my career. The versatility of Python is why I love it. David Ezell: Hi David Ezell, represent Conexxus, we do standards in convenience retail. You have to have respect for C, Java, Javascript is getting better. Microsoft Macro Assembler was powerful. There is a property of decentralized technology that is important in the convenience store space. … If you think of some of the specs we write, a central authority is “good enough”… problem is that’s a self selecting group, some thing centralized is fine, others don’t participate. We are interesetd in decentralized technologies in general. We have a project underway with National Association of Convenience Stores that starts w/ age verification. Why is it important to do this at W3C? Same reason that it was important that it was important to do XML at W3C. … I’m here to offer that support. Justin Richer: Hi Justin Richer, here from Bespoke - here on behalf of SecureKey - been doing a lot of work w/ them. Favorite programming language… did quite a bit in LambdaMoo - use Java the most, Python is great - list comprehensions are beautiful language constructions. … I’m here because I want these technologies to succeed so I have more diverse tools to apply to projects. I have a lot of experience in standards space, like to bridge engineers w/ architects - I move between those spaces. Juan Caballero: Hi Juan, work with Carsten at Spherity. I like C++, Python - don’t have strong opinions. eugene: Hi Eugene, work with Jolocom - in Berlin. We work with Verifiable Credentials, excited to see where this space is going. As implementers, we want to be aware of discussions. I’m curious about some of the discussions we’re about to have today. I am a victim of Javascript/Typescript - enjoy Rust, Go - don’t know them enough to say I like them. charles: Hi Charles from Jolocom, easier to get a feeling for things in person. 
I like TypeScript - Typescript is easy. joachim: Hi Joachim, here from Jolocom, learning about different perspectives on topic of DID. Looking forward to learn more. Daniel Burnett: Hi, Dan Burnett from ConsenSys - got involved before DIDs… walking by when Manu was presenting Verifiable Credentials, stopped and listened, and thought this is the right way to approach identity. Takes focus off of consolidation, definition, I believe that Verifiable Credentials and Decentralized Identifiers, not identity, are two technologies that can help solve these problems in a way that avoids consolidation. Brent Zundel: We have next slide, as we discuss things, if there is something you want to talk about, add it to the slide. 2. Level Setting See slides in the deck. Daniel Burnett: I want to remind us all why we are here. … The mission of the working group is to standardize the DID URI scheme, the DID data model, and (reading the slide) … In scope (slide 12) Daniel Burnett: the item about use cases is to talk about future uses for access management … we will produce a note that is a refined use cases document … the resolution spec is not in scope … out of scope (slide 13) … we will not define browser APIs, specific DID methods, or attempt to solve identity on the web. … not an accident that none of our specs sayt we are solving identity. … any questions … deliverables (slide 14) … some links (slide 15) … summary of our w3c process … (slide 16) … working drafts don’t imply consensus … it does not mean we agree. … there is a point at which we believe it is technically complete, this is when we get to CR. … then we need at least 2 implementations of each feature … then we go to PR … slide 17 (nice process picture) … we also have a timeline … our first public working draft was last year. … we got a use cases and requirements note out … we will talk about the rubric this meeting … we backwards calculated the other dates … (slide 18) has dates for each step in getting to CR and PR … we need multiple CRs (I didn’t say that) … we need our first CR my November 2020. … we need a feature freeze in May 2020 … its easy to think we have time. … we don’t … After CR, there can’t be any substantive changes … all this to say, this is why the chairs will be pushing to feature freeze. … we don’t want to recharter to get our first spec out. … for this meeting: the primary goal is to get agreement on the JSON/JSON-LD/Abstract data model question … we cover background and necessary things day one. … so that tomorrow we can get to a decision … assuming we get that done, we can cover the other goals (slide 19) … there are aspects of the metadata discussion that may be pertinent to our main goal … any questions? Joe Andrieu: metadata is today? Daniel Burnett: this afternoon, so we have it in mind when we make the big decision … for those of you who have slides, get them in … is there anything in the agenda to talk about did resolution and what we’re doing? Joe Andrieu: we should talk about the boundary between the two Daniel Burnett: look at the agenda, if you don’t see a place where your comments will fit, let us know … we’re trying to get knowledge without going into a rathole … any otherr questions? … a story I want to tell … (standing) My technical background is in computer speech recognition. … I had to decide whether to continue with that or to continue with standards. … I jumped into WebRTC with both feet. … it has spawned better work in W3C and IETF … but, something very bad happened … we started working on a standard. 
One company that didn’t participate started promulgating a competing standard. … that did not result in what they wanted. … it resulted in horrendous confusion … companies looking to deploy, came in and saw the battle. … investment dried up. … conflict results in everybody losing … it is important that we come together to write A standard … that’s what matters (even if there are different formats) … we will talk about extensibility and interoperability … if you’re frustrated, talk more with those you disagree with … don’t leave, that will damage the industry … others have chimed in with similar stories … any questions? Eric Welton: I was asked to look at DIDs, because a company I worked for wanted to determine if they had cooled down enough to actually use Christopher Allen: A little more history - co-author of SSL/TLS and led group that made TLS the international standard that it is today. We had these conflicts then - one of the big breakthroughs was - we found where we could have all of the important stuff in common, and then very few things we could go in radical different direction. The design allowed for more, that helped us break through impasse. … We got 80% in common, how you get up to certain part of protocol, then had ciphersuites - you had to specify those, and as separate RFCs. If we tried to just pick a few, it would’ve failed. I think that’s a pattern in standards design - coming up with 20% agreement, then more until 80%. Daniel Burnett: This is the standards process, this is normal. This is completely normal to disagree at beginning, standards process is writing down when you agree, then keep building on that stuff. Often end up w/ what Christopher said. I can provide my input on interop for standards (over a beer). Standards are useful for companies because it reduces the development time. 3. security See slides in the deck. Brent Zundel: Two weeks ago, Dr. Sam Smith provided a deep dive into security concerns wrt. spec. I did my best to summarize, might have oversimplified… summary is that each DID Method has a different solution for verification and control of a DID. Different protocols, infrastructure, this is the big concern. We need to keep this in mind while moving forward. … We are designing an architecture that could have a lot of badness - what other concerns around architecture concerns should we have as we have in mind. Michael Shea: When you say verification of control, what do you mean? Brent Zundel: Well, resolution we need to make that’s secure, code on computer is secure, DID Document stored on ledger, what are ledger guarantees? Is it peer or key did, is it self-certifying? etc. Manu Sporny: implementation complexity is also something … we want a secure system, so we make sure we do things in a limited way … so they can be implemented in the right way. … this limits implementation at the edges. … we don’t know where this tech is going. we are going to enable some flexibility. … this may allow implementors to shoot themselves in the foot Christopher Allen: I’d like to speak for innovation in cryptography and cryptographic systems - especially over last decade and last 2-3 years, we’re going way beyond crypto of today. Most of this stuff is coming from crypto communities… cryptographic circuits, multisignatures, zero knowledge proofs, scriptless scripts - cambrian explosion right now. … JOSE was a disaster when it came out, everything is a disaster when it comes out, but it’s roots are in 1980s single signature technologies - we’re able to go beyond that. 
I’m not saying don’t support more proven types of approaches, but don’t lock out emerging technologies, because we could lose it all if folks go w/ new emerging technologies. Eric Welton: Is the goal for security to try to solve a meta process vs. identify characteristics concerns and address them as rubrics. Joe Andrieu: It’s going to be vital for us to distinguish in conversations and spec, proof of control over DID Document and authenticating as the DID. People conflate those two, different use cases, analyze them differently. DID Control is method specific, two different surface areas - let’s talk about them as different things. Kenneth Ebert: Even though ZKPs are strong in Sovrin ecosystem, we’re upgrading those now - if we lock ourselves in, we will lock out the future and not be able to migrate forward. We need some sort of rubric to test minimum security levels. We do want to shut out insecure systems. Samuel Smith: I’ve been thinking about this problem for a while, one of the things that might be helpful is to step back and think about how we layer security - cryptographic agility, how we address infrastructure differences, if we layer this right, we can get best of both worlds. If we layer it wrong, we’ll bifurcate the trust map and make the rest of history a never ending retreat/fallback positions and security patches. … My biggest concern is that we’re failing badly from a security design perspective - we’ve maximized ability for people to exploit. Ivan Herman: More a question coming from a non-expert… I haven’t seen anything in document for biometrics / FaceID - is this ok, do we want to have bridges for that? What do we do there. Drummond Reed: I wanted to reinforce what sam said, first on extensibility - no way people in room will know right security practices 5 years from now… we need to allow for extensibility and innovation - many of problems on how to deploy securely is something we want to consider, but these are not in scope. Let’s make sure we get layering right, but let’s not inhibit innovation. … Let’s not block innovation. Michael Jones: I wanted to second Chris’ point about cryptographic agility and innovation, we don’t know what algorithms are needed for what use cases in future. We don’t know what’ll become compromised next week. Any long term standard in security must ensure innovation in 20% for systems that update at their own pace. Joe Andrieu: Wanted to respond to Ivan - in part on use cases work - there is a huge amount of concern around privacy and biometrics - there is conflation around DID Method and biometrics - no one has proposed biometrics yet, but design allows for it, but want to move biometrics to the edge, like a phone, don’t put them in DID Document. The ultimate mechanism is cryptographic, keys secured biometrically. Ivan Herman: We may want to put that in the document. Markus Sabadello: The way the spec and DID Method specs, DIDs inherit underlying DID methods - facebook method only as secure as facebook. Is that a feature? Or do we want stronger security properties? Oliver Terbu: We have to limit the decentralized extensibility model to uphold certain amount of security properties… isn’t W3C DID Method registry already a registry for such features? … Imagine you extend DID Method in a way that changes security mechanism for basic DID Method - that’s a problem that needs to be solved, we should have an IANA registry, limit scope of extensibility to certain scope, like authoritative keys. 
Manu Sporny: We could extend DID Method Registry, we’ll talk about that. We do want to limit DID Methods from overriding how security primitives in DID Core work. Brent Zundel: What should we do about these issues? We could require DID Method to follow common practice, how do we layer security, security considerations, bad practices, rubric? Michael Jones: I did something dangerous on flight here and read every word of DID spec, one of the things I read was that there were places where spec imposes requirements on DID Methods. Security considerations, privacy considerations. I agree with Manu’s suggestion that methods shouldn’t change core functionality - impose requirements on DID Methods, even though we’re not doing a protocol spec, data model spec. Kenneth Ebert: We’ll need some sort of rubric to evaluate. … The number of DID Methods is growing, we’re over 40 now, we will need a rubric. Joe Andrieu: I want to endorse the idea of a rubric, we’ve been working on one about decentralization - doesn’t address security, doesn’t support privacy. … We need to clarify security related to key management, and security assuming keys are well managed. Best we can do w/ key management to point to external references… if keys are compromised, everything is off. We should separate those two things. Eric Welton: About layer/layering - what does that mean? Manu Sporny: Not to argue … Concerned with how much we are asking humans to do (bad for security). DID core spec has normative language around DID methods. Humans have to read and understand this in approving DID methods, which may ultimately fail. These rubric docs will be very complex human-readable docs. Headed towards 50-90 items that an expert has to check for each and every DID method. … some have argued decentralized extensibility is bad. My argument somewhat agrees with this, unfortunately. We need to consider this. Yancy Ribbens: The best way to harden a system is to reduce the attack surface - that means we should focus on bare minimum essentials are - segues into thing slike matrix parameters, what are use cases, we should minimize feature sets to helps us be as secure as possible. Samuel Smith: First bullet, make DID Methods follow same thing to establish control authority - maybe we can do a trust spanning layer hour glass approach - standard set of config data, looks like events - we say it’s too late to put the genie back in the bottle - that’s bad for security, but look at ecosystem, if we design DID Method the right way, all other DID Methods will die and we’ll be left w/ 1-2 DID Methods, layered security model - roots of trust, ability for that config to change the establishment mechanism, we get all we need. If you have too much extensibility, you get OpenSSL OpenVPN - too much agility to do something wrong. … If we built ideal DID Method, and that wins in market, that’s great. Markus Sabadello: Building ideal DID Method, with ideal properties - what if other DID methods that were not as good, keep going, like did:web - the point of the decentralization rubric is not to come up with the one best thing. Maybe security rubric could have the same. KERI-based ones are more secure than DNS based ones? Brent Zundel: First one doesn’t seem to be possible… which of these others? Where do we want to go - add things to security considerations. … Do we want to do a rubric to start? Expand test suite to look for specific security considerations? DID Method can algorithmically done? What does the group want to do? 
Christopher Allen: I am concerned that there has to be a common practice - I have a demo of multisig control authority that can’t be implemented - it’s self-sovereign multisig - I have a Tor based connection to a node I control, 3 keys, can do multisig, but not with what we have right now. I don’t think there’ll be a common practice. Manu Sporny: we need to write much of this down into security section, until it gets large enough to deserve its own doc. +1 to joe’s comment about a rubric. Don’t like security rubric, because it needs to be firm guidance. Joe Andrieu: I think we need to do a rubric, it’s going to take a lot longer than people realize, but we can’t provide direct guidance. Ganesh Annan: Establishing common practice, sounds like protocol issue - what can we state about this stuff? Manu Sporny: we need to give guidance, which can’t happen in a rubric Brent Zundel: We can normatively require DID Methods to implement themselves in a certain way. 4. DIDs and IoT See slides in the deck. Samuel Smith: IoT Characteristics - large number of devices, 80 billion by 2025 - by and large, IPv6 - probably see Internet switch over next 5 years… 5G is almost 100% IPv6… rest of world using IPv6. … This is a driving function, there will be way more IoT devices, need identifiers, than everything that we do combined, these devices are limited resources - example, IETF CoAP - UDP alternative, DTLS/CBOR solution - two major groups, industrial Internet of Things - commercial vs. home. … Differences between two are significant - many of these devices are industrial IoT. This building we’re in HVAC, security systems, IoT things - what tends to happen is integration is happening in the cloud - all device families are siloed, HVAC vendor, window treatments, have non interop devices - only way they can talk together, operationally integrate w/ the cloud - that’s a challenging problem, they can’t talk to each other in same building. … Data integration, not just talking to each other, - future is, there is not enough cloud compute capability to manage those sorts of things, zettabytes - future, has to be done on the edge. Strong driver for DIDs to have ability to authenticate in edge, do operational integration in edge. … Two types of devices - direct IP enabled devices, home automation - indirect IP enabled - some non IP stack, IP gateway device, bluetooth, zigbee, non-IP stack devices might talk to gateway - tends to happen more in commercial IoT makes it cost effective to have a gateway. … In general, authenticated control is more important than confidentiality. … CoAP based system, gateway - diagram explanation … The biggest issue for IoT is security - IoT devices are in public building, someone can pull device, mess with it, may not know. … That is an overarching concern, problem w/ IoT is most devices have default credentials. 90% of installers don’t change default passwords. Christopher Allen: They are no longer going to allow that. Samuel Smith: Some devices only have LAN access, some have unencrypted data, anything we do as identifier standard that requires WAN access, no LAN version, just won’t work. … Edge vs. Cloud integration - future will be edge, devices are not self-sovereign in any way. Edge integration, that would have to change. … IoT provisioning - typical provisioning method, some sort of on device label - limited physical security. … number sticker, scratch off ID, barcodes, QRCodes - anything more complex is going to fail. 
… puts an upper limit to fully support IoT, how complex can provisioning be? Bootstrapping identifier, not going to be adoptable if it’s heavyweight. … If you can do a self-contained boostrap, you can simplify the system. Some devices are using contextual information, looking at nearby devices, won’t authenticate them - different location, most cases, IoT devices have low value, no need for persistent identifier - self-certifying identifiers, you don’t need anything - if something has been hacked, throw it away, reprovision. … Rarely do these things communicate pairwise - get out of lan onto gateway. … Future is non-siloed edge integration, sensors with authorized telemetry, actuators… there is already an alternative standards tack IETF/TCG - Device ID - Implicit Identifier. Implicit identity, self certifying id created on first power up. This one is private, manufacturer doesn’t know private key, on power up, generates key on boot up. Any tampering creates new DID. … This is a CBOR/JSON CWT spec. … IoT Interop - oBIX, XML based legacy 1990s… … numerous walled garden standards, numerous consortia - in Europe, IoT EPI - big tent approach, mostly architecture, not implementation, … Project Haystacks getting the most adoption, uses tag model - key-value pair w/ registered set of tags, simple model, easy portability - broad adoption - winning over industry consortia … Interesting thing - Haystacks achieved broad industry adoption via tagged semantic model, brix project, haystacks tag ontology as overly to enable some RDF models to sit on top of haystacks. Doesn’t encumber underlying encodings while allowing any RDF standard encoding to apply to this model. … That’s the end… … Are we going to integrate w/ IoT? Constrained encodings, CBOR… Manu Sporny: great info, thanks. we are clearly missing this big sector. … how do we liaise with them. Have we missed the boat? … sounds compatible with did:key. Samuel Smith: yes. Manu Sporny: we need to reach out and share that. … w3c has web of things working group that is connected here. Samuel Smith: haystack tag ontology group is using W3C stuff. … they do an overlay using W3C standards Manu Sporny: WoT uses JSON-LD Manu Sporny: need to see what they needed in WoT to make everything work Samuel Smith: defined dictionary with meta-semantic model … for large factory automation projects, worth overhead to map to semantic model for benefit of data analytics … in most cases simple tags are enough. … for big buildings with thousands of devices the semantic overlay is helpful David Ezell: Second the shout out to Web of Things - deeply embedded in company projects, security meeting is 8am ET - rough hour, think this group should engage with them. This aspect is important part of DIDs. Was present on PSIG review of PR for Web of Things - security folks had two issues, one was the presence of “id:” in thing description, could be subject to replay attack, or recognition of endpoint if you didn’t know it was… Security section in document, security section is non-normative, was a big issue in their work. Joe Andrieu: Has anyone looked at Filament? Jeremy Miller, XMPP? Samuel Smith: Looked at filament, lots of industry specific protocols, Nest guys (filament) – Jeremy Miller, Telehash used for IoT stuff. 
Hundreds of them Manu Sporny: Fun fact: earliest DID method we know of was implemented using Telehash :) Christopher Allen: There is a great connection in crypto security industry around problems w/ security - verification w/ hardware crypto, as an example, box we did w/ Blockchain commons - paper display on one side, you can route to my mac via Onion, key ops w/ QRCodes - also do this w/ VPS, keys are completely split - device can’t sign anything by itself. The thing it talks to contributes signature - VC is only valid once it reaches threshold of attestment. Multisignature - VC proof would say you have to have at least two of 10 instruments for it to be valid… and approval of the cloud device. … These are the things we’re doing in cryptocurrency space, I want to see this stuff - there is a mesh there w/ IoT, but we’re doing more sophisticated crypto. 5. Encodings See slides in the deck. Markus Sabadello: Not going to go into the hard details about how interop and everything works. … we want to look at some of the formats and syntaxes that have been discussed … found nine : JON-LD, JSON, IPLD, CBOR, XML, PDF, ASN.1, YAML, XDI … please add to slide if you know of others (but don’t delete) … Let’s start with JSON-LD … Primary format in the spec. Been there since the beginning … based on semantic web principles and RDF graph model … linked data / RDF has a background connected to decentralization on the web. … Ivan’s paper: the web is about linking. linked data is trying to do the same thing for data. … Some may remember FOAF: where you can point to other identifiers who are my friends … for these JSON-LD is great. permissionless extensibility. compatibility with a bunch of things … Also a lot of interest in JSON (pure JSON) … ubiquitous support, familiar to developers. no external network dependencies, compatibility with a bunch of things (JOSE, OIDC, DIDComm) … Another is IPLD … content-addressable, distributed storage, location- & protocol- independent … censorship resistant … CBOR has also been mentioned. Also used in IPLD … very compact. great for IoT. easy to map to JSON … might be possible to do JSON-LD in CBOR … but also maybe pure JSON in CBOR … So what do we mean by data model, syntax, serialization, etc. … XML: obviously the best for everything … good for data-types, namespaces, lots of tooling … PDF has been proposed. Express a DID Document as PDF. … not so much as human readable PDF, but rather embedding a DID Document in a machine readable form inside a DID Document … These are some of the formats, some of the discussions we have had in the community on this topic … We can’t have this discussion in isolation. We can’t just compare these two directly. … There are dependencies on the intentions for DIDs … What’s the purpose of the DID Document? … Did document describes the subject -> JSON-LD preference … if DID Document contains meta-data for interacting with the subject -> JSON Preference … Similarly, do we think DIDs are part of the web? … are DID Documents resources on the web -> JSON-LD Preference … if DID Documents are more like DNS records -> JSON preference … -> likely affinity between world view and technical approach … Some DID Methods don’t care the format of the DID Document. … But for example, did:sov doesn’t have native support for any of the proposals. they store it on its own format. 
… the ethereum smart contract based DIDs tend to treat the Document as virtual, serializing as necessary … did:key is currently JSON-LD, but it is so simple and constrained, you may not actually be using any of the features of JSON-LD and might be better suited for “pure JSON” … other dependencies include support for public key formats … We just added JWK support, but do we really want a JWK public key inside an XML or CBOR DID Document? … How would we do that? It would be more natural to do everything in the chosen format … Fragments in DID URLs: dereferencing fragments is not defined by the DID core spec, but rather on the mime-type … so if we have multiple encodings, that might change the semantics fragment identifiers … Let’s discuss: interop, extensibility, resolver behavior Manu Sporny: two things … One. I don’t know if people knew to the group. VC spec and DID core spec, the goal was to use a subset between all these formats. … so, we strove to use the minimal subset you can represent in JSON, CBOR, and JSON-LD … so, even those the VC spec says it’s JSON-LD, it is actually a very limited subset … So, we don’t necessarily have network dependencies (because we are using a subset of JSON-LD) … specifically, one should not go out to the network in run-time … for things you don’t understand … On the other hand, in some cases, you go out to the network regardless of format Daniel Burnett: still a good portion of the world that uses XML … it’s not crazy to assume that maybe there will be an XML representation … Second, when we started, the DID Document was a physical representation, but it is not required that it ever be stored anywhere. … this is an explicit requirement: the document is not the thing, it’s the information in the document that matters Joe Andrieu: Markus, you had suggested that if the metal model for a DID Document is how to interact with the subject, you might go to JSON. I’m flipped, I don’t see that at all. … real quick. Markus suggested the mental model for a DID doc is how to interact with the subject, the format is JSON. I’m not seeing that … Don’t think I need to spend more time on that, but I don’t see the affinity. Tobias Looker: If we make all of these encodings first class options, then we have content negotiation as a huge upfront requirement for everyone using the tech … but we need to know what level we are supporting these alternatives … What is a DID Document? A lot of time… is it a rendering of events into the current state? Or is it the actual thing the decentralized system actually represents Oliver Terbu: re: fragment semantics: we are writing specifications for the did data model and did -resolution and it doesn’t seem to be a problem Samuel Smith: I think there is still confusion about what the purpose of a DID Document is. … the first thing you need to do is establish control authority? … the way methods are being written its not clear where we are on this question Justin Richer: +1 to sam’s point on information models Samuel Smith: I’m not talking about the data model, but the information model … if you think about what happens with a URL: before you go to a given server, you have to resolve the authority … Are we going to standardize that establishment of control authority Christopher Allen: some people already touched on my issues Kenneth Ebert: concur with Markus: there’s a difference between our internal representations and what are present to the outside. 
… we expect to be able to publish multiple serializations for interoperability Christopher Allen: One of the problems I’m seeing is that we are not talking about who the audience is for the DID document … If I’m a consumer of the app, I really don’t care how rotation happens. I’ll defer that to a resolver. … however if I am writing a resolver, then I do care about those things … I’m a little worried we aren’t thinking about these differences Justin Richer: +1 to that … we need to be honest about what the DID Document structure is … we need to be very careful about what is expressed AND expressible in a DID Document … that gets to the core of what I see in the debate … What I’m seeing is “use JSON-LD” but turn off all the stuff that makes JSON-LD special … so can we count on that extensibility? or Should we assume it is turned off? … Or on the JSON side, can we be sure we only have strings, numbers, etc.? … Given Markus’s big list is great. (NO to ASN.1 but glad its considered) … we need to get to the lowest common denominator … if this working group wants to be able to depend on LD structures to understand that a field is a particular type, etc., a date … instead we have JSON-LD docs that aren’t really JSON-LD … and vice-versa … [I mean JSON docs that aren’t really JSON] 6. Different Encodings: Model Incompatibilities See slides in the deck. Manu Sporny: Things to be aware of as we go into the discussion. … Different encodings: Model incompatibilities … Four categories: primitives, structure, canonical form, extensibility … [nice concise definitions of those four] … A bunch of primitives that are or are not supported in different models … Integers as keys (including negatives), BigNums, Byte Strings, ordered sets and unordered sets … Does order matter? Where? When? … tagged values and tagged types. “This is an IP address” … number representations in different platforms matter. … JSON states numbers can be infinite, but reality is different. Typically 32 or 64 bit, depending on platforms … In short: this is not a simple choice … Data Model structures: three kinds … 1 Flat Structure (flat name-value pairs) … 2 Tree structure (most formats allow this). … but the question is does the structure imply anything semantically. Typically yes, but what? … Do certain names mean something? “id” in JSON-LD does. In JSON it doesn’t. … Can you have arbitrary references to other entries in the document? … Because of that referencebility wrt public keys, we are almost certainly using a graph structure not a tree structure … 3. Graph structure? Jonathan Holt: technically, CBOR does specify a ‘canonical form’ in the RFC see: Manu Sporny: Canonical Forms … CBOR does not have a canonical form … One has to be defined by the protocol. CBOR doesn’t haven’t it implicitly. Rather the spec gives guidance on how to do it. … JSON does not have a canonical form, but there are schemes for doing it (JCS), but has corner cases. … JSON-LD does have a canonical form … that form is not a standard yet, but there is work happening in that direction. … if we care about digital signatures, then we care about canonical forms … Extensibility: … If we use @context from JSON-LD and the JSON encoding doesn’t, then we have two completely different models for extensibility … If JSON-Only uses registry, do JSON-LD implementations have to pay attention to it? … How do you know if your implementation is up to date with the current registry? 
… This comes down to: do we want a canonical form, how do we sign it without introducing security problems … buckets! … That means put extensions in a particular property, but then there is a question about what others should do if they find a bucket they don’t understand. … these are not the only things … what else are there? Michael Jones: Two responses … in passing, Manu, you promulgated the assumption that canonicalization is necessary to sign. It is not. … Canonicalization is one way to do it. But JWT signs without canonicalization. … so we don’t need to deal with that hairball Manu Sporny: We have a mismatch of definitions… I agree with Mike that you can do it. Michael Jones: Different point: CBOR v JSON … Occam’s razor suggests that we make the CBOR one exactly the JSON representation applying a standard translation to CBOR and not doing CBOR specific things Jonathan Holt: just to plug IPLD…. pass Justin Richer: -1 to version fields Oliver Terbu: staying up to date with JSON & registries is a hard problem to solve. but we could maybe deal with that with a version field Christopher Allen: first, to respond to you have to have the entire JSON thing along with it. … that means that signature chains mean an explosion of sizes. so JSON not so great for multi-sig … I’m one of those people who likes JSON-LD for a different reason. … A lot of us came to JSON-LD because of the open world model and graphs, into JSON-LD from RDF … I came into JSON-LD because I can do a merkle-tree of the quads. I can do that with JSON-LD, but not with JSON. … I have a bunch of statements in JSON-LD, with each independently provable. … It’s the simplest form of data minimization you can do. In contrast to issuing separate claims for every distinct assertion. … You just can’t do that in some of these other stacks. … We could separate from the JSON-LD thing from the canonicalization. … Just give me a merkle-tree. and I’ll verify it Samuel Smith: There are lots of standards that use much simpler data models and they work practically, and well … there is a sense that if we don’t have the ideal level of extensibility and all of these features, then we have the impression that will be a problem. … the real world is less about the ideal and more about practical adoption … sometimes that is more important to us as a standards organization … I spent ten years pushing superior tech that failed in the marketplace … concern is that we tend to err on the side of being ideologues instead of what will work in the marketplace … we should temper our conversations with a sense of adoptability Daniel Burnett: everyone gets to say everything without censorship … for now Markus Sabadello: I liked Oliver’s comments about registries and versions … Perhaps there is some way that a registry and JSON_LD context could be kept in sync Drummond Reed: I want to reinforce something Sam said. … I posted the first issue that we should look at an abstract data model … all the discussion has made me even more passionate than ever … I define that as the core data model. This is what we absolutely have to figure out and agree on. … pure encoding is not the clear differentiator. … They are all actually graph models with different complexity … What we are solving is so low level… if we solve that ONE problem well, DIDs will be enormously successful, with NO extensibility at all … this is not to say extensibility isn’t good, but that we must not fumble the ball Daniel Burnett: lunch is coming up. … I know we haven’t had enough time for discussion. 
We will have more time as we go. … Please keep comments short to keep the conversation flowing. 7. Abstract Data Modeling Options See slides in the deck. Daniel Burnett: So far, we’ve just been talking about potential incompatibilities … So I’m going to talk abstractly about abstract data models … What are we modeling? … Data processing (processing of a DID Doc) … Data Storage … Data transmission … The processing we are talking about is various computer languages. We HAVE to be agnostic there. … For storage, we’ll likely store them in databases, but that’s not really what we are standardizing here … Schema definitions my help. … For transmission, note we don’t have to use the same format as we do for storage … Suggestion is that the transmission format is what we are modeling here … Other than PDF, most of us are talking about transmission. … This matters because it affects how you do modeling … Why do people use JSON? … It basically was an alternative to XML for transmission … I’ve done tones of XML. JSON is effectively a wire format for messages transmission and exchange. … By that, I mean a specific serialization … Alternatives to JSON for serialization of Data: CSV, XML, YAML, CBOR … Argument: we should not be picking one, we should be using an abstract data model that works with any of these serialization … What level are we defining? … Spec was designed to be an abstract (exchange data model) … could definite at different levels … A Conceptual data model define semantic roles and relationship without getting into syntax … typically uses a graphical representation that simplify an overview … Examples: object role model (ORM, very high level) … or Entity Relationship Model(express, UML, IDEFIX) … A structural level presents roles and relationship as fields (sometimes with syntactic requirements) … if we choose a modeling language that also allows us to model ties to procedures is that implementers could then take that model and extend it, using that modeling language to provide the computation support they need. … All of that is out of scope, but if we use such a language, it might help downstream adoption … Bunch of examples of favorite structural modeling languages … Are we modeling a transmission model? … and are we modeling a conceptual or structural model? … suggestion we should probably go to a structural model, with a format that has a graphical representation … it’s just so nice for explaining to people what we are doing. … Example structural formats: EXPRESS, Protobuf, UML, IDEF1X Joe Andrieu: [see slides for details] Daniel Burnett: those are the examples. we don’t have to pick from just one of these. … this was to help us understand what our choices are as we consider abstract data models … This is a UML diagram that was complete and correct. … Some of the terms have changed, but it was accurate at the time.k … This avoids the bias of any particular syntax. … But we couldn’t get anyone to write a specific syntax EXCEPT JSON-LD, so we got rid of the UML and moved forward with JSON-LD. … We’d love to have this for DIDs, we just need people to step up and produce them. David Ezell: I’m wondering why JSON Schema didn’t make the list. … it is the underlying spec for [something] . I’m a UML fan, but the value comes out in a tooling … the graphs turn into real code. Ivan Herman: JSON Schema is a moving target. They keep coming up with new versions. Stability is a problem. … we have had discussions with the JSON Schema people. They keep saying “sure a real standard would be great”. 
… We have editors who have to create documents. Which are the ones that have good tools for editors? Justin Richer: big +1 to transmission and structural format coverage. There are assumptions that get glazed over … if we really are an abstract data format, then our data format need to actual exist and be serializable … what we have been doing a bad job at is realizing what those costs and weights are for other people … We often look at these as how easy it is for us in the group, without awareness of what it means for others. Drummond Reed: as one who is passionate about abstract data models, I will volunteer, if we do UML Manu Sporny: Agree that we are doing a transmission and structural format … I had thought we were ALREADY doing a data model spec. I don’t think there is much debate to be had at this level. … We can argue details, but there doesn’t seem to be anything controversial here. Daniel Burnett: right, but there are lots of people who may not have realized that we are making an abstract data model and what that means to our collaboration David Ezell: I’d like to reference the XMLInfoset spec:. It’s purpose was to decouple XML syntax from the information content. Probably worth a look. Manu Sporny: the diagrams in the VC spec were removed because of feedback from implementers Daniel Burnett: yes. I’m not saying we should have ONLY visual representations … we are chartered to define a data model AND one or more specific data realizations … LUNCH! Brent Zundel: Zoom Link for audio: Brent Zundel: Who’s going to dinner? … 21 people. 8. DID Doc Extensibility via Registries See slides in the deck. Michael Jones: What to registries accomplish? … They prevent name collisions. … And provide authoritative links to where to find definitions. … Identifiers can use names with the same meaning. Juan Caballero: Michael Jones: Samples are IANA web token claim registry. Juan Caballero: (JWA spec mentioned in ref to prev slide) Michael Jones: See … (Explain sample) … Claims for specific use cases are included. … Specifications are required to add to IANA JSON Web Token Claims pending expert review. … Second example: … How to add to a registry? … This is a decentralized method of publishing definitions. … The DID spec prohibits redefinition of terms. … Registries can help us avoid conflicts. Justin Richer: An important note when looking at the suite of JOSE fields. It creates a source for enumeration of fields. … I think this is strongly needed so that implementors can follow the directions in the documentation. … The registry tells the implementor what they have to do. … When I see that field, I know what library to point at it. Justin Richer: to clarify, I believe we really need this for DID methods Kenneth Ebert: self-issued: Claiming there is no registry does a disservice to implementors. Justin Richer: From a normative standpoint. … I think that is a mistake. … I filed an issue on that topic. Justin Richer: I want to point out to everyone that this mapping is explicitly in the WG charter Justin Richer: Especially with identifiers that are not human readable. … A more normal practice is to use human readable names. Manu Sporny: Depending on the definition of decentralized, registries have high process when there is disagreement. … Disagreement can be legit or political. Kenneth Ebert: self-issued: Fair enough. Usually there are directions to the reviewers to guide their actions. Markus Sabadello: Mike, do you think the registry approach will work with high volumes of additions? 
… Right now these are handled by JSON-LD definitions. … With 1000s being added I wonder if a registry can handle it. … Would it make sense for service types? Daniel Burnett: So there is a bar to entry for adding to a registry - not necessarily a bad thing Kenneth Ebert: self-issued: I would hope that service types would be registered. If not, how do implementors find out what the interoperability is supposed to be? Markus Sabadello: The registry says these are the service definitions that digital-bazaar created, or ChristopherA created. Eric Welton: registries require explicit, intentional interoperability - but an open-world model will allow opportunistic, emergent interoperability Drummond Reed: I put the original text in. Sorry … I never expected we would have over 40 methods. … It has grown beyond expectations. … Point 2 Registries are a method, but not the only method. … The core features are in the spec. The extensions are not. … Both methods can work together. Samuel Smith: We have an informal method name. The rest can be defined within the method spec. … The model can name space to avoid collisions. … Namespaces are good; we should have more of them! Kenneth Ebert: self-issued: RFC7515 Samuel Smith: Search for collision. … JWS spec discussion of names. Public names are registered. … Collisions resistant names use some natural prefixes to prevent collision. … Using my domain name as the prefix helps me define my own “namespace” … There are other collision resistant names that are longer and not registered. Jonathan Holt: as an aside, OIDs are a valid URIs Samuel Smith: You lose the definition if it is not registered. Christopher Allen: An early proposal involved a whole block chain to register all the names. … It was too complicated. … A five level staging system involved documentation, implementation, live-system, etc. … It was censorship-resistant. Justin Richer: +1 to the levels and measures, this is what the normative teeth would apply to Christopher Allen: I suspect that only a few will make it all the way through. … It’s probably out of scope for this group to set up all and run a registry long-term. Joe Andrieu: With an eye to compromise, I think there is a difference between method and property name spaces. … They should be treated differently. Justin Richer: +1 to them being separable, though I’d add they might have the same solution Christopher Allen: provisional, conformance/test available, reference implementation available, live test available, poc available, in production, deprecated 9. DID Doc Extensibility using JSON-LD See slides in the deck. Manu Sporny: Why might we want to do this? … Add method-specific properties. … Add new service types. … Add cryptographic methods. … Or merge data with Verifiable Credentials. … To extend a DID Doc with JSON-LD you define a vocabulary, create a context, and append it to an @context property. … A simple single feature would take about an hour. … Using the new context is very simple. … There is no formal mechanism, unlike a registry. … For JSON-only developers, you need to update your schema and validation. … Security may require some restrictions to JSON-LD extensions. … Decentralized extensibility can reduce peer reviews. … Sometimes JSON-LD is too complex for a use case. … Benefits include reduced roadblocks from the workgroup or registry. … Property conflicts are eliminated. … This method of extensibility is compatible with VC data model. … That’s it. Phil Archer: I’m with GS1. 
… A benefit is that it prevents the reinvention of the wheel. … We have a bunch of service types. … I’d like the existent context we have to be useful to others. Jonathan Holt: I am concerned about cross-domain ontologies. … Having a single domain be the authoritative source for the context is a potential security issue. Christopher Allen: Both DID docs and claims are about the subject. … We had our DID doc be a VC that was self-signed. … A second signature was base on a satoshi’s signature. … Extensibility could be provided by VCs. Joe Andrieu: The method name space should be considered separately from the property name space. … We will have differences of meaning regarding a property with the same name. … I want less in the did document. … I don’t want all the service endpoints in the did doc. … If we can reduce the amount of data in the did doc, fewer extensions will be needed. Markus Sabadello: Earlier we wanted to avoid registering did methods. … It brings us back to the definition of the purpose of the did doc. … Is it a minimal document for the did and small service endpoint … Or is it a represent of “me” more completely? … What happens when the registry potentially gets compromised? … Compromising the semantic meaning of properties is also a risk. Eric Welton: What can the did doc represent? … We mostly agree on the core fields, but what about the fields beyond that? … We can give guidance on how extensibility works. … A set of VCs might be a better way to extend the data in the did doc? … This group has very few women or Asians in it and will reflect our biases. … I am more in favor of the open world model. Manu Sporny: Responding to “http” for contexts. There are other methods. … re: “I’m concerned about centralization of semantic meaning.” Some thought (20 years) has gone into this. Juan Caballero: Manu Sporny: Schema.org is an example. … You can disagree and put up your own definition. Jonathan Holt: I am struck by the one source of truth and reliance on http protocols. … A centralized model vs. a decentralized model. Drummond Reed: I love the “turtles all the way down” analogy - we always need to keep that in mind with DID documents Brent Zundel: Can we discuss the remaining queue after break? 10. continued Extensibility discussion See slides in the deck. 
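To make Manu's walkthrough above concrete, a DID document using the context-extension mechanism would look roughly like this. The second context URL and the ExamplePhotoService type are hypothetical placeholders, and the exact base context URL varies between spec drafts:

{
  "@context": [
    "https://www.w3.org/ns/did/v1",
    "https://vocab.example.com/photos/v1"
  ],
  "id": "did:example:123456789abcdefghi",
  "service": [{
    "id": "did:example:123456789abcdefghi#photos",
    "type": "ExamplePhotoService",
    "serviceEndpoint": "https://photos.example.com/"
  }]
}

A JSON-only consumer would either ignore the extra context entry or check the new service type against whatever registry or schema it trusts, which is exactly the trade-off the continued discussion below keeps circling.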
Ivan Herman: look at dbpedia as the machine readable version of all the media wikipedia has put together Ivan Herman: if I’m on the linked data world, my data can be combined and put into dbpedia … there are a large number of these databases … public domain etc … you can think of it in smaller terms … some vc credentials and all the way of combining data in one credential is what it’s all about … if we have that, and this is a use case for us, then json-ld can be used for that … we need to decide on the use-cases Michael Jones: does combine mean both from my databases into another Ivan Herman: imagine this is one big database and you have a query with all the details … just saying that this is a possible use case Samuel Smith: my suggestion is that we two types of docs that we discuss extensibility for … this is from the presentation I gave to the wg a week ago … there are two activities that happened when accessing a did … there are two roots of trust … cryptographic and infrastructure … infrastucture is a blockchain … and you’re dependant on the distributed concensus … cryptographic is one that’s signed … it’s a self-cert … some people provide proof in a DID doc … that type of establishment doesn’t need any extensive amount of extensibility … once you’ve established it … they are no longer a security vector … common authorizations can have a line drawn … can draw at establish auth … or above service authorization … arguments for open world or infinite become valid in that case … I think that’s the problem is that we need to make separation of concerns Michael Jones: people seem concerned about having a fixed set of people deciding whats part of the record … this wg is part of such a group … we seem to be ok doing that Manu Sporny: I think that before we had a discussion about a layer that’s pre-did document … I think the part you’re concerned about is what happens before the did doc is crated … I think we should seperate them … bu I don’t know if this group should separate them … I want to point out that its under the extensibility discussion Samuel Smith: some extensibility could go into that document … then you auth an encryption … if you look at in terms of iot… Manu Sporny: there are a lot of other uses cases that come up … veres one stores did docs … the ledger is built around the mechanism Yancy Ribbens: mechanism Manu Sporny: we merge data sources on a regular basis … there are many times when json-ld is useful … we also have use cases where some did docs refer to other did docs … in some cases they are associated … there are use cases that are not yours that are feeding into the desire to have more complex use cases Markus Sabadello: json-ld context is a type of registry … that’s not so different then a registry … it’s just anyone can run there own registry Daniel Burnett: from rfc 3968 … basically there is resolution and de-referencing … slide 93 … in the process of doing resolution we realized there are these things you need to do … but there’s no requirement that they need to be stored that way … the did doc was just an idea about this stuff … and gradually stuff started to be added … originally this was just a way to give information that was asked for … we may have chained our minds, but this was the original way it worked … that’s why some of us say the format doesn’t matter … in order to understand the de-ref you need to understand a resource Joe Andrieu: I like the separation of authority Yancy Ribbens: .. 
and what does that mean Sam Smith: I think it’s a problem with how identifiers are used Joe Andrieu: I know this will make you said but to answer the question I think the link is an anti-pattern … what I care about is I like json-ld … if you need linked then of course you go jsonld … I need someone to innovated with a service type and a group of white males that get to decide the future … and it needs to be not white males Drummond Reed: slide 93 … consolidates a bunch of things we’ve been talking about … the core is the core … jus like molten is the earth … to add you need to version the spec\ … they don’t need to use them all but cant be changed … next degree is the centralized base … I think we could have one that both namespaces and key type services … the point there is it’s extension through centralized coordination … but we know how to do collision free namespaces … do we have any choice? Manu Sporny: i’m opposed Drummond Reed: there’s a lot of details there and a lot are really good Phil Archer: my career in standards begin with a massive failure … because i was telling people how todo stuff … the one in working is successful … extensibility is what makes your standard useful … it has a layer of validation … we are going to bolt in on to what we already got … because I need extensibility and flexibility Eric Welton: I like the levels that sam drew … there is this idea of what was said that we have methods that have a mechanism and we wan t a core data model that does that … but then markus said is this a did document of myself\ … so everything is forming a nice structure … and it gives a better definition of the trust mechanics … and we need it to be crisp[ … and how can we formalize it into a discussion … so how can we capture and structure that Brent Zundel: merging did docs and vc is bad Tobias Looker: if we look at the presentation, jose outlines a bunch of registries … and created a process to create that core registry over time … and manu drew a process to open claims … and openid tokens can be created with new claims Ganesh Annan: s/did document of myself\/did document of myself/ Ganesh Annan: s/crisp[/crisp/ Tobias Looker: I want to start creating new claims beyond the did spec that will never be created … or Im happy for them to be semantically unambiguous … are we expecting to updated teh core data-=model … and they updated the context in an editor manor Christopher Allen: one of your points is theres a distinction about people that just want the keys … and don’t care about the establishment … vs those that care about rotability … and drummonds thing … one side is in the direction of how to rotate … and the other direction is how to record teh simplest stupidest client Manu Sporny: I like what is hearing … but it’s too much meta … we need to get to some discussion … nothing wrong with the diagram … I’m just using this as an example … so that green there the registry extension … but does the green thing make it impossible? 
… but with the green stuff do you require a JSON-LD context … if that's not true we haven't solved anything … is that open to the JSON-LD folks … it's those kinds of specifics, in my head, before adding to the spec Ivan Herman: first of all, I will answer … mainly with some changes coming up … it will be possible without complication once the working group goes to a maintenance working group … and I know there may be version issues Ivan Herman: but if we don't change the structure it will probably be possible … but what you were saying is interesting … because there is a parallel … there should be a way to have a spec … that has a data exchange model in the document … we have JSON-LD as possible … the JSON-LD would refer only to one single element … if someone wants to make use of the linked data facility … if not, so what … I can see something come out of this by expecting both sides … if it is an informal registry then all bets are off … but something like that might be possible Tobias Looker: if you treat the green ring as the expanding core over time … and others that will never be agreed on … then that sorts everything Juan Caballero: (in Drummond's diagram) on slide 93 Tobias Looker: and you update that context at the center … and because for example … if an OpenID Connect provider doesn't publish a spec … it's just informal, by starting to emit things … so I don't know how it relates Christopher Allen: still haven't figured out why DID stuff needs to be in an endpoint … why am I putting them all in this DAG … a progressive thing to get to the key that they want … they need to prove to me … in Bitcoin we are trying to get to a point where no pub keys are revealed Samuel Smith: they have a combination that uses a tag-based registry … and it's a semantic overlay … and you get the best of both worlds 11. Metadata See slides in the deck.
Ganesh Annan: I had the pleasure of reading the long issue … as well as the rfc logs … slide 95 is background … just want to say dan you’ve touch on the things i’m talking about … see slide 97 … slide 98 did doc explained … the way I look at that is theres an object and graph model that relates to that did … slide 99 … it is the output of calling DID resolve on some did … and we did that to give full intention to that did method … slide 100 … we can not make assumptions of how its rooted in a database … a did doc is not a file in a file system … so when think about did document created … slide 102 … this can be though of as the same way as a date header … the date that’s put on the header and response … I think that’s the way we’re supposed the think about it … slide 103 … slide 104 Ivan Herman: I know it will sound like bike shedding … isn’t it time to use something a different term for a DID document … the effect of using the term “did document” leads to the discussion we’re having now Christopher Allen: there is some history … there was a DDO … then we started calling it a DID document … in my world it needs to be created Oliver Terbu: if you read the date field of http … wouldn’t this break the proof of the did document … if they protected it with the proof Drummond Reed: I want to make the point that you said anything in the did doc must describe the subject … there’s information about the did subject … should we separate that … create date and update date … are we talking about when the tea pot was made and broken or not … how do we separate that Joe Andrieu: couple things … need bear witness … it could be a bout a group which I’m a member … I don’t like arbitrarily sucking everything up because i’m not my did … is the controller in charge? … then that’ metadata … the reason I said on that update date … if a controller wants to update the date and it took 3 days for bitcoin to get the transaction … did the did controller say it Jonathan Holt: +1 to Joe Markus Sabadello: I wanted to say something similar … in the resolution spec … in the resolution there is the resolution result … to me whats in the did doc can be different for each resolution process … the metadata about the person may be different … changes only when the controller updates it … the second thing i wanted to say about created … I disagree with what gannon said … I think the did document is create when the did is created … it may not be stored as a file … and the resolver retrieves it and constructs it Kenneth Ebert: +1 to markus’ comments Jonathan Holt: … I think time is relative … the time may be relevant … was an issue with the classic … by jumping ahead in the timestamp … and that should be considered Manu Sporny: wanted to +1 drummond … there are two things were talking about … I think it’s interesting that joe and I disagree … I don’t think it has any affect on the decision … and I think markus is write that your framing of whats a crated date is spot on … that did doc that this group is creating … it has an identifier … that I think by and large has information asserted by the did … and it becomes obvious … markus example is outside since the controller didn’t make that assertion … this is info asserted by the controller … and there is a clean break about what’s data and metadata Justin Richer: what I observe here … is that people want to make statements about two things … the document and info that got the document here … and we don’t have a clear way to express that … and people are getting upset about that … 
like the created_at Kenneth Ebert: +1 to Justin_R’s categorization of the data/metadata Justin Richer: this is something that is something that needs to fit with the resolution process … we might be able to define things about the did doc … and however that comes back is up to the method … and theres’ lots with response headers Phil Archer: I like the thing about how the did doc isn’t real … and so I wonder if we can state in the spec … and say it’s not appropriate … and if you need software that depends on that it’s wrong … nobody has asked us about the did doc unless you ask you get redirected to the service … nobody asked about metadata about that … I think this idea that the did doc that only treats it in that way Daniel Burnett: just going to say … wonder if the did subject is the uri resource … keep goin back to rfc 3986 … the resource doesn’t need to be real … so did subject and resource doesn’t need to be the same thing … I like the idea of being clearer about separating process … from information … was looking a did info a nd did presentation … clearly there are different levels Drummond Reed: first point I want to respond to dan … we do say in the spec the did identifies the did subject … bucket 1 … not sure how the did subject could … there’s a metapoint about identifying the did subject Daniel Burnett: a did has a resource and that could be anything Justin Richer: to clarify my statement from earlier for the notes, I actually mentioned three things: data in the document, data about the document, and data about how the document got there Drummond Reed: I want to address what just says by saying I agree with him … metadata describes data … its really just a graph … you put a node in the graph of the document … and under it we put al metadata about the document … its not rocket science Phil Archer: +1 to drummond Drummond Reed: we’re not even sure we’re talking about the did subject Samuel Smith: q Markus Sabadello: with regard to the resource … . sometimes we treat the did as an identifier … and sometimes we treat as a subject. … final thing I wanted to say is that I agree that it’s inline with architecture tor return meta-data … I just don’t like did resolution response Yancy Ribbens: .. which sounds like client server protocol Ganesh Annan: +1 to DID Resolution Result Jonathan Holt: Time in blockchains is relative: i.e. The timestampof bitcoin block #180966 is 2012–05–20 23:02:53 Jonathan Holt: The timestampof bitcoin block #180967 is 2012–05–20 23:02:13. Oskar van Deventer: some of this is confusing … what is the purpose fo a did document? Justin Richer: a lot of people are already deploying … and coming in to the group we realize there are no definitions … I think there are 3 different resources … the uri as a subject … once you treat as url … you’re not getting back a subject … if theres a service end-point, you only get back what’s serialized and returned Tobias Looker: if you put a resolver into jus a uri … is that did itself considered a URL? … which in my mind is an assertions about the subject by the did controller Joe Andrieu: I think you should get back the whole document Daniel Burnett: you may not agree with me … want to talk about making it rain … and if someone else asks what it identifies … it identifies a subject … the did subject might be the sku … so what is the resource that I’m manipulating? 
… there might be a did for the sku … I would draw a distinction between the did subject and the resource … if the did subject is me, and you want to call me it’s dan … I don’t have a phone in my brain … and youl call some number that i’m going to answer Brent Zundel: we are having trouble defining the metadata about the data … we need to figure that out before we talk about metadata … we need to come to some agreement Daniel Burnett: maybe look at items in DID doc and list where they come from - Drummond Reed: id love to have time to draw another diagram … there are two dots Daniel Burnett: asserted by DID controller, resolution process, etc. Drummond Reed: the did is the label on that arc … the did subject is whatever that dot is on that arrow … and for any controller they define what that dot is … and if dan says its pointing at the sky, it’s whatever dan think the sky is … in someway it gives us the ability to describe what we think something is … the first dot is the did controller … there dot one … point two is JoeA as usual is right … I want to clarify what he’s correct about … when markus and I collaborated … and if we get to it we have a proposal of how to clarify that … the did document is never the subject … from a web representation, because it’s metadata, we have this thing markus pointed out … it’s a resource on the web … the DID itself is a uri and not a URL … if you want to identify the did document you add one character. or proposal is to add forward slash Eric Welton: +1 to all that … the thought I had was the did could be anything … and if I create a document outside of did land that describes the seating chart, that’s how it will be in the did document … I may want you to add information about it … and I don’t need heavy crypto … and I say if I created a did … I would have control over that and I don’t need to establish authority over that … it’s more like a uuid … a lightweight did Manu Sporny: I’m concerned this is http range 14 … http range 14 is a discussion ongoing for 20 years … there is no correct answer and thousands of engineers haven’t found an answer Phil Archer: See HTTP Range 14 in Wikipedia Drummond Reed: the Wikipedia page on HTTPRange14 is highly recommended Jonathan Holt: issue #14. Manu Sporny: the way you escape is someone produces some definitions … and if its’ good enough for a few members, then we move on … you should read about it Ivan Herman: if you have a url against a resource, and the resource is a book, what exactly are you referring to? markus sabadello2: Last year we spent several weeks discussing DIDs and httpRange-14 during the DID Resolution calls. 
A summary is here: Joe Andrieu: I think we have a proposal … I think it resolves what we mean … but lets get the proposal … I want to touch on how we instantiate identity as attributes … what clicked for me is there’s a problem around formal vs informal identity … whatever the controller defined may not know what it is yet … I don’t need a legal form of identification Jonathan Holt: “Tim’s argument that HTTP URIs (without “#”) should be understood as referring to documents, not cars” Joe Andrieu: i’m just trying to figure it out Phil Archer: don’t do the slash thing … a query I want the did doc or don’t … especially if it’s just one char Drummond Reed: what we did want to do to address manus point … and we spent 15 years working on the semantic web problem … we hadn’t said there’s a huge case for a type that’s abstract … and realized there is a did … and a definition can evolve over time and that we pull semantic naming over dids … and fantastic we can get to more sophisticated ways about semantics Drummond Reed: if there are additional buckets, we can agree on the metadata Christopher Allen: hopefully dids will never be seen by a user … but another challenge where i’ve dealt with the slash at the end … that will translate to xml … the pattern we’ve seen has been challenging Tobias Looker: was going to make the point about what the difference would be … trying to tease out the details to the developer … and to resolve it you need to add a slash … would not give merit by confusing the developer Ganesh Annan: want to make sure this issue moves forward] … feels like there’s some precursor about what gets put in a did document … and if theres agreement I take input and try to bring it back to the group to resolve Drummond Reed: +1 to Ganesh’s proposal of how to move forward Justin Richer: +1 Markus Sabadello: +1 Samuel Smith: I used a functional description … we have established did documents … then you go to binding of identify and you have circular definition … you first have to verify the did document has not been tampered with … and we don’t have a clear understanding of what a did document is … anything in my locus of control is in my locus of control … any information that I put information in … when I see these discussions about metadata I want to pull my hair out Manu Sporny: - A DID is an identifier that identifies a resource. - Dereferencing the DID gives you a DID Document. - A DID document is the resource that is associated with a decentralized identifier (DID). - A DID document contains information asserted about the DID by the controller of the DID. - Any information that isn’t asserted by the controller of the DID (i.e. metadata about the DID Document) is placed elsewhere. 
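To illustrate the last bullet, the separation being proposed would shape a resolution result roughly like this; the property names follow the draft DID Resolution work and are not settled:

{
  "didDocument": {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi"
  },
  "resolverMetadata": {
    "retrieved": "2020-01-29T17:00:00Z",
    "driverId": "did-example"
  },
  "methodMetadata": {
    "created": "2019-06-01T08:00:00Z",
    "updated": "2019-12-15T12:00:00Z"
  }
}

Everything under didDocument is asserted by the controller; the timestamps and resolver details sit outside it, so a proof over the DID document would not need to cover them.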
Samuel Smith: because we don’t have a way to know if its been tampered with Manu Sporny: not saying this is the right language but we should go through that on github Daniel Burnett: no to number 3 Daniel Burnett: we MUST NOT make the DID document the resource Daniel Burnett: For the minutes, manu confirmed that #2 meant to say “Resolution” instead of “Dereferencing” Markus Sabadello: I wanted to say a few more things about the slash … some times we treat is and identifier for the document … when we use a fragment … some of this is reflected here … the first one makes it sound like and identifier for the subject … 3 sound more like identifier for did document … are we ok with sometimes we use for both … usually theres a very strong reaction … one for subject one for did doc Kenneth Ebert: I think it’s useful to talk about what types … are there other types Drummond Reed: Drummond is NOT okay with conflating the resource that is the DID subject and the resource that is the DID document Daniel Burnett: along the lines of what ganon says … look at them and see where they come from … and what we want is the buckets … start with the buckets and say how does that get there … unless we look att the specifics were never going to reach a decision … I’m with markus … the document is not the resource … its information about the processing … I’m willing to say its about the subject Drummond Reed: Drummond wants to argue passionately that the root of the DID document is the DID subject, and that all direct properties of the root describe the DID subject. To describe any other “bucket”, add another property to the DID document that we define in the spec represents another resource (e.g., the DID document), and then have subproperties of that describe that resource. Drummond Reed: Drummond also reminds himself to talk about a third role in addition to “DID subject” and “DID controller”, which is “key controller” Joe Andrieu: 1 and 3 are wrong … human is not a resource the way the web thinks about resource … so I think the way you use a tool can define that tool … a hammer could be a tool or a weapon … it echos what I said earlier that we don’t have clear distinctions … if you auth it doesn’t mean you control the document Sam Smith: we have a circular definition Drummond Reed: discovered a new extensibility section … want to put in a plea to keep the did subject as just the resource identified by the did … the uri url urn distinction … we need to identify the way things are on the web Daniel Burnett: Disagree STRONGLY with Joe. Don’t confuse the group by suggesting that return of an artifact somehow just makes it the resource. This may be theoretically interesting to say, but it just turns the DID into a normal URL that references a file called the DID doc. 
Drummond Reed: where it gets tricky is they way you want to identify things … then at least we can take a point and say what is identifying is the subject or any other bucket … if you look in the abstract data model, and the root of the tree is a subject … we are going to establish properties … theres a did subject, controller and a third role … someone could auth using data that’s not the subject or controller … and I call it the third role … the key controller Christopher Allen: so I really want to limit the purpose to as minimal as necessary … we use a backslash for one and forward for the other … I get more territory for other stuff of developers putting in PII … you want to do the min for the business purpose … if its a control app that allows cap to work … that very first step is all I want Phil Archer: #4 contains info about the did, and surely it’s about the did subject … we need to be clear about what we’re describing … that’s what range 14 ended up with … and when you resolve that you end up with a 303 … you need to be more explicit about that … I think drummond came up with 3 Joe Andrieu: suggested fix for 4: A DID document contains information asserted by the controller of the DID. separately, we have the issue of limiting the data in the doc Phil Archer: and drummond came up with a 4th Ivan Herman: if I have a did that I established for myself,c and I use it … to make assertions about my name (not only within the DID document) … no that did is only referring to me and never anything else … that identifier will have it’s own life, for example like on the web Manu Sporny: that was the discussion I was hoping for … no chair tossing yet … at least we can have a discussion about that … want to discuss semantic ambiguity … developers always screw it up, they only copy the base. … we have it working well Brent Zundel: this WG is awesome … we’ve talked about a lot of things to do … this last 1.25 hours has made us look at different things. back when we talked about DDOs we looked back and decided that want right … lets get a PR that gets a refinement about what we did
https://www.w3.org/2019/did-wg/Meetings/Minutes/2020-01-29-did
CC-MAIN-2022-27
refinedweb
14,672
64.14
Everything Is a File?

Mach used a port as a basic abstraction for communicating, since it wasn't restricted to UNIX-like behavior. In contrast, HURD aims to be a POSIX-compliant OS. As such, it uses the filesystem as a namespace in which all objects exist. Whenever you open a file in HURD, you get a Mach port to a translator that implements a protocol used for interacting with file-like objects.

This translator mechanism allows some quite neat extensions. Users need no special permissions to mount a filesystem in a directory they own, other than permission to access the underlying storage device. If this device is a physical disk, then the user might need more privileges, but not if it's a regular file (such as an ISO 9660 image), or a remote server. This setup allows users to mount things like SMB, NFS shares, or even SVN repositories as regular filesystems just by mounting them (in HURD terminology, "setting the translator" with the settrans command). HURD users don't run a specific FTP program, for example; they just mount the FTP server as they would any other filesystem.

This idea has been taken up in other operating systems. Projects such as FUSE allow filesystem drivers to be run in userspace on monolithic kernels, and many of these systems now include SMB and FTP drivers in kernelspace (a concept considered the height of bloat when HURD was first conceived).

These translators are not limited to filesystem drivers. The IDE driver, which sits under the filesystem drivers, is another example; it exports device nodes in /dev. Other device drivers can have stacked implementations in the same way. For example, a sound card driver might be set as a translator for /dev/dsp. On top of this, a user might set a translator for ~/mp3 that would play any MP3 data written to the file. A more complex translator might be set for ~/playlist, which would send files linked into that directory to ~/mp3 for playing.

Something similar is possible on most *NIX systems with named pipes, but a HURD translator has two key differences. The first is that a translator can represent a directory, while a pipe can't. The second is that the interface to a translator is a Mach port, and so can be extended trivially to respond to other messages beyond those of the standard filesystem. A similar mechanism is used on other UNIX-like systems with IOCTLs on device nodes; however, these are only available to special files created by the kernel, while HURD translators can be unprivileged processes. Many of these ideas will be familiar to Plan 9 users.
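As a concrete illustration, attaching translators looks roughly like this; the server names and options follow the style of the GNU Hurd documentation and may differ between releases:

settrans -a /ftp: /hurd/hostmux /hurd/ftpfs /    # attach the FTP filesystem behind a host multiplexer
ls /ftp://ftp.gnu.org/gnu/                       # remote FTP sites now appear as ordinary directories

From that point on, any program that can read a directory can browse the FTP site; no FTP-aware client is involved.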
http://www.informit.com/articles/article.aspx?p=1180992&seqNum=3
CC-MAIN-2019-04
refinedweb
468
61.26
CodeGuru Forums > .NET Programming > .NET Framework > ProcessStartInfo.WorkingDirectory Ignored PDA Click to See Complete Forum and Search --> : ProcessStartInfo.WorkingDirectory Ignored buggieboy October 17th, 2008, 02:19 PM I have a previously working program, currently built with VS .NET 2008 with SP1, that starts a process to run a batch file in the context of a particular working directory. This program is now failing on my machine as well as at least one other in our organization. On other machines, it succeeds. I thought I had narrowed the difference between systems down to the fact that the machines on which it did run correctly all seemed to be lacking SP1 for .NET Framework 3.5. However, I then came across a machine on which SP1 was installed, but which still ran the program OK. The problem has to do with the use of the WorkingDirectory property of ProcessStartInfo. Normally, setting this, when UseShellExecute is set to true, will cause the started process to work in the context of the specified directory. The issue I'm seeing is that, on the problem machines, WorkingDirectory seems to be completely ignored. Here are steps for replicating the problem: 1. Compile the following console app, called CallCmd: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.Diagnostics; using System.Threading; namespace CallCmd { class CallCmd { private static string file = "foo.cmd"; private static string folderName = "C:\\foobar\\"; private static Process myProcess = null; static void Main(string[] args) { ProcessStartInfo startInfo = null; startInfo = new ProcessStartInfo("\"" + file + "\""); startInfo.CreateNoWindow = true; /* Note: * When UseShellExecute is true, the WorkingDirectory * property specifies the location of the process that * is being launched by Process.Start(). */ startInfo.UseShellExecute = true; startInfo.WorkingDirectory = folderName; myProcess = new Process(); myProcess.StartInfo = startInfo; // Run the process. myProcess.Start(); } } } The important line in this program is: startInfo.WorkingDirectory = folderName; where folderName is "C:\\foobar\\" in the class definition. 2. Copy the CallCmd.exe to some folder on your path. (Note that it must not be in the folder created in the next step .) I put it in \Windows\System32. 3. Make a directory called "c:\foobar" and create two .cmd files in it. The first is foo.cmd, which contains this: @rem ---------------------------------------------------- @rem foo.cmd @rem ---------------------------------------------------- @echo off echo --------------------------------------------- > out.txt echo Inside of FOO.CMD >> out.txt echo --------------------------------------------- >> out.txt chdir >> out.txt call bar %1 %2 %3 %4 The second is bar.cmd, which contains this: @rem ---------------------------------------------------- @rem bar.cmd @rem ---------------------------------------------------- @echo off echo --------------------------------------------- >> out.txt echo Inside of BAR.CMD >> out.txt echo --------------------------------------------- >> out.txt chdir >> out.txt Note a couple of things about these batch files. First, foo.cmd calls bar.cmd without a path specified, assuming that they are both in the same folder. Secondly, the chdir command, without a path argument, simply prints whatever the command shell thinks is the current working directory. 4. 
Now, at a command prompt, cd to the c:\foobar directory and execute the program "CallCmd", which should be in another folder that is on your path. On machines where the bug occurs, this is the behavior:

- There is NO out.txt file created in the current folder, c:\foobar.
- There IS a file with this name created in the root C:\ folder!
- If you "type c:\out.txt", this is what you will see:

---------------------------------------------
Inside of FOO.CMD
---------------------------------------------
C:\

In other words, the bar.cmd file was not found (because the process invoked in CallCmd.exe has not been able to set the working directory to "c:\foobar", where bar.cmd is located). Furthermore, the actual working folder is for some reason believed to be the root folder, as is proven by the fact that out.txt appears in that folder and the chdir command in foo.cmd wrote "C:\" to the output file.

On a machine without this bug, you will see the expected behavior: out.txt is created in the c:\foobar folder, and bar.cmd is called by foo.cmd, so that out.txt ends up containing this text:

---------------------------------------------
Inside of FOO.CMD
---------------------------------------------
C:\foobar
---------------------------------------------
Inside of BAR.CMD
---------------------------------------------
C:\foobar

So far, I can only say that all the machines I've found to be free of this bug are running .NET 3.5 without SP1 applied. The converse, however, has been shown to not always be the case. (That is, as mentioned above, at least one SP1 machine is able to run the program correctly.) Although I'm not sure why it isn't happening everywhere, I see this as a very serious defect in the framework, since it has the effect of running a process in the wrong directory, which could have very nasty results indeed. Any theories about this would be greatly appreciated.

Arjay - January 8th, 2009, 04:08 PM

I'm not sure this is a framework issue, but rather how the working directory is set in the command shell. I expect that if you were to launch a simple console app directly that simply displays the working directory, it would display properly. Your problem, I believe, is because you are launching a batch file and this info isn't getting passed properly. I suspect you would run into the same issue if you wrote the app in a language outside the framework (such as in C).
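One way to sidestep the ambiguity discussed in this thread is to avoid the ShellExecute path altogether and launch the command interpreter explicitly. The following is a sketch of that idea rather than a verified fix for the behavior reported above; when UseShellExecute is false, WorkingDirectory is documented to become the current directory of the child process, so the batch file and its relative paths resolve against C:\foobar:

using System.Diagnostics;

class CallCmdExplicit
{
    static void Main()
    {
        // Run the batch file through cmd.exe directly instead of via ShellExecute
        var startInfo = new ProcessStartInfo("cmd.exe", "/c foo.cmd")
        {
            WorkingDirectory = @"C:\foobar",
            UseShellExecute = false,   // CreateProcess path: WorkingDirectory sets the child's current directory
            CreateNoWindow = true
        };

        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }
    }
}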
http://forums.codeguru.com/archive/index.php/t-463387.html
crawl-003
refinedweb
889
69.28
Services

A service is just a specific instance of a given class. For example, whenever you access doctrine such as $this->get('doctrine'); in a controller, it implies that you are accessing a service. This service is an instance of the Doctrine EntityManager class, but you never have to create this instance yourself. The code needed to create this entity manager is actually not that simple, since it requires a connection to the database, some other configuration, and so on. Without this service already being defined, you would have to create this instance in your own code, and perhaps repeat this initialization in each controller, making your application messier and harder to maintain. Some of the default services present in Symfony2 are as follows:

- The annotation reader
- Assetic—the asset management library
- The event dispatcher
- The form widgets and form factory
- The Symfony2 Kernel and HttpKernel
- Monolog—the logging library
- The router
- Twig—the templating engine

The Symfony2 framework makes it very easy to create new services. If a controller has started to become quite messy with long code, a good way to refactor it and make it simpler is to move some of the code to services. We have described all these services starting with "the" and a singular noun. This is because most of the time, services will be singleton objects where a single instance is needed.

A geolocation service

In this example, we imagine an application for listing events, which we will call "meetups". The controller first retrieves the current user's IP address, uses it as basic information to determine the user's location, and then displays only meetups within roughly 50 km of that location. Currently, the code is all set up in the controller. As it is, the controller is not actually that long yet: it has a single method and the whole class is around 50 lines of code. However, the code could easily grow out of control once you start to add more: listing only the kinds of meetups that are the user's favorites or the ones they attended the most, and then mixing that information into complex calculations about which meetups might be the most relevant to this specific user. There are many ways to refactor this simple example. The geocoding logic can just be put in a separate method for now, and this will be a good step, but let's plan for the future and move some of the logic to services.
Our current code is as follows: use Geocoder\HttpAdapter\CurlHttpAdapter; use Geocoder\Geocoder; use Geocoder\Provider\FreeGeoIpProvider; public function indexAction() { Initialize our geocoding tools (based on the excellent geocoding library at) using the following code: $adapter = new CurlHttpAdapter(); $geocoder = new Geocoder(); $geocoder->registerProviders(array( new FreeGeoIpProvider($adapter), )); Retrieve our user's IP address using the following code: $ip = $this->get('request')->getClientIp(); // Or use a default one if ($ip == '127.0.0.1') { $ip = '114.247.144.250'; } Get the coordinates and adapt them using the following code so that they are roughly a square of 50 kms on each side: $result = $geocoder->geocode($ip); $lat = $result->getLatitude(); $long = $result->getLongitude(); $lat_max = $lat + 0.25; // (Roughly 25km) $lat_min = $lat - 0.25; $long_max = $long + 0.3; // (Roughly 25km) $long_min = $long - 0.3; Create a query based on all this information using the following code: ([ 'lat_max' => $lat_max, 'lat_min' => $lat_min, 'long_max' => $long_max, 'long_min' => $long_min ]); Retrieve the results and pass them to the template using the following code: $meetups = $qb->getQuery()->execute(); return ['ip' => $ip, 'result' => $result, 'meetups' => $meetups]; } The first thing we want to do is get rid of the geocoding initialization. It would be great to have all of this taken care of automatically and we would just access the geocoder with: $this->get('geocoder');. You can define your services directly in the config.yml file of Symfony under the services key, as follows: services: geocoder: class: Geocoder\Geocoder That is it! We defined a service that can now be accessed in any of our controllers. Our code now looks as follows: // Create the geocoding class $adapter = new \Geocoder\HttpAdapter\CurlHttpAdapter(); $geocoder = $this->get('geocoder'); $geocoder->registerProviders(array( new \Geocoder\Provider\FreeGeoIpProvider($adapter), )); Well, I can see you rolling your eyes, thinking that it is not really helping so far. That's because initializing the geocoder is a bit more complex than just using the new \Geocoder\Geocoder() code. It needs another class to be instantiated and then passed as a parameter to a method. The good news is that we can do all of this in our service definition by modifying it as follows: services: # Defines the adapter class geocoder_adapter: class: Geocoder\HttpAdapter\CurlHttpAdapter public: false # Defines the provider class geocoder_provider: class: Geocoder\Provider\FreeGeoIpProvider public: false # The provider class is passed the adapter as an argument arguments: [@geocoder_adapter] geocoder: class: Geocoder\Geocoder # We call a method on the geocoder after initialization to set up the # right parameters calls: - [registerProviders, [[@geocoder_provider]]] It's a bit longer than this, but it is the code that we never have to write anywhere else ever again. A few things to notice are as follows: - We actually defined three services, as our geocoder requires two other classes to be instantiated. - We used @+service_name to pass a reference to a service as an argument to another service. - We can do more than just defining new Class($argument); we can also call a method on the class after it is instantiated. It is even possible to set properties directly when they are declared as public. - We marked the first two services as private. This means that they won't be accessible in our controllers. 
They can, however, be used by the Dependency Injection Container (DIC) to be injected into other services. Our code now looks as follows: // Retrieve current user's IP address $ip = $this->get('request')->getClientIp(); // Or use a default one if ($ip == '127.0.0.1') { $ip = '114.247.144.250'; } // Find the user's coordinates $result = $this->get('geocoder')->geocode($ip); $lat = $result->getLatitude(); // ... Remaining code is unchanged Here, our controllers are extending the BaseController class, which has access to DIC since it implements the ContainerAware interface. All calls to $this->get('service_name') are proxied to the container that constructs (if needed) and returns the service. Let's go one step further and define our own class that will directly get the user's IP address and return an array of maximum and minimum longitude and latitudes. We will create the following class: namespace Khepin\BookBundle\Geo; use Geocoder\Geocoder; use Symfony\Component\HttpFoundation\Request; class UserLocator { protected $geocoder; protected $user_ip; public function__construct(Geocoder $geocoder, Request $request) { $this->geocoder = $geocoder; $this->user_ip = $request->getClientIp(); if ($this->user_ip == '127.0.0.1') { $this->user_ip = '114.247.144.250'; } } public function getUserGeoBoundaries($precision = 0.3) { // Find the user's coordinates $result = $this->geocoder->geocode($this->user_ip); $lat = $result->getLatitude(); $long = $result->getLongitude(); $lat_max = $lat + 0.25; // (Roughly 25km) $lat_min = $lat - 0.25; $long_max = $long + 0.3; // (Roughly 25km) $long_min = $long - 0.3; return ['lat_max' => $lat_max, 'lat_min' => $lat_min, 'long_max' => $long_max, 'long_min' => $long_min]; } } It takes our geocoder and request variables as arguments, and then does all the heavy work we were doing in the controller at the beginning of the article. Just as we did before, we will define this class as a service, as follows, so that it becomes very easy to access from within the controllers: # config.yml services: #... user_locator: class: Khepin\BookBundle\Geo\UserLocator scope: request arguments: [@geocoder, @request] Notice that we have defined the scope here. The DIC has two scopes by default: container and prototype, to which the framework also adds a third one named request. The following table shows their differences: Now, the advantage is that the service knows everything it needs by itself, but it also becomes unusable in contexts where there are no requests. If we wanted to create a command that gets all users' last-connected IP address and sends them a newsletter of the meetups around them on the weekend, this design would prevent us from using the Khepin\BookBundle\Geo\UserLocator class to do so. As we see, by default, the services are in the container scope, which means they will only be instantiated once and then reused, therefore implementing the singleton pattern. It is also important to note that the DIC does not create all the services immediately, but only on demand. If your code in a different controller never tries to access the user_locator service, then that service and all the other ones it depends on (geocoder, geocoder_provider, and geocoder_adapter) will never be created. Also, remember that the configuration from the config.yml is cached when on a production environment, so there is also little to no overhead in defining these services. 
Our controller looks a lot simpler now and is as follows: $boundaries = $this->get('user_locator')->getUserGeoBoundaries(); // Create our database query ($boundaries); // Retrieve interesting meetups $meetups = $qb->getQuery()->execute(); return ['meetups' => $meetups]; The longest part here is the doctrine query, which we could easily put on the repository class to further simplify our controller. As we just saw, defining and creating services in Symfony2 is fairly easy and inexpensive. We created our own UserLocator class, made it a service, and saw that it can depend on our other services such as @geocoder service. We are not finished with services or the DIC as they are the underlying part of almost everything related to extending Symfony2. Summary In this article, we saw the importance of services and also had a look at the geolocation service. We created a class, made it a service, and saw how it can depend on our other services. Resources for Article: Further resources on this subject: - Developing an Application in Symfony 1.3 (Part 1) [Article] - Developing an Application in Symfony 1.3 (Part 2) [Article] - User Interaction and Email Automation in Symfony 1.3: Part1 [Article]
https://www.packtpub.com/books/content/services
CC-MAIN-2016-36
refinedweb
1,690
50.36
Two classes implement the server application: QuoteServer and QuoteServerThread. A single class implements the client application: QuoteClient. Let's investigate these classes, starting with the class that contains the main method for the server application. Working with a Server-Side Application contains an applet version of the QuoteClient class.

The QuoteServer class, shown here in its entirety, contains a single method: the main method for the quote server application. The main method simply creates a new QuoteServerThread object and starts it:

import java.io.*;

public class QuoteServer {
    public static void main(String[] args) throws IOException {
        new QuoteServerThread().start();
    }
}

The QuoteServerThread class implements the main logic of the quote server.

The QuoteServerThread Class

When created, the QuoteServerThread creates a DatagramSocket on port 4445 (arbitrarily chosen). This is the DatagramSocket through which the server communicates with all of its clients. Remember that certain ports are dedicated to well-known services and you cannot use them. If you specify a port that is in use, the creation of the DatagramSocket will fail. The constructor also opens a BufferedReader on a file named one-liners.txt which contains a list of quotes. Each quote in the file is on a line by itself.

Now for the interesting part of the QuoteServerThread: its run method. The run method overrides run in the Thread class and provides the implementation for the thread. For information about threads, see Defining and Starting a Thread. The run method contains a while loop that continues as long as there are more quotes in the file. During each iteration of the loop, the thread waits for a DatagramPacket to arrive over the DatagramSocket. The packet indicates a request from a client. In response to the client's request, the QuoteServerThread gets a quote from the file, puts it in a DatagramPacket and sends it over the DatagramSocket to the client that asked for it.

Let's look first at the section that receives the requests from clients:

byte[] buf = new byte[256];
DatagramPacket packet = new DatagramPacket(buf, buf.length);
socket.receive(packet);

The first statement creates an array of bytes which is then used to create a DatagramPacket. The DatagramPacket will be used to receive a datagram from the socket because of the constructor used to create it. This constructor requires only two arguments: a byte array that contains client-specific data and the length of the byte array. When constructing a DatagramPacket to receive data, nothing more needs to be specified; the address and port are filled in when a packet arrives.

The next piece of code builds the response:

String dString = null;
if (in == null)
    dString = new Date().toString();
else
    dString = getNextQuote();
buf = dString.getBytes();

If the quote file did not get opened for some reason, then in equals null. If this is the case, the quote server serves up the time of day instead. Otherwise, the quote server gets the next quote from the already opened file. Finally, the code converts the string to an array of bytes.

Now, the run method sends the response to the client over the DatagramSocket with this code:

InetAddress address = packet.getAddress();
int port = packet.getPort();
packet = new DatagramPacket(buf, buf.length, address, port);
socket.send(packet);

The third statement creates a new DatagramPacket object intended for sending a datagram message over the datagram socket. You can tell that the new DatagramPacket is intended for sending because it is constructed with the requester's address and port; the call to send then puts it on its way. When the server has read all the quotes from the quote file, the while loop terminates and the run method cleans up:

socket.close();

The QuoteClient class implements a client application for the QuoteServer. This application sends a request to the QuoteServer, waits for the response, and, when the response is received, displays it to the standard output. Let's look at the code in detail. The QuoteClient class contains one method, the main method for the client application. The top of the main method declares several local variables for its use:

int port;
InetAddress address;
DatagramSocket socket = null;
DatagramPacket packet;
byte[] sendBuf = new byte[256];

First, the main method processes the command-line arguments used to invoke the QuoteClient application:

if (args.length != 1) {
    System.out.println("Usage: java QuoteClient <hostname>");
    return;
}

The QuoteClient application requires one command-line argument: the name of the machine on which the QuoteServer is running. Next, the main method creates a DatagramSocket:

DatagramSocket socket = new DatagramSocket();

The client uses a constructor that does not require a port number. This constructor just binds the DatagramSocket to any available local port. Next, the program sends a request to the server:

byte[] buf = new byte[256];
InetAddress address = InetAddress.getByName(args[0]);
DatagramPacket packet = new DatagramPacket(buf, buf.length, address, 4445);
socket.send(packet);

The code segment gets the Internet address for the host named on the command line (presumably the name of the machine on which the server is running). This InetAddress and the port number 4445 (the port number that the server used to create its DatagramSocket) are then used to create a DatagramPacket destined for that Internet address and port number. Therefore the DatagramPacket will be delivered to the quote server. Note that the code creates the DatagramPacket with an empty buffer; the packet's arrival at the server is itself the request, so its contents do not matter.

To get a response from the server, the client creates a "receive" packet and uses the DatagramSocket receive method to wait for the reply:

packet = new DatagramPacket(buf, buf.length);
socket.receive(packet);
String received = new String(packet.getData(), 0, packet.getLength());
System.out.println("Quote of the Moment: " + received);

Once the server has started, you can run the client program. Remember to run the client program with one command-line argument: the name of the host on which the QuoteServer is running. After the client sends a request and receives a response from the server, you should see output similar to this:

Quote of the Moment: Good programming is 99% sweat and 1% coffee.
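For reference, here is a sketch of what the QuoteServerThread constructor described above might look like. The field names and the exact error message are assumptions consistent with the surrounding description (a DatagramSocket on port 4445, a BufferedReader over one-liners.txt, and falling back to the time of day when the file cannot be opened); they are not quoted from the tutorial:

import java.io.*;
import java.net.*;

public class QuoteServerThread extends Thread {

    protected DatagramSocket socket = null;
    protected BufferedReader in = null;

    public QuoteServerThread() throws IOException {
        super("QuoteServerThread");
        // The socket the server listens on; 4445 matches the port the client sends requests to
        socket = new DatagramSocket(4445);
        try {
            // One quote per line, as described above
            in = new BufferedReader(new FileReader("one-liners.txt"));
        } catch (FileNotFoundException e) {
            // Leaving 'in' as null makes run() serve the time of day instead
            System.err.println("Could not open quote file. Serving time instead.");
        }
    }
}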
http://java.sun.com/docs/books/tutorial/networking/datagrams/clientServer.html
crawl-002
refinedweb
981
56.25
Scala 2.13.0-M4 release notes (collections changes)

Under the covers — and sometimes above the covers — Scala is changing. These notes about Scala 2.13.0-M4 describe some of the changes coming to the Scala collections classes. A few things not shown in the image are:

- The scala-xml library is no longer bundled with the release
- Procedure syntax (def m() { ... }) is deprecated
- View bound syntax (A <% B) is deprecated
- Assorted deprecated methods and classes have been removed

See the Scala 2.13.0-M4 release notes for more details.

I completely deleted this code from my current Android application, but I want to make a copy of it here. It was intended to show a series of quotes (text phrases) in a “Grid” (GridView), but (a) I never got it working as desired, and (b) I decided I didn’t want it in my application anyway. Here’s the source code for the Java controller/fragment class:
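Returning to the Scala 2.13.0-M4 notes above, here is a rough sketch of what the two syntax deprecations mean in practice. This is my own illustration, not code taken from the release notes:

// Procedure syntax (deprecated): no '=' and no declared result type
def logOld(msg: String) { println(msg) }

// Preferred form: an explicit Unit result type and an '='
def logNew(msg: String): Unit = println(msg)

// View bound (deprecated): A <% B means "A can be viewed as a B"
def maxOld[A <% Ordered[A]](a: A, b: A): A = if (a < b) b else a

// Preferred form: the implicit conversion parameter that the view bound always desugared to
def maxNew[A](a: A, b: A)(implicit ev: A => Ordered[A]): A = if (a < b) b else a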
http://alvinalexander.com/index.php/taxonomy/term/3148
CC-MAIN-2020-05
refinedweb
173
72.16
A great text editor There's an old joke that Emacs would be a great operating system if only it had a decent text editor, whereas vi would be a great text editor if only it had a decent operating system. This gag reflects the single greatest strategic advantage that Emacs has always had over vi: an embedded extension programming language. Indeed, the fact that Emacs users are happy to put up with RSI-inducing control chords and are willing to write their extensions in Lisp shows just how great an advantage a built-in extension language must be. But vi programmers no longer need cast envious glances towards Emacs' parenthetical scripting language. Our favorite editor can be scripted too—and much more humanely than Emacs. In this series of articles, we'll look at the most popular modern variant of vi, the Vim editor, and at the simple yet extremely powerful scripting language that Vim provides. This first article explores the basic building blocks of Vim scripting: variables, values, expressions, simple flow control, and a few of Vim's numerous utility functions. I'll assume that you already have access to Vim and are familiar with its interactive features. If that's not the case, some good starting points are Vim's own Web site and various online resources and hardcopy books, or you can simply type :help inside Vim itself. See the Resources section for links. Unless otherwise indicated, all the examples in this series of articles assume you're using Vim version 7.2 or higher. You can check which version of Vim you're using by invoking the editor like so: vim --version or by typing :version within Vim itself. If you're using an older incarnation of Vim, upgrading to the latest release is strongly recommended, as previous versions do not support many of the features of Vimscript that we'll be exploring. The Resources section has a link to download and upgrade Vim. Vimscript Vim's scripting language, known as Vimscript, is a typical dynamic imperative language and offers most of the usual language features: variables, expressions, control structures, built-in functions, user-defined functions, first-class strings, high-level data structures (lists and dictionaries), terminal and file I/O, regex pattern matching, exceptions, and an integrated debugger. You can read Vim's own documentation of Vimscript via the built-in help system, by typing: :help vim-script-intro inside any Vim session. Or just read on. Running Vim scripts There are numerous ways to execute Vim scripting commands. The simplest approach is to put them in a file (typically with a .vim extension) and then execute the file by :source-ing it from within a Vim session: :source /full/path/to/the/scriptfile.vim Alternatively, you can type scripting commands directly on the Vim command line, after the colon. For example: :call MyBackupFunc(expand('%'), { 'all':1, 'save':'recent'}) But very few people do that. After all, the whole point of scripting is to reduce the amount of typing you have to do. So the most common way to invoke Vim scripts is by creating new keyboard mappings, like so: :nmap ;s :source /full/path/to/the/scriptfile.vim<CR> :nmap \b :call MyBackupFunc(expand('%'), { 'all': 1 })<CR> Commands like these are usually placed in the .vimrc initialization file in your home directory. 
Thereafter, when you're in Normal mode (in other words, not inserting text), the key sequence ;s will execute the specified script file, and a \b sequence will call the MyBackupFunc() function (which you presumably defined somewhere in your .vimrc as well). All of the Vimscript examples in this article use key mappings of various types as triggers. In later articles, we'll explore two other common invocation techniques: running scripts as colon commands from Vim's command line, and using editor events to trigger scripts automatically. A syntactic example Vim has very sophisticated syntax highlighting facilities, which you can turn on with the built-in :syntax enable command, and off again with :syntax off. It's annoying to have to type ten or more characters every time you want to toggle syntax highlighting, though. Instead, you could place the following lines of Vimscript in your .vimrc file: Listing 1. Toggling syntax highlighting function! ToggleSyntax() if exists("g:syntax_on") syntax off else syntax enable endif endfunction nmap <silent> ;s :call ToggleSyntax()<CR> This causes the ;s sequence to flip syntax highlighting on or off each time it's typed when you're in Normal mode. Let's look at each component of that script. The first block of code is obviously a function declaration, defining a function named ToggleSyntax(), which takes no arguments. That user-defined function first calls a built-in Vim function named exists(), passing it a string. The exists() function determines whether a variable with the name specified by the string (in this case, the global variable g:syntax_on) has been defined. If so, the if statement executes a syntax off; otherwise it executes a syntax enable. Because syntax enable defines the g:syntax_on variable, and syntax off undefines it, calling the ToggleSyntax() function repeatedly alternates between enabling and disabling syntax highlighting. All that remains is to set up a key sequence ( ;s in this example) to call the ToggleSyntax() function: nmap <silent> ;s :call ToggleSyntax()<CR> nmap stands for "normal-mode key mapping." The <silent> option after the nmap causes the mapping not to echo any command it's executing, ensuring that the new ;s command will do its work unobtrusively. That work is to execute the command: :call ToggleSyntax()<CR> which is how you call a function in Vimscript when you intend to ignore the return value. Note that the <CR> at the end is the literal sequence of characters <, C, R, >. Vimscript recognizes this as being equivalent to a literal carriage return. In fact, Vimscript understands many other similar representations of unprintable characters. For example, you could create a keyboard mapping to make your space bar act like the page-down key (as it does in most Web browsers), like so: :nmap <Space> <PageDown> You can see the complete list of these special symbols by typing :help keycodes within Vim. Note too that ToggleSyntax() was able to call the built-in syntax command directly. That's because every built-in colon command in Vim is automatically also a statement in Vimscript. For example, to make it easier to create centered titles for documents written in Vim, you could create a function that capitalizes each word on the current line, centers the entire line, and then jumps to the next line, like so: Listing 2. Creating centered titles function! 
CapitalizeCenterAndMoveDown() s/\<./\u&/g "Built-in substitution capitalizes each word center "Built-in center command centers entire line +1 "Built-in relative motion (+1 line down) endfunction nmap <silent> \C :call CapitalizeCenterAndMoveDown()<CR> Vimscript statements As the previous examples illustrate, all statements in Vimscript are terminated by a newline (as in shell scripts or Python). If you need to run a statement across multiple lines, the continuation marker is a single backslash. Unusually, the backslash doesn't go at the end of the line to be continued, but rather at the start of the continuation line: Listing 3. Continuing lines using backslash call SetName( \ first_name, \ middle_initial, \ family_name \ ) You can also put two or more statements on a single line by separating them with a vertical bar: echo "Starting..." | call Phase(1) | call Phase(2) | echo "Done" That is, the vertical bar in Vimscript is equivalent to a semicolon in most other programming languages. Unfortunately, Vim couldn't use the semicolon, as that character already means something else at the start of a command (specifically, it means "from the current line to..." as part of the command's line range). One important use of the vertical bar as a statement separator is in commenting. Vimscript comments start with a double-quote and continue to the end of the line, like so: Listing 4. Commenting in Vimscript if exists("g:syntax_on") syntax off "Not 'syntax clear' (which does something else) else syntax enable "Not 'syntax on' (which overrides colorscheme) endif Unfortunately, Vimscript strings can also start with a double-quote and always take precedence over comments. This means you can't put a comment anywhere that a string might be expected, because it will always be interpreted as a string: echo "> " "Print generic prompt The echo command expects one or more strings, so this line produces an error complaining about the missing closing quote on (what Vim assumes to be) the second string. Comments can, however, always appear at the very start of a statement, so you can fix the above problem by using a vertical bar to explicitly begin a new statement before starting the comment, like so: echo "> " |"Print generic prompt Values and variables Variable assignment in Vimscript requires a special keyword, let: Listing 5. Using the let keyword let name = "Damian" let height = 165 let interests = [ 'Cinema', 'Literature', 'World Domination', 101 ] let phone = { 'cell':5551017346, 'home':5558038728, 'work':'?' } Note that strings can be specified with either double-quotes or single-quotes as delimiters. Double-quoted strings honor special "escape sequences" such as "\n" (for newline), "\t" (for tab), "\u263A" (for Unicode smiley face), or "\<ESC>" (for the escape character). In contrast, single-quoted strings treat everything inside their delimiters as literal characters—except two consecutive single-quotes, which are treated as a literal single-quote. Values in Vimscript are typically one of the following three types: - scalar: a single value, such as a string or a number. For example: "Damian"or 165 - list: an ordered sequence of values delimited by square brackets, with implicit integer indices starting at zero. For example: ['Cinema', 'Literature', 'World Domination', 101] - dictionary: an unordered set of values delimited by braces, with explicit string keys. 
For example: {'cell':5551017346, 'home':5558038728, 'work':'?'} Note that the values in a list or dictionary don't have to be all of the same type; you can mix strings, numbers, and even nested lists and dictionaries if you wish. Unlike values, variables have no inherent type. Instead, they take on the type of the first value assigned to them.). Variable types, once assigned, are permanent and strictly enforced at runtime: let interests = 'unknown' " Error: variable type mismatch By default, a variable is scoped to the function in which it is first assigned to, or is global if its first assignment occurs outside any function. However, variables may also be explicitly declared as belonging to other scopes, using a variety of prefixes, as summarized in Table 1. Table 1. Vimscript variable scoping There are also pseudovariables that scripts can use to access the other types of value containers that Vim provides. These are summarized in Table 2. Table 2. Vimscript pseudovariables The "option" pseudovariables can be particularly useful. For example, you could set up two key-maps to increase or decrease the current tabspacing like so: nmap <silent> ]] :let &tabstop += 1<CR> nmap <silent> [[ :let &tabstop -= &tabstop > 1 ? 1 : 0<CR> Expressions Note that the [[ key-mapping in the previous example uses an expression containing a C-like "ternary expression": &tabstop > 1 ? 1 : 0 This prevents the key map from decrementing the current tab spacing below the sane minimum of 1. As this example suggests, expressions in Vimscript are composed of the same basic operators that are used in most other modern scripting languages, and with generally the same syntax. The available operators (grouped by increasing precedence) are summarized in Table 3. Table 3. Vimscript operator precedence table Logical caveats In Vimscript, as in C, only the numeric value zero is false in a boolean context; any non-zero numeric value—whether positive or negative—is considered true. However, all the logical and comparison operators consistently return the value 1 for true. When a string is used as a boolean, it is first converted to an integer, and then evaluated for truth (non-zero) or falsehood (zero). This implies that the vast majority of strings—including most non-empty strings—will evaluate as being false. A typical mistake is to test for an empty string like so: Listing 6. Flawed test for empty string let result_string = GetResult(); if !result_string echo "No result" endif The problem is that, although this does work correctly when result_string is assigned an empty string, it also indicates "No result" if result_string contains a string like "I am NOT an empty string", because that string is first converted to a number (zero) and then to a boolean (false). The correct solution is to explicitly test strings for emptiness using the appropriate built-in function: Listing 7. Correct test for empty string if empty(result_string) echo "No result" endif Comparator caveats In Vimscript, comparators always perform numeric comparison, unless both operands are strings. In particular, if one operand is a string and the other a number, the string will be converted to a number and the two operands then compared numerically. 
This can lead to subtle errors: let ident = 'Vim' if ident == 0 "Always true (string 'Vim' converted to number 0) A more robust solution in such cases is: Click to see code listing String comparisons normally honor the local setting of Vim's ignorecase option, but any string comparator can also be explicitly marked as case-sensitive (by appending a #) or case-insensitive (by appending a ?): Listing 8. Casing string comparators if name ==? 'Batman' |"Equality always case insensitive echo "I'm Batman" elseif name <# 'ee cummings' |"Less-than always case sensitive echo "the sky was can dy lu minous" endif Using the "explicitly cased" operators for all string comparisons is strongly recommended, because they ensure that scripts behave reliably regardless of variations in the user's option settings. Arithmetic caveats When using arithmetic expressions, it's also important to remember that, until version 7.2, Vim supported only integer arithmetic. A common mistake under earlier versions was writing something like: Listing 9. Problem with integer arithmetic "Step through each file... for filenum in range(filecount) " Show progress... echo (filenum / filecount * 100) . '% done'" Make progress... call process_file(filenum) endfor Because filenum will always be less than filecount, the integer division filenum/filecount will always produce zero, so each iteration of the loop will echo: Now 0% done Even under version 7.2, Vim does only floating-point arithmetic if one of the operands is explicitly floating-point: let filecount = 234 echo filecount/100 |" echoes 2 echo filecount/100.0 |" echoes 2.34 Another toggling example It's easy to adapt the syntax-toggling script shown earlier to create other useful tools. For example, if there is a set of words that you frequently misspell or misapply, you could add a script to your .vimrc to activate Vim's match mechanism and highlight problematic words when you're proofreading text. For example, you could create a key-mapping (say: ;p) that causes text like the previous paragraph to be displayed within Vim like so: It's easy to adapt the syntax-toggling script shown earlier to create other useful tools. For example, if there is a set of words that you frequently misspell or misapply, you could add a script toyour .vimrc to activate Vim's match mechanism and highlight problematic words when you're proofreading text. That script might look like this: Listing 10. Highlighting frequently misused words "Create a text highlighting style that always stands out... highlight STANDOUT term=bold cterm=bold gui=bold "List of troublesome words... let s:words = [ \ "it's", "its", \ "your", "you're", \ "were", "we're", "where", \ "their", "they're", "there", \ "to", "too", "two" \ ] "Build a Vim command to match troublesome words... let s:words_matcher \ = 'match STANDOUT /\c\<\(' . join(s:words, '\|') . '\)\>/' "Toggle word checking on or off... function! WordCheck () "Toggle the flag (or set it if it doesn't yet exist)... let w:check_words = exists('w:check_words') ? !w:check_words : 1 "Turn match mechanism on/off, according to new state of flag... if w:check_words exec s:words_matcher else match none endif endfunction "Use ;p to toggle checking... nmap <silent> ;p :call WordCheck()<CR> The variable w:check_words is used as a boolean flag to toggle word checking on or off. 
The first line of the WordCheck() function checks to see if the flag already exists, in which case the assignment simply toggles the variable's boolean value: let w:check_words = exists('w:check_words') ? !w:check_words : 1 If w:check_words does not yet exist, it is created by assigning the value 1 to it: let w:check_words = exists('w:check_words') ? !w:check_words : 1 Note the use of the w: prefix, which means that the flag variable is always local to the current window. This allows word checking to be toggled independently for each editor window (which is consistent with the behavior of the match command, whose effects are always local to the current window as well). Word checking is enabled by setting Vim's match command. A match expects a text-highlighting specification ( STANDOUT in this example), followed by a regular expression that specifies which text to highlight. In this case, that regex is constructed by OR'ing together all of the words specified in the script's s:words list variable (that is: join(s:words, '\|')). That set of alternatives is then bracketed by case-insensitive word boundaries ( \c\<\(...\)\>) to ensure that only entire words are matched, regardless of capitalization. The WordCheck() function then converts the resulting string as a Vim command and executes it ( exec s:words_matcher) to turn on the matching facility. When w:check_words is toggled off, the function performs a match none command instead, to deactivate the special matching. Scripting in Insert mode Vimscripting is by no means restricted to Normal mode. You can also use the imap or iabbrev commands to set up key-mappings or abbreviations that can be used while inserting text. For example: imap <silent> <C-D><C-D> <C-R>=strftime("%e %b %Y")<CR> imap <silent> <C-T><C-T> <C-R>=strftime("%l:%M %p")<CR> With these mappings in your .vimrc, typing CTRL-D twice while in Insert mode causes Vim to call its built-in strftime() function and insert the resulting date, while double-tapping CTRL-T likewise inserts the current time. You can use the same general pattern to cause an insertion map or an abbreviation to perform any scriptable action. Just put the appropriate Vimscript expression or function call between an initial <C-R>= (which tells Vim to insert the result of evaluating what follows) and a final <CR> (which tells Vim to actually evaluate the preceding expression). Remember, though, that <C-R> (Vim's abbreviation for CTRL-R) is not the same as <CR> (Vim's abbreviation for a carriage return). For example, you could use Vim's built-in getcwd() function to create an abbreviation for the current working directory, like so: iabbrev <silent> CWD <C-R>=getcwd()<CR> Or you could embed a simple calculator that can be called by typing CTRL-C during text insertions: imap <silent> <C-C> <C-R>=string(eval(input("Calculate: ")))<CR> Here, the expression: string( eval( input("Calculate: ") ) ) first calls the built-in input() function to request the user to type in their calculation, which input() then returns as a string. That input string is then passed to the built-in eval(), which evaluates it as a Vimscript expression and returns the result. Next, the built-in string() function converts the numeric result back to a string, which the key-mapping's <C-R>= sequence is then able to insert. A more complex Insert-mode script Insertion mappings can involve scripts considerably more sophisticated than the previous examples. 
In such cases, it's usually a good idea to refactor the code out into a user-defined function, which the key-mapping can then call. For example, you could change the behavior of CTRL-Y during insertions. Normally a CTRL-Y in Insert mode does a "vertical copy." That is, it copies the character in the same column from the line immediately above the cursor. For example, a CTRL-Y in the following situation would insert an "m" at the cursor: Glib jocks quiz nymph to vex dwarf Glib jocks quiz ny_ However, you might prefer your vertical copies to ignore any intervening empty lines and instead copy the character from the same column of the first non-blank line anywhere above the insertion point. That would mean, for instance, that a CTRL-Y in the following situation would also insert an "m", even though the immediately preceding line is empty: Glib jocks quiz nymph to vex dwarf Glib jocks quiz ny_ You could achieve this enhanced behavior by placing the following in your .vimrc file: Listing 11. Improving vertical copies to ignore blank lines "Locate and return character "above" current cursor position... function! LookUpwards() "Locate current column and preceding line from which to copy... let column_num = virtcol('.') let target_pattern = '\%' . column_num . 'v.' let target_line_num = search(target_pattern . '*\S', 'bnW') "If target line found, return vertically copied character... if !target_line_num return "" else return matchstr(getline(target_line_num), target_pattern) endif endfunction "Reimplement CTRL-Y within insert mode... imap <silent> <C-Y> <C-R><C-R>=LookUpwards()<CR> The LookUpwards() function first determines which on-screen column (or "virtual column") the insertion point is currently in, using the built-in virtcol() function. The '.' argument specifies that you want the column number of the current cursor position: let column_num = virtcol('.') LookUpwards() then uses the built-in search() function to look backwards through the file from the cursor position: let target_pattern = '\%' . column_num . 'v.' let target_line_num = search(target_pattern . '*\S', 'bnW') The search uses a special target pattern (namely: \%column_numv.*\S) to locate the closest preceding line that has a non-whitespace character ( \S) at or after ( .*) the cursor column ( \%column_numv). The second argument to search() is the configuration string bnW, which tells the function to search backwards but not to move the cursor nor to wrap the search. If the search is successful, search() returns the line number of the appropriate preceding line; if the search fails, it returns zero. The if statement then works out which character—if any—is to be copied back down to the insertion point. If a suitable preceding line was not found, target_line_num will have been assigned zero, so the first return statement is executed and returns an empty string (indicating "insert nothing"). If, however, a suitable preceding line was identified, the second return statement is executed instead. 
It first gets a copy of that preceding line from the current editor buffer: return matchstr(getline(target_line_num), target_pattern) It then finds and returns the one-character string that the previous call to search() successfully matched: return matchstr(getline(target_line_num), target_pattern) Having implemented this new vertical copy behavior inside LookUpwards(), all that remains is to override the standard CTRL-Y command in Insert mode, using an imap: imap <silent> <C-Y> <C-R><C-R>=LookUpwards()<CR> Note that, whereas earlier imap examples all used <C-R>= to invoke a Vimscript function call, this example uses <C-R><C-R>= instead. The single-CTRL-R form inserts the result of the subsequent expression as if it had been directly typed, which means that any special characters within the result retain their special meanings and behavior. The double-CTRL-R form, on the other hand, inserts the result as verbatim text without any further processing. Verbatim insertion is more appropriate in this example, since the aim is to exactly copy the text above the cursor. If the key-mapping used <C-R>=, copying a literal escape character from the previous line would be equivalent to typing it, and would cause the editor to instantly drop out of Insert mode. Learning Vim's built-in functions As you can see from each of the preceding examples, much of Vimscript's power comes from its extensive set of over 200 built-in functions. You can start learning about them by typing: :help functions or, to access a (more useful) categorized listing: :help function-list Looking ahead Vimscript is a mechanism for reshaping and extending the Vim editor. Scripting lets you create new tools (such as a problem-word highlighter) and simplify common tasks (like changing tabspacing, or inserting time and date information, or toggling syntax highlighting), and even completely redesign existing editor features (for example, enhancing CTRL-Y's "copy-the-previous-line" behavior). For many people, the easiest way to learn any new language is by example. To that end, you can find an endless supply of sample Vimscripts—most of which are also useful tools in their own right—on the Vim Tips wiki. Or, for more extensive examples of Vim scripting, you can trawl the 2000+ larger projects housed in the Vim script archive. Both are listed in the Resources section below. If you're already familiar with Perl or Python or Ruby or PHP or Lua or Awk or Tcl or any shell language, then Vimscript will be both hauntingly familiar (in its general approach and concepts) and frustratingly different (in its particular syntactic idiosyncrasies). To overcome that cognitive dissonance and master Vimscript, you're going to have to spend some time experimenting, exploring, and playing with the language. To that end, why not take your biggest personal gripe about the way Vim currently works and see if you can script a better solution for yourself? This article has described only Vimscript's basic variables, values, expressions, and functions. The range of "better solutions" you're likely to be able to construct with just those few components is, of course, extremely limited. So, in future installments, we'll look at more advanced Vimscript tools and techniques: data structures, flow control, user-defined commands, event-driven scripting, building Vim modules, and extending Vim using other scripting languages. 
In particular, the next article in this series will focus on the features of Vimscript's user-defined functions and on the many ways they can make your Vim experience better. Resources Learn - To learn more about the Vim editor and its many commands, see: - The Vim homepage - The online book A Byte of Vim - Various hardcopy books on Vim - Vim's own manual - Steve Oualline's Vim Cookbook - For more extensive examples of Vimscripting, see: - The Vim Tips wiki - The Vim script.
http://www.ibm.com/developerworks/linux/library/l-vim-script-1/
CC-MAIN-2014-42
refinedweb
4,390
50.06
Hello Jeff, I think Clojure macros hit a sweet spot between CL macros and Scheme macros - I think the mandatory namespace qualification is a win, as you get certain aspects of hygiene, and metadata might allow for more contextual info to be attached to symbols a la syntax-case. I will continue to follow your blog :) have fun with lisp.

Hi Pat, Thanks for the comments and pointing out some of the possible applications of metadata in the future. Of course, this wouldn't force you into static typing; it'd just be like Clojure Lint. I wonder if this would allow you to layer something like Typed Scheme over the top whilst minimizing the syntactic overhead of types. I think I'll remain pondering this idea for some time :)

Hello Jeff,? Pat

Hi Patrick, I'm not too versed in the literature either I'm afraid. I looked at chapter 30 of Programming Languages: Application and Interpretation for a basic description. TAPL is the bible of types in programming languages, but I confess I haven't got a copy (yet!). In terms of a particular implementation, Hindley–Milner inference is the only algorithm I've looked at. Unfortunately I never got very far with this idea. I think there's definitely some scope to do some cool stuff there, but I never have the time to do anything other than toy projects :( If anyone wants to start some kind of project looking at this, then I'd be happy to contribute :)
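A minimal sketch of the kind of metadata annotation being discussed in these comments. The :expected-type key is an invented name used purely for illustration; only ^String as a compiler type hint is standard Clojure:

;; Attach arbitrary metadata to a var at definition time
(def ^{:doc "user id" :expected-type 'long} user-id 42)

;; Read it back from the var
(meta #'user-id)
;; => a map containing :doc "user id" and :expected-type long, among other keys

;; Type hints are themselves just metadata on symbols, consumed by the compiler
(defn shout [^String s]
  (.toUpperCase s))

;; Values can carry metadata too, without affecting equality
(with-meta [1 2 3] {:source "sensor-a"})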
http://www.fatvat.co.uk/2009/01/metadata-in-clojure.html?showComment=1250188765174
CC-MAIN-2022-27
refinedweb
252
67.99
Event Based Programming in JavaFX Old Song, New World I decided to try my hand at some JavaFX programming to see what the language had to offer. Two of the key features of JavaFX are its ability to bind to data, and its access to all Java libraries. I used that to see how it handles for event-based programming. I built this minesweeper game: As the World Turns: Reactive Data Models JavaFX let me build reactive data models using bind and on replace. When some piece of state changes, the change propagates through based on code right of the declarations. These keywords shrink the boilerplate down to a few readable characters. Here's a piece of code from TileControl.fx that uses both: package class TileControl { ... var tileNode : TileNode; //View of the tile public-init var cell: HexBoards.ClientCell; //Model of the tile def cellState = bind cell.state on replace oldCellState { if(tileNode != null) { tileNode.update(cellState); } }; ...} I'm not completely convinced that TileControl -- and MVC-- is worth the extra class. I could have bound cell.state directly to a field in tileNode. It does prevent these few important lines from being lost in a sea of graphics code, and keeps the model from leaking into the verbose TileNode graphics code. More importantly, it lets a model of several layers, say rules for a more complex board game or some obscure business logic, propagate based on their declarations. An outer layer can define its own dependencies on the inner layer, so the system stays very clean. Old World Meets New: Event-based Programming and Clean Code I like event-based programming. It tends to keep class structures shallow and clean, and separates a program into understandable parts. When I throw in a way to distribute the events I can get multiple machines to form a coherent system, usually fairly painlessly. That minesweeper game shows the idea in JavaFX on a small scale. I used JMS to separate a Server, which knows where all the mines are, from a Client, which only knows what the player has uncovered. The client and server have no direct access to each other's objects; they are loosely coupled via JMS events. It's overkill for this little project, with one player, no reward (not even bragging rights) in the game, and client and server collocated in a single process. However, it'd make creating a distributed multiplayer game, or any other distributed system very easy. (To save me having to work with network connections on your web page, I've used SomnifugiJMS and colocated the Server and Client in the applet. It needs your permission to read system properties and to use JMX.) I set up some simple wrapper classes to handle the JMS calls. Nothing to write home about, but it does bundle up the boiler plate neatly. JavaFX doesn't do much with exception handling. I haven't spotted where uncaught exceptions go yet. (Maybe another blog there...) 
In any case, here's one of the four helper classes: package class Publisher { def connection = SomniJNDIBypass.IT.getTopicConnectionFactory().createTopicConnection(); def session = connection.createTopicSession(false,Session.AUTO_ACKNOWLEDGE); public-init var topicName : String; var publisher : TopicPublisher; init { def topic = SomniJNDIBypass.IT.getTopic(topicName); publisher = session.createPublisher(topic); connection.start(); } package function publishObject(object : Serializable) { var message = session.createObjectMessage(); message.setObject(object); publisher.publish(message); } package function close() { publisher.close(); session.close(); connection.close(); } } Earth To Mars Once I'd typed the boilerplate, publishing events when something changed was easy with on replace. Here's what happens in the server after a client finds a safe cell: package class Game { ... var safeTestedAddresses = [] on replace oldValue { def address = safeTestedAddresses[sizeof safeTestedAddresses - 1] as Address2D; def cell : HexBoards.ServerSafeCell = board.getCell(address) as HexBoards.ServerSafeCell; def event = Events.SafeCellTestedEvent { address: address; mineNeighborCount: cell.minesTouched; } publisher.publishObject(event); }; ... } Receiving Events... "Oh, Crap... Alien Thread" Inbound messages seemed like they'd be just as easy. They kind of worked in JavaFX 1.1, although I saw some screen twitching that reminded me of trying to run Swing-based code on the wrong thread. JavaFX 1.2 seems to spike the whole works and just did nothing -- no error message, just not responsive. I asked Josh for some help, and he sent this reply: All JavaFX stuff happens on the GUI thread by default. The exceptions are APIs which do threading for you, such as loading an image in the background. If you create your own (Java) Thread then you are on your own. We won't stop you but if you touch some JavaFX structures some weird things may happen. If you need to do some non GUI work in a different thread (talk to the network, compute some calculation, etc.) then you should do it in Java and use a callback to get back into the JavaFX side. You can either use the usual Swing way, SwingUtilities.invokeLater(), or use the new FX.deferLater function. Since we have function references in JavaFX this sort of callback works quite well. Just before I got that response, I found this two-year-old email from Tom Ball: Part 2 is to come up with a replacement for "do later". The canonical use case for "do later" is "oh, crap, I got called back in some other thread that isn't the EDT, get me to the EDT!" This comes about because you may implement an interface that represents a callback, and the callback happens in the wrong thread. In that case, the body of "do later" should really be the whole method, since you don't want to be touching any data from the alien thread. Aliens Among Us I normally prefer receive()s in my own threads to MessageListeners, but I didn't see a good way to use receive() or even receiveNoWait() without either polling or locking down the graphics thread with a blocking call. Using FX.deferAction() inside a MessageListener was pretty easy, and everything flowed from there: package class TestMessageListener extends MessageListener { var board : HexBoards.ClientMineBoard; //On a JMS Thread. Oh crap. override function onMessage(message : Message) { //Get back to the GUI thread before something bad happens. 
FX.deferAction(function() { def event : Events.CellTestedEvent = (message as ObjectMessage).getObject() as Events.CellTestedEvent; board.processEvent(event); }); } } The World Is Not Enough JavaFX is already doing some event-based programming in the background, single-threaded, on the graphics thread, using its single queue. The reactive data model is great, so long as it can live on the graphics thread along with everything else, without bogging things down. But bogging down the graphics thread was always one of the risks in AWT and Swing. JavaFX doesn't save us from that. Simon Morris posted an approach for building very clean parsers in JavaFX. If the program is only about parsing, that should work well. However, if you need the graphics thread for graphics, your JavaFX program might sputter or jam during the parse, or any other big computation or big i/o operation. World on a Thread Osvaldo Pinali posted a blog with a postscript about the power of automatic propagation through bind. Fabrizio Giudici's concerns about encapsulation I think are misplaced.* The great thing about bind is that when you create your objects' code, you don't have to predict how those objects will be used and build the corresponding boilerplate. Someone later uses bind when they want an update, binding to the fields they care about. It's getting back to OO's forgotten roots in message-passing, and taking a step beyond. Instead of being limited to API provided by a developer, you ask an object to send a message when something you care about changes. Osvaldo talks about his days in constraint programming. Propagation in constraint programming was tricky to get right. Mixing concurrency and propagation is even tricker. JavaFX solves this problem by only propagating changes on the graphics thread, alongside all the other graphics work. It can't take advantage of multiple threads and multiple cores; it can't dedicate one core to keep graphics responsive and use the rest for computation and i/o. The tail end of Tom Ball's email lays out a long term goal: Part 3 (to be deferred for a while) involves creating a functional subset of FX that can be safely invoked in threads other than the EDT. I hold out some hope that the "valueof" operator discussed this week (in the context of holding some variables constant in bind expressions) would provide the key: that an "async closure" would be a closure which could not have the side effect of reading or writing FX attributes. Instead, at the time the closure was created, the appropriate values would have to be copied with "valueof", so that the closure was operating on local copies. The goal is to create FX code that can't touch arbitrary application state, but instead copies what it needs. Josh says, We have basically done parts 1 and 2 of Tom's plan. ... Part 3, a threadsafe functional subset of the language hasn't been done yet." Tom's description of where they're going implies that the graphics thread is going to control all the data and hand copies off to other threads via some programming construct. It'd be better, but will still be limited by flows in and out of a single thread. * Fabrizio has a solid practical point, though. His example shows that some part of control flow and mutability is out of kilter. I'll keep my binds on defs, one-way only, for now. World of Tomorrow Osvaldo Pinali's blog's main point was to open a discussion about what we need next in JavaFX. I think the ability to use JavaFX for big jobs beyond user interface work should be high on the list. 
FX.deferAction() is already using the graphics event queue; one queue already exists. One easy way to gain some concurrency is with events flowing into multiple queues from wherever, processed by a thread dedicated to each queue. The complexity comes in when figuring out which objects live on which queues. JavaFX right now makes an easy choice; there's only one queue for one world of data structures. The other extreme, one thread per object, is too resource-heavy to sustain. I'd like the power to segregate my objects into groups that I define. For example, I'd like to put the user interface of a game on one thread, the game's logic on a second thread, and large computes and i/o operations on other threads. That would give JavaFX unique power in two domains: user interfaces and scalable propagation. - Login or register to post comments - Printer-friendly version - dwalend's blog - 6408 reads by fabriziogiudici - 2009-07-05 23:54It took a while for me to understand what's happening, but it looks like FireFox 3.5 is pretty bad screwed out. While the first time I accessed this blog was with 3.0, and I could see the applet, with 3.5 I can't see the applet, and even worse the whole navigator is broken (can't edit the URL in the navigation bar, can't write anything in this comment box; I have been forced to use Safari). I think it's the fault of FireFox 3.5, since it is plagued by a high number of bugs. Back to the topic. "Osvaldo Pinali posted a blog with a postscript about the power of automatic propagation through bind. Fabrizio Giudici's concerns about encapsulation I think are misplaced.*" Well, it depends. Seeing binding as message passing is a great idea as you can define the bindable structures as messages independent of the internal state of the object. Binding to internal state can be a trouble, and that's where my concern is. As usual, we have to distinguish from the binding feature (a powerful tool) and the use people make of it. by dwalend - 2009-07-04 08:09... queue.put() might be easy to do... I'll try the experiment. by dwalend - 2009-07-03 19:34Hi aleixmr, I'm not that bothered by the asynchrony. The user is always on his own thread, so asynchrony is part of the UI puzzle. I like the "one thread for pixels" approach (although "one thread per display" might be better). Driving via interrupts (pre-Mac-OSX for example) I found much harder to get my head around. I think the FX.deferAction is clear, clean and very compact. I just wish we had queue.put(function) instead. My long-lived complaint about having to use the graphics event thread was that the parts we have to use looked nothing like the rest of the system. There's no non-graphics event queue, non-graphics worker, or non-graphics invokeLater(), so the other concurrency puzzles get solved differently from Swing's. Foxtrot standardizes some other options for Swing UIs, but (1) you still have to learn all of Swing's rules to use it, (2) you have to learn Foxtrot's additional rules, and (3) it's still UI-only. JavaFX brings us this very profound feature -- easy automatic propagation -- but the feature seems pinned to the graphics thread. Tantalizing. by aleixmr - 2009-07-03 10:56Hey !! that's pretty crazy, still 10 years and we need to do things on invokelater !!! I don't like that asynchronous solution coz it gets your code messy and difficult to read ! I like the foxtrot aproach I use it for swing and works like a charm, synchronous solution !!! (Please don't get me wrong, asynchronous callback is needed to !) 
by dwalend - 2009-07-02 05:43Now working on firefox - windows. Looks like the problem with linux is that the OS thinks its still March and I signed the .jars a few days ago. Dave by dwalend - 2009-07-02 05:10whp, Which browser? Which OS? Which versions? I've seen a lot of variability by OS and browser. Works for me on Safari - MacOS, Firefox - MacOS (asks for a password), IE - Windows. Haven't seen it work on firefox - windows or anything - linux yet. Does it work now? (I updated the jnlp. It had a file URL. Now points somewhere more reasonable. Maybe a "feature" for the NetBeans plugin to add.) Thanks, Dave by whp - 2009-07-02 03:31Exception: java.io.FileNotFoundException: JNLP file error:. Please make sure the file exists and check if "codebase" and "href" in the JNLP file are correct. JavaFX deployment bites again. by dwalend - 2009-07-06 17:52Fabrizio, Thanks for the report on firefox. I've realized "applets in Java in N browsers on M operating systems" is too many wheels in wheels to rely on. I'll try jnlp next time so my work only has to spin on top of Java and the OS. var, def, public-init and bind play together in some interesting ways. bind with def is OK. It seems not so much encapsulation as clashing mutators. I haven't thought through two-way binds or bound functions yet. (Heck, this is my first javafx project.) Do you see something similar or something else?
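For readers more comfortable in plain Java than JavaFX Script, a rough Swing analogue of the "get me back to the GUI thread" hop described above might look like the sketch below. The BoardMessageListener and ClientMineBoard names are invented for illustration; only MessageListener, ObjectMessage and SwingUtilities.invokeLater come from the standard JMS and Swing APIs.

import java.io.Serializable;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;
import javax.swing.SwingUtilities;

// Stand-in for the game-board class from the JavaFX example.
interface ClientMineBoard {
    void processEvent(Serializable event);
}

public class BoardMessageListener implements MessageListener {

    private final ClientMineBoard board;

    public BoardMessageListener(ClientMineBoard board) {
        this.board = board;
    }

    @Override
    public void onMessage(Message message) {
        try {
            // Still on the JMS provider's thread here: don't touch GUI state yet.
            final Serializable event = ((ObjectMessage) message).getObject();
            // Marshal the real work onto the event dispatch thread.
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    board.processEvent(event);   // safe: runs on the EDT
                }
            });
        } catch (javax.jms.JMSException e) {
            e.printStackTrace();
        }
    }
}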
https://weblogs.java.net/node/242548/atom/feed
CC-MAIN-2015-40
refinedweb
2,549
73.78
A while ago, I developed a small PyGtk programme that could dynamically reload all the working callbacks and logic while the GUI was still running. I could get away with this because of the flexible way Python loads modules at runtime, but it ended up being a waste of time, as implementing it took more time than actually using it. For sanity's sake it quickly becomes clear you almost never want to rely on being able to refer to a half-initialized module. And wouldn't it be nice if Python enforced this?

My suggestion is that module importing occur in a temporary local namespace that exists only until the end of the module code is executed; then a small function could copy everything from the temporary namespace into the module object. The usual closure semantics would guarantee that top-level functions could still call each other, but they would effectively become immutable after the namespace wraps up. The 'global' keyword could be used at the top level in a module to force a name to be defined in the module immediately, and to ensure internal references to the object go through the module object.

This would be a big change in module import semantics, but should have remarkably few consequences, as it really is an enforcement mechanism for good style. The copying from the temporary namespace into the module object would be a good place to insert a hook function to filter what objects are actually published to the module. You could by default not copy any object identified by a leading underscore.
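To make the idea concrete, here is a rough, runnable sketch of the "copy the temporary namespace into the module object" step, written with today's Python. The publish_module helper and its underscore filter are illustrative inventions, not an existing import hook.

import types

def default_filter(name):
    """Publish everything except names with a leading underscore."""
    return not name.startswith('_')

def publish_module(name, source, name_filter=default_filter):
    """Execute `source` in a temporary namespace, then copy the surviving
    names into a fresh module object, roughly the semantics proposed above."""
    temp_namespace = {}
    exec(source, temp_namespace)          # module code runs against the temporary dict
    module = types.ModuleType(name)
    for key, value in temp_namespace.items():
        if key == '__builtins__':
            continue
        if name_filter(key):
            setattr(module, key, value)   # only filtered names become module attributes
    return module

# Tiny demonstration: _secret is not published, greet is.
mod = publish_module('demo', "def _secret(): pass\ndef greet(): return 'hi'")
print(hasattr(mod, '_secret'), mod.greet())   # -> False hi

Functions defined this way keep the temporary dict as their globals, so top-level functions can still call each other, which matches the closure behaviour described in the proposal.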
https://mail.python.org/pipermail/python-ideas/2007-January/000062.html
CC-MAIN-2014-15
refinedweb
264
53.44
Exercise 2 asks us to write a loop that does not terminate until it sees a zero. A do-while loop will best serve our interest for this exercise. The loop will sum all user-entered integers until a zero is entered. Here is my solution:

2. Write a program that asks the user to type in numbers. After each entry, the program should report the cumulative sum of the entries to date. The program should terminate when the user enters 0.

#include <iostream>
using namespace std;

int main()
{
    int number;
    int sum = 0;

    do {
        cout << "Enter numbers: ";
        cin >> number;
        sum = sum + number;
        cout << "Current sum is: " << sum << endl;
    } while (number != 0);

    return 0;
}

I went with a simple 'while' loop as I wanted to check the cin prior to running the statement.

[code]
#include <iostream>
using namespace std;

int main()
{
    int a;
    int total = 0;
    cout << "Enter a number, press <Enter>" << endl;
    cout << "Press 0 to end program" << endl;
    cin >> a;
    while (a != 0) {
        total += a;
        cin >> a;
    }
    cout << "total = " << total << endl;
}
[/code]
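A further variant, not from the original post: making the read itself part of the loop condition, so that non-numeric input also ends the loop instead of leaving the stream in a failed state forever.

#include <iostream>
using namespace std;

int main()
{
    int number;
    int sum = 0;
    cout << "Enter numbers (0 to quit): ";
    // The extraction is the loop condition, so bad input (e.g. "abc") also stops the loop.
    while (cin >> number && number != 0) {
        sum += number;
        cout << "Current sum is: " << sum << endl;
    }
    cout << "Final sum: " << sum << endl;
    return 0;
}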
https://rundata.wordpress.com/2012/11/02/c-primer-chapter-5-exercise-2/
CC-MAIN-2017-26
refinedweb
168
72.97
The include statement includes a template and returns the rendered content of that file into the current namespace: {% include 'header.html' %} Body {% include 'footer.html' %} Included templates have access to the variables of the active context. If you are using the filesystem loader, the templates are looked for in the paths defined by it. You can add additional variables by passing them after the with keyword: {# template.html will have access to the variables from the current context and the additional ones provided #} {% include 'template.html' with {'foo': 'bar'} %} {% set vars = {'foo': 'bar'} %} {% include 'template.html' with vars %} You can disable access to the context by appending the only keyword: {# only the foo variable will be accessible #} {% include 'template.html' with {'foo': 'bar'} only %} {# no variables will be accessible #} {% include 'template.html' only %} Tip When including a template created by an end user, you should consider sandboxing it. More information in the Twig for Developers chapter and in the sandbox tag documentation. The template name can be any valid Twig expression: {% include some_var %} {% include ajax ? 'ajax.html' : 'not_ajax.html' %} And if the expression evaluates to a Twig_Template or a Twig_TemplateWrapper instance, Twig will use it directly: // {% include template %} $template = $twig->load('some_template.twig'); $twig->display('template.twig', array('template' => $template)); You can mark an include with ignore missing in which case Twig will ignore the statement if the template to be included does not exist. It has to be placed just after the template name. Here some valid examples: {% include 'sidebar.html' ignore missing %} {% include 'sidebar.html' ignore missing with {'foo': 'bar'} %} {% include 'sidebar.html' ignore missing only %} You can also provide a list of templates that are checked for existence before inclusion. The first template that exists will be included: {% include ['page_detailed.html', 'page.html'] %} If ignore missing is given, it will fall back to rendering nothing if none of the templates exist, otherwise it will throw an exception. © 2009–2018 by the Twig Team Licensed under the three clause BSD license. The Twig logo is © 2010–2018 Symfony
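The notes above assume a filesystem loader has been configured on the PHP side; a minimal setup might look like the sketch below. The directory names and page.html are placeholders.

<?php
require_once 'vendor/autoload.php';

// Templates are looked up in these directories, in order.
$loader = new Twig_Loader_Filesystem(array('templates', 'templates/partials'));
$twig = new Twig_Environment($loader, array('cache' => false));

// page.html can now use {% include 'header.html' %}, {% include ... with ... %}, etc.
echo $twig->render('page.html', array('foo' => 'bar'));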
https://docs.w3cub.com/twig~2/tags/include/
CC-MAIN-2020-24
refinedweb
340
57.47
What is the use of the new and this keywords in Java?

The keyword this is useful when you need to refer to the current instance of the class from within its methods. The keyword helps us to avoid name conflicts. The keyword new is the Java operator that allocates new objects and initialises them.

gabhinav, I suggest you don't waste time dropping these types of posts here. This forum is really for helping others while they are developing code or hitting typical issues and bugs; you could have found the usage of this and new on Google. Google it, man. Please click on "Mark this thread as solved", which is shown below, or delete the thread before you get a bad reputation. I hope you understood.

VirtualAsset's definitions are correct. Examples might help, though:

"This" would often be used in a constructor or in a setter method. You'll often see it used like this:

public class MyClass {
    String someField;

    public MyClass(String someField) {
        this.someField = someField;
    }

    public void setSomeField(String someField) {
        this.someField = someField;   // "this.someField" is the field, "someField" is the parameter
    }
}

To understand this, you have to understand shadowing. If a method declares a variable with a name that's used as a field of the class (i.e., takes a parameter called "someField" when there's already a "someField" existing in the class scope), there will be two different variables with that name, and the local one will shadow the class-level one. A reference to "someField" will affect the method-level "someField", not the one at the class level. But what if you want to work with the class-level one - for example, what if you want to set a class field? For that you use the "this" keyword, which refers to the current instance of the class. If you have a MyClass object called mc, "this.someField" refers to mc.someField, even if you're in a local context that otherwise shadows someField. That's probably the most common use of "this".

Another use which I've come across would be when you want to create an object that can refer back to the current one. For example, if you have an applet that's creating a Model, a View, and a Controller*, and you want those to be able to call methods of the applet, you might pass a reference to the applet in via the constructor. That's when you'd use "this":

Model m = new Model(this);
View v = new View(this);
Controller c = new Controller(this, m, v);

Now each of those objects can reach back up to the applet that spawned them, perhaps to get parameters or call utility methods, and the Controller can pass information to the Model and the View. I hope that helps with "this".

"New" is easier, it just instantiates an object.

Dog fido = new Dog("Fido", "boxer");

calls Dog's constructor with those two strings as parameters and gives fido a reference to the resulting object.

*Model/View/Controller is a very useful design pattern - if it's new to you, please look it up before you ask about ...
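Tying the two keywords together in one small, self-contained example (the class names are invented for illustration):

public class KeywordDemo {

    static class Dog {
        private String name;
        private final String breed;

        Dog(String name, String breed) {
            // "this.name" is the field; plain "name" is the parameter that shadows it.
            this.name = name;
            this.breed = breed;
        }

        void setName(String name) {
            this.name = name;
        }

        Dog copy() {
            // "this" also lets an object pass its own state around.
            return new Dog(this.name, this.breed);
        }

        public String toString() {
            return name + " the " + breed;
        }
    }

    public static void main(String[] args) {
        Dog fido = new Dog("Fido", "boxer");   // "new" allocates and initialises the object
        Dog copy = fido.copy();
        copy.setName("Rex");
        System.out.println(fido);   // Fido the boxer
        System.out.println(copy);   // Rex the boxer
    }
}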
https://www.daniweb.com/programming/software-development/threads/298294/core-java-question
CC-MAIN-2017-09
refinedweb
510
71.44
Complex examples

Common ancestor

In some cases, you want to find the common ancestor of two given nodes. To do this, you can use the intersect expression. The intersect expression returns all nodes that occur in both given sequences.

Common ancestor XQuery

let $nodeA := //nodeA,
    $nodeB := //nodeB
return ($nodeA/ancestor-or-self::* intersect $nodeB/ancestor-or-self::*)[last()]

This example explained:

The let statement is used to easily access the two given nodes.
The $nodeA/ancestor-or-self::* and $nodeB/ancestor-or-self::* expressions both result in a sequence of ancestors.
The intersection of these two sequences (or sets) is determined using the intersect expression.
The parentheses ("()") around the intersect expression are used to prevent the filter from becoming a part of the $nodeB/ancestor-or-self::* path expression.
The last step is to get the first ancestor that is common to both given nodes. For that, we'll use the [last()] filter. Why do we use last() if we're looking for the first ancestor that matched, you may ask? Well, that's because the result of the intersect will be sorted in document order.

Highest level nodes

In some cases you want to find the highest level nodes in a given sequence of nodes. This means that we're looking for the nodes that do not have an ancestor in the given sequence. To do this, the fn:outermost function can be used.

Highest level nodes XQuery

(: Selecting some nodes. <tips> is an ancestor of <tip> :)
let $input := (//tips, //tip[@id = "examples"], //title)
return outermost($input)

Contains

We can check whether a node contains another node by comparing the ancestry of the node to check with the possible ancestor node.

Contains XQuery

let $possibleAncestor := //tips,
    $nodeToCheck := //tip[@id = "example"]
return $nodeToCheck/ancestor::* = $possibleAncestor
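One caveat on the last example: the = operator performs a general (value) comparison after atomization, so unrelated nodes with identical text content could also report a match. If a purely identity-based containment check is wanted, an intersect-based variant along these lines should work (an untested sketch against the same hypothetical document):

let $possibleAncestor := //tips,
    $nodeToCheck := //tip[@id = "example"]
return exists($nodeToCheck/ancestor::* intersect $possibleAncestor)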
https://documentation.fontoxml.com/latest/complex-examples-fc437bab8c4d
CC-MAIN-2021-25
refinedweb
317
54.12
how to validate the email login how to validate the email login // JavaScript Document JOIN US...; } } return true } function validate(){ var fname=document.form.fname.value... into login(firstname,lastname,email,contactNo,address,city) values('"+fname login How to create login page in jsp login form validation - JSP-Servlet login form validation hi, how to validate the login form that contains user name and password using post method . the validation should not allow user to login in the address bar thanks regards, anand login how to create login page in jsp Here is a jsp code that creates the login page and check whether the user is valid or not. 1...;tr><td></td><td><input type="submit" value="Login"> JSP Login Page Question JSP Login Page Question hey..i have a login page where different users first registered and then after they can login. my question is how a user can see only his data after login where different users data are stored login hello i need some help please help how can identify admin from user when logging in? please make some answer and some explanation... Please visit the following link; login page - Java Beginners login page I have one login page in jsp that is login.jsp. Now i have validate user from database by another page validate.jsp. but if the user...:// JSP LOGIN JSP LOGIN sir....i have two user types usercategory and user and both user have userid and password in two different table and now i am making login page for which i have to retrieve usertype,userid and password from two validate text field iPhone validate text field iPhone i got two different UITextfield in my iPhone application for user login. Just wondering how to validate JSP - Login JSP - Login how to disabled back button of the browser when user successfully logged validate bank account number validate bank account number how to validate bank account number in jsp validate parameter before using it validate parameter before using it How to validate the parameter before using it in JSP LOGIN PROBLEM - JDBC LOGIN PROBLEM sir iam harikrishna studying B.Tech Fourth year sir i'm designing one website in that one i have to give facility as login id... with JSP AND JDBC can u plz.........send me the code for checking username jsp login page jsp login page hi tell me how to create a login page using jsp and servlet and not using bean... please tell how to create a database in sql server... please tell with code JSP LOGIN Page JSP LOGIN Page sir....i have two user types usercategory and user and both user have userid and password in two different table and now i am making login page for which i have to retrieve usertype,userid and password from two login application login application how to create login application ? Hi, Please check the following tutorials: Video tutorial - JSP Login Logout Example Login Authentication using Bean and Servlet In JSP simple code to login user login form - JSP-Servlet login form Q no.1:- Creat a login form in servlets? Hi...*; import javax.servlet.http.*; public class Login extends HttpServlet...(); pw.println(""); pw.println("Login"); pw.println(""); pw.println jsp login page jsp login page Hi All, can any one tell me how to create Login page using JSP and Beans. A simple log in page. Please reply ASAP. Thanx, am2085 Hello Friend, Please visit the following link: JSP Login Using Login & Registration - JSP-Servlet Login & Registration Pls tell how can create login and registration step by step in servlet. 
how can show user data in servlet and how can add...:// Hope that the above links will be helpful Validate Validate hi all..im a beginner java and i need some help from u guys.. Its about validate. My HR request BF = 'brought forward', leave cannot apply date greater than 31/03 of particular year. So, if the user apply the date jsp login code when username , password and dropdown box value is correct... jsp login code when username , password and dropdown box value is correct... my project has login in whic i should select the company name in dropdown box.... so when i login i all the three username,password and dropdown box jsp login code when username , password and dropdown box value is correct... jsp login code when username , password and dropdown box value is correct... my project has login in which i should select the company name in dropdown box.... so when i login i all the three username,password and dropdown box... To create a Servlet "login.java" for validate the user login Login Query Login Query how to login with usertype in jsp page and redirect...="Select * from login where email='"+email+"' and password='"+password+"' and user..."); } } else { session.setAttribute("error","login Login/Logout With Session Login/Logout With Session In this section, we are going to develop a login/logout application with session... in the struts.xml: <action name="login" With Drop Down, Having Departments in different table Login With Drop Down, Having Departments in different table Hi all, I am doing a project using JSP. My Issue is..... I have two departments name... is to select the department from the drop down box and then login to the page. I JSP data after login where different users data are stored in database JSP data after login where different users data are stored in database hey..i have a login page where different users first registered and then after they can login. my question is how a user can see only his data after login login authentication - Development process login authentication hi.. how to validate username and password for both admin and user using java script which is retrive from backend code for login fom - Struts code for login fom we have a login form with fields...: In this admin can login and also narmal uses can log in with this same page.For this what is the internal cde.how can it validate Validate TextArea Validate TextArea In this section, you will learn how to validate your text area...; Step 3: Create a JSP page that contains text area login authentication - Java Beginners login authentication i 've designed the jsp page for login. now i need to connect it with the bean and do authentication for the login for the "register" table in mysql. can anybody help me with the code immediately Struts2 ajax validation example. Struts2 ajax validation example. In this example, you will see how to validate login through Ajax in struts2. 1-index.jsp <html> <...:submit </sx:submit> login and register - Java Beginners login and register pls send me the code for login and register immediately Hi friend, Please specify the technology you want code for login and register. 
For example : JSP,Servlet,Struts,JSF jsp code for storing login and logout time to an account jsp code for storing login and logout time to an account I need simple jsp code for extracting and storing login and logout time in a database table...:// login-password - Java Beginners ; } Login Application in JSP...login-password complete code of login-password form then how to connect the bank simulation project? Hi friend, jsp JSP entered name and password is valid HII Im developing a login page using jsp and eclipse,there are two fields username and password,I want.../Login" onsubmit="javascript:return validate();"> <table> <tr>< Login page validation on ipad Login page validation on ipad I am using userId & password labels and their corresponding textfield. Now I want to validate the details by single sign in button and want to switch on to next page. Used two labels again JSP Alert ; JSP Alert is used to put the validation on login... the validation on the login form using Javascript method in JSP page. Understand... ( ) is used to validate the username and password in JSP. If the username login/logout operation login/logout operation how to do log in/out operations in jsp plz kindly send the steps leing behind this operation...: login/logout operation login/logout operation how to do log in/out operations in jsp...: login authentication - Java Beginners login authentication i 've problem in authenticating users thru jsp and java bean. cud u pls help me. here is the bean and jsp code samples. after giving user name and pwd in the jsp page, though it is wrong the jsp page shows login screen - Java Beginners login screen how can code a login screen please send me the code with two entries Username and Password and validate it please Hi Friend...(500,80,100,20); SUBMIT=new JButton("Login"); SUBMIT.setBounds DropDown Department Login DropDown Department Login Hi all, I am doing a project using JSP... down box and then login to the page. I have a drop down list box with Customs and Accounts..A Username and a Password field.Now want to login to the page jsp jsp Hi,please send me login page code using jsp 1)login.jsp: <html> <script> function validate(){ var username...="post" action="check.jsp" onsubmit="javascript:return validate();"> < jQuery to validate Email Address jQuery to validate Email Address  ... that validate the email address from server and displays on the user browser...;result". You can easily replace PHP with JSP, or ASP program. Steps to develop login for different user in the same page login for different user in the same page how can i do the login for different user in the same page throug jsp login login page display an error showing failure to login even when the correct information is entered according to der roles lOGIN according to der roles how to login a user according to roles...;html> <script> function validate(){ var username=document.form.user.value...="check.jsp" onsubmit="javascript:return validate();"> <table> <tr>>< Login form Login Form with jsp Now for your confidence with JSP syntax, following example of login form will really help to understand jsp page. In this example we Controling login - JDBC ; } For login application on JSP visit to :... 
and jsp plz............send some guidelines to solve that problem can u plz login how to login admin and user with the same fields(name & password) in single login page while the table for admin and user is seprate in database(mysql) please provide me solution Validations using Struts 2 Annotations In this section we are going to validate our login application using Annotations in Action class. Our current login application does not validate the user against... (using Annotations to validate Login forms) Now let's develop how to connect jsp with sql database by netbeans in a login page? how to connect jsp with sql database by netbeans in a login page? how to connect jsp with sql database by netbeans in a login page JSF Validation In Login Application :view> <html> <head><title>JSF Simple Login Example<...;td><h:outputText value="Enter Login ID: " /></td> <... for the resultforfail.jsp file: Login Failed! Here is the code login i want to now how i can write code for form login incolude user and password in Jcreator 4.50 Hello Friend, Visit Here Thanks admin login control page - JSP-Servlet admin login control page how to encrypted password by using jsp... adminPage.jsp? JSP Example code for encrypted passwordJSP Encrypted Password Example CodeadminPage.jsp<html> <title>Admin login
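Several of the snippets above build the SQL query by concatenating user input into the string. A safer server-side sketch of the same login check uses a PreparedStatement; the table, column and JDBC connection details below are placeholders, and real code should store password hashes rather than plain text.

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginServlet extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String email = request.getParameter("email");
        String password = request.getParameter("password");

        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/test", "user", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT 1 FROM login WHERE email = ? AND password = ?")) {
            ps.setString(1, email);
            ps.setString(2, password);   // parameters are bound, not concatenated
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    request.getSession().setAttribute("email", email);
                    response.sendRedirect("welcome.jsp");
                } else {
                    response.sendRedirect("login.jsp?error=1");
                }
            }
        } catch (java.sql.SQLException e) {
            throw new ServletException(e);
        }
    }
}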
http://www.roseindia.net/tutorialhelp/comment/83668
CC-MAIN-2014-49
refinedweb
1,905
53.92
JawnJawn "Jawn is for parsing jay-sawn." OriginOrigin The term "jawn" comes from the Philadelphia area. It conveys about as much information as "thing" does. I chose the name because I had moved to Montreal so I was remembering Philly fondly. Also, there isn't a better way to describe objects encoded in JSON than "things". Finally, we get a catchy slogan. Jawn was designed to parse JSON into an AST as quickly as possible. OverviewOverview Jawn consists of four parts: - A fast, generic JSON parser ( jawn-parser) - A small, somewhat anemic AST ( jawn-ast) - Support packages which parse to third-party ASTs - A few helpful utilities ( jawn-util) Currently Jawn is competitive with the fastest Java JSON libraries (GSON and Jackson) and in the author's benchmarks it often wins. It seems to be faster than any other Scala parser that exists (as of July 2014). Given the plethora of really nice JSON libraries for Scala, the expectation is that you're probably here for jawn-parser or a support package. Quick StartQuick Start Jawn supports Scala 2.10, 2.11, 2.12, and 2.13.0-M3. Here's a build.sbt snippet that shows you how to depend on Jawn in your own SBT project: resolvers += Resolver.sonatypeRepo("releases") // use this if you just want jawn's parser, and will implement your own facade libraryDependencies += "org.spire-math" %% "jawn-parser" % "0.12.1" // use this if you want jawn's parser and also jawn's ast libraryDependencies += "org.spire-math" %% "jawn-ast" % "0.12.1" If you want to use Jawn's parser with another project's AST, see the "Supporting external ASTs with Jawn" section. For example, with Spray you would say: libraryDependencies += "org.spire-math" %% "jawn-spray" % "0.12.1" There are a few reasons you might want to do this: - The library's built-in parser is significantly slower than Jawn's. - Jawn supports more input types ( ByteBuffer, File, etc.). - You need asynchronous JSON parsing. (NOTE: previous to version 0.8.3 the support libraries would have been named "spray-support" instead of "jawn-spray".) DependenciesDependencies jawn-parser has no dependencies other than Scala. jawn-ast depends on jawn-parser but nothing else. The various support projects (e.g. jawn-argonaut) depend on the library they are supporting. ParsingParsing Jawn's parser is both fast and relatively featureful. Assuming you want to get back an AST of type J and you have a Facade[J] defined, you can use the following parse signatures: Parser.parseUnsafe[J](String) → J Parser.parseFromString[J](String) → Try[J] Parser.parsefromPath[J](String) → Try[J] Parser.parseFromFile[J](File) → Try[J] Parser.parseFromChannel[J](ReadableByteChannel) → Try[J] Parser.parseFromByteBuffer[J](ByteBuffer) → Try[J] Jawn also supports asynchronous parsing, which allows users to feed the parser with data as it is available. There are three modes: SingleValuewaits to return a single Jvalue once parsing is done. UnwrapArrayif the top-level element is an array, return values as they become available. ValueStreamparse one-or-more json values separated by whitespace. Here's an example: import jawn.ast import jawn.AsyncParser import jawn.ParseException val p = ast.JParser.async(mode = AsyncParser.UnwrapArray) def chunks: Stream[String] = ??? def sink(j: ast.JValue): Unit = ??? 
def loop(st: Stream[String]): Either[ParseException, Unit] = st match { case s #:: tail => p.absorb(s) match { case Right(js) => js.foreach(sink) loop(tail) case Left(e) => Left(e) } case _ => p.finish().right.map(_.foreach(sink)) } loop(chunks) You can also call jawn.Parser.async[J] to use async parsing with an arbitrary data type (provided you also have an implicit Facade[J]). Supporting external ASTs with JawnSupporting external ASTs with Jawn Jawn currently supports six external ASTs directly: Each of these subprojects provides a Parser object (an instance of SupportParser[J]) that is parameterized on the given project's AST ( J). The following methods are available: Parser.parseUnsafe(String) → J Parser.parseFromString(String) → Try[J] Parser.parsefromPath(String) → Try[J] Parser.parseFromFile(File) → Try[J] Parser.parseFromChannel(ReadableByteChannel) → Try[J] Parser.parseFromByteBuffer(ByteBuffer) → Try[J] These methods parallel those provided by jawn.Parser. For the following snippets, XYZ is one of ( argonaut, json4s, play, rojoma, rojoma-v3 or spray): This is how you would include the subproject in build.sbt: resolvers += Resolver.sonatypeRepo("releases") libraryDependencies += "org.spire-math" %% jawn-"XYZ" % "0.12.1" This is an example of how you might use the parser into your code: import jawn.support.XYZ.Parser val myResult = Parser.parseFromString(myString) Do-It-Yourself ParsingDo-It-Yourself Parsing Jawn supports building any JSON AST you need via type classes. You benefit from Jawn's fast parser while still using your favorite Scala JSON library. This mechanism is also what allows Jawn to provide "support" for other libraries' ASTs. To include Jawn's parser in your project, add the following snippet to your build.sbt file: resolvers += Resolver.sonatypeRepo("releases") libraryDependencies += "org.spire-math" %% "jawn-parser" % "0.12.1" To support your AST of choice, you'll want to define a Facade[J] instance, where the J type parameter represents the base of your JSON AST. For example, here's a facade that supports Spray: import spray.json._ object Spray extends SimpleFacade[JsValue] { def jnull() = JsNull def jfalse() = JsFalse def jtrue() = JsTrue def jnum(s: String) = JsNumber(s) def jint(s: String) = JsNumber(s) def jstring(s: String) = JsString(s) def jarray(vs: List[JsValue]) = JsArray(vs) def jobject(vs: Map[String, JsValue]) = JsObject(vs) } Most ASTs will be easy to define using the SimpleFacade or MutableFacade traits. However, if an ASTs object or array instances do more than just wrap a Scala collection, it may be necessary to extend Facade directly. You can also look at the facades used by the support projects to help you create your own. This could also be useful if you wanted to use an older version of a supported library. Using the ASTUsing the AST AccessAccess For accessing atomic values, JValue supports two sets of methods: get-style methods and as-style methods. The get-style methods return Some(_) when called on a compatible JSON value (e.g. 
strings can return Some[String], numbers can return Some[Double], etc.), and None otherwise: getBoolean → Option[Boolean] getString → Option[String] getLong → Option[Long] getDouble → Option[Double] getBigInt → Option[BigInt] getBigDecimal → Option[BigDecimal] In constrast, the as-style methods will either return an unwrapped value (instead of returning Some(_)) or throw an exception (instead of returning None): asBoolean → Boolean // or exception asString → String // or exception asLong → Long // or exception asDouble → Double // or exception asBigInt → BigInt // or exception asBigDecimal → BigDecimal // or exception To access elements of an array, call get with an Int position: get(i: Int) → JValue // returns JNull if index is illegal To access elements of an object, call get with a String key: get(k: String) → JValue // returns JNull if key is not found Both of these methods also return JNull if the value is not the appropraite container. This allows the caller to chain lookups without having to check that each level is correct: val v: JValue = ??? // returns JNull if a problem is encountered in structure of 'v'. val t: JValue = v.get("novels").get(0).get("title") // if 'v' had the right structure and 't' is JString(s), then Some(s). // otherwise, None. val titleOrNone: Option[String] = t.getString // equivalent to titleOrNone.getOrElse(throw ...) val titleOrDie: String = t.asString UpdatingUpdating The atomic values ( JNum, JBoolean, JNum, and JString) are immutable. Objects are fully-mutable and can have items added, removed, or changed: set(k: String, v: JValue) → Unit remove(k: String) → Option[JValue] If set is called on a non-object, an exception will be thrown. If remove is called on a non-object, None will be returned. Arrays are semi-mutable. Their values can be changed, but their size is fixed: set(i: Int, v: JValue) → Unit If set is called on a non-array, or called with an illegal index, an exception will be thrown. (A future version of Jawn may provide an array whose length can be changed.) ProfilingProfiling Jawn uses JMH along with the sbt-jmh plugin. Running BenchmarksRunning Benchmarks The benchmarks are located in the benchmark project. You can run the benchmarks by typing benchmark/jmh:run from SBT. There are many supported arguments, so here are a few examples: Run all benchmarks, with 10 warmups, 10 iterations, using 3 threads: benchmark/jmh:run -wi 10 -i 10 -f1 -t3 Run just the CountriesBench test (5 warmups, 5 iterations, 1 thread): benchmark/jmh:run -wi 5 -i 5 -f1 -t1 .*CountriesBench Benchmark IssuesBenchmark Issues Currently, the benchmarks are a bit fiddily. The most obvious symptom is that if you compile the benchmarks, make changes, and compile again, you may see errors like: [error] (benchmark/jmh:generateJavaSources) java.lang.NoClassDefFoundError: jawn/benchmark/Bla25Bench The fix here is to run benchmark/clean and try again. You will also see intermittent problems like: [error] (benchmark/jmh:compile) java.lang.reflect.MalformedParameterizedTypeException The solution here is easier (though frustrating): just try it again. If you continue to have problems, consider cleaning the project and trying again. (In the future I hope to make the benchmarking here a bit more resilient. Suggestions and pull requests gladly welcome!) FilesFiles The benchmarks use files located in benchmark/src/main/resources. If you want to test your own files (e.g. mydata.json), you would: - Copy the file to benchmark/src/main/resources/mydata.json. 
- Add the following code to JmhBenchmarks.scala: class MyDataBench extends JmhBenchmarks("mydata.json") Jawn has been tested with much larger files, e.g. 100M - 1G, but these are obviously too large to ship with the project. With large files, it's usually easier to comment out most of the benchmarking methods and only test one (or a few) methods. Some of the slower JSON parsers get much slower for large files. Interpreting the resultsInterpreting the results Remember that the benchmarking results you see will vary based on: - Hardware - Java version - JSON file size - JSON file structure - JSON data values I have tried to use each library in the most idiomatic and fastest way possible (to parse the JSON into a simple AST). Pull requests to update library versions and improve usage are very welcome. Future WorkFuture Work More support libraries could be added. It's likely that some of Jawn's I/O could be optimized a bit more, and also made more configurable. The heuristics around all-at-once loading versus input chunking could definitely be improved. In cases where the user doesn't need fast lookups into JSON objects, an even lighter AST could be used to improve parsing and rendering speeds. Strategies to cache/intern field names of objects could pay big dividends in some cases (this might require AST changes). If you have ideas for any of these (or other ideas) please feel free to open an issue or pull request so we can talk about it. DisclaimersDisclaimers Jawn only supports UTF-8 when parsing bytes. This might change in the future, but for now that's the target case. You can always decode your data to a string, and handle the character set decoding using Java's standard tools. Jawn's AST is intended to be very lightweight and simple. It supports simple access, and limited mutable updates. It intentionally lacks the power and sophistication of many other JSON libraries. All code is available to you under the MIT license, available at.
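A short end-to-end sketch that ties the parsing and AST sections together: parse a string with the bundled AST and read a field back out with the get-style accessors described above. The JSON document itself is made up, and the snippet assumes the jawn-ast artifact is on the classpath.

import jawn.ast.{JParser, JValue}
import scala.util.{Success, Failure}

object JawnExample {
  def main(args: Array[String]): Unit = {
    val json = """{"novels": [{"title": "Dune", "year": 1965}]}"""

    JParser.parseFromString(json) match {
      case Success(js: JValue) =>
        // get(...) chains safely: a bad key or index just yields JNull along the way.
        val title = js.get("novels").get(0).get("title").getString
        println(title.getOrElse("<missing>"))   // Dune
      case Failure(e) =>
        println(s"parse failed: $e")
    }
  }
}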
https://index.scala-lang.org/non/jawn/jawn-parser/0.10.4?target=_2.12
CC-MAIN-2020-45
refinedweb
1,919
58.69
good programming examples. great code system. :-) thnx for giving a good continue example.....I liked it and understood as well The above program code is very helpful & easy to understand. It is the best source to clear our concept. Post your Comment Continue and break statement is an example of break and continue statement. Example - public class ContinueBreak... is an example of break and continue statement. Example - public class ContinueBreak...Continue and break statement How can we use Continue and break break and continue break and continue hi difference between break and continue break and continue break and continue hi i am jane pls explain the difference between break and continue C break continue example C break continue example In this section, you will learn how to use break statement with continue statement in C. The continue statement provides a convenient way JavaScript Break Continue Statement JavaScript Break-Continue Statement: Generally we need to put the break.... Example 1(Break): <html> <head> <title>Write your title here... of the program. On the other hand continue helps us to continue the flow... use break. It gives the following output: C:\javac>... loops here two print '*'. In this example, if we haven't use break statement thus Java Continue in Boolean expression. Sometimes Java Continue is used with break statement...) { System.out.println("This is an example of Continue Label in Java...Java Continue refers to Continue statement in java, used for skipping Java Break loop of opposite nature, break and continue respectively. In the following example break... in terminating the loops. Break Loop Example... Java Break loop   Java Break keyword and for handling these loops Java provides keywords such as break and continue... of the loop. Example below demonstrates the working of break statement. In the program... in Java an Example import javax.swing.*; public class Java_Break_keyword Continue statement in java . Difference between break and continue is, break exit from the loop... is the example which shows the use of continue in the loop. There are two form... statement. Example : Unlabeled continue statement for(i=0;i<5;i++) { for(j Java - Continue statement in Java the statements written after the continue statement. There is the difference between break and continue statement that the break statement exit control from the loop... Java - Continue statement in Java   Java Break Lable Java Break Lable In Java, break statement is used in two ways as labeled and unlabeled. break is commonly used as unlabeled. Labeled break statement is used Java - Break statement in java ; 2.The continue statement 3.The return statement Break: The break...;javac Break.java C:\chandan>java Break The Prime number in between 1 - 50... Java - Break statement in java   Java Break command . . break is often used with label continue, which also comes under Java branching... in Java Example import javax.swing.*; public class Java_Break... Java Break command   How to use 'continue' keyword in jsp page ? */ continue; else /* use break statement to transfer control out... How to use 'continue' keyword in jsp page... that shows how to use 'continue' keyword in jsp page. The continue statement skips Java Break example Java Break example  ... these loops Java provides keywords such as break and continue respectively... Category in which it has two keywords of opposite nature, break and continue php do while break php do while break receiving a Fatal saying cannot break/continue.. please suggest. 
Thank U Continue in java between break and continue statement that the break statement exit control... Continue.java C:\chandan>java Continue chandan Value of a : 0... Resource: Java - Continue statement in Java Java Break continue Continue PHP Break ) break; else echo "$a\t"; } ?> Output: 1 2 3 4 Example...Break Control Structure: Break is a special kind of control structure which helps us to break any loop or sequence, like if we want to break the flow of any Continue Statement in java 7 Continue Statement in java 7 In this tutorial we will discuss about continue statement in java 7. Continue Statement : Sometimes you need to skip block of statements under specific condition so for that you can use continue Implementing Continue Statement In Java Implementing Continue Statement In Java  ... continue statement. First all of define class "Continue"...*; class Continue{ public static void main(String Java Break out of for loop statements. These Java labels are break and continue respectively. 'break' is mainly used... = "break", strc = "continue", choice = ""; JOptionPane.showMessageDialog(null... Java Break out of for loop   Java Break while example Java Break while example Break is often used in terminating for loop but it can also be used for terminating other loops as well such as while, do while etc. . Break The break Keyword The break Keyword  .... In other word we can say that break keyword is used to prematurely exit.... Also break keyword is used for terminate a loop. The break always exits Java break for loop baadshah. In the example, a labeled break statement is used to terminate for loop... for Loop Example public class Java_break_for_loop { public static void... Java break for loop   good egRamesh August 25, 2011 at 3:37 PM good programming examples. c++archit December 7, 2011 at 8:27 PM great code system. :-) continue example reviewafsha January 3, 2012 at 11:07 AM thnx for giving a good continue example.....I liked it and understood as well BREAK & CONTINUE STATEMENTS IN CSujishnu Adhya November 3, 2012 at 10:41 AM The above program code is very helpful & easy to understand. It is the best source to clear our concept. Post your Comment
http://roseindia.net/discussion/23747-C-break-continue-example.html
CC-MAIN-2015-22
refinedweb
925
59.5
django-crowdsourced-fields 0.4 A reusable Django app that allows to mark certain fields of your models as masterdata. Users would still be able to enter their own values but the app will map them to unique instances. Admin staff is able to review all user generated entriesand mark them as approved. A reusable Django app that allows to mark certain fields of your models as masterdata. Users would still be able to enter their own values but the app will map them to unique instances. Admin staff is able to review all user generated entries and mark them as approved. An example could be a vehicle site, where you would like to allow users to enter make and model for their vehicle but you want to make sure that an entry of “bmw” and “Bmw” results in “BMW”. The app also comes with a nice jQuery combobox for such fields, where user get auto-suggestions while they type. Installation You need to install the following prerequisites in order to use this app: pip install Django pip install South If you want to install the latest stable release from PyPi: $ pip install django-crowdsourced-fields If you feel adventurous and want to install the latest commit from GitHub: $ pip install -e git://github.com/bitmazk/django-crowdsourced-fields.git#egg=crowdsourced_fields Add crowdsourced_fields to your INSTALLED_APPS: INSTALLED_APPS = ( ..., 'crowdsourced_fields', ) Don’t forget to migrate your database: ./manage.py migrate crowdsourced_fields Add jQuery and jQuery UI to your base template or at least to the template that should display forms with crowdsourced fields. Also include the jQuery UI styles and special styles provided by this app: <link rel="stylesheet" type="text/css" href=""> {{ form.media.css }} <script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.0/jquery.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.21/jquery-ui.min.js"></script> {{ form.media.js }} You might want to include the jQuery and jQuery UI parts in your base template and the {{ form.media }} parts only in the template that uses a form with crowdsourced fields. Usage Prepare your models First you need to modify the model that should have crowdsourced fields: from crowdsourced_fields.models import CrowdSourcedModelMixin class YourModel(CrowdsourcedModelMixin, models.Model): CROWDSOURCED_FIELDS = { 'make': {'item_type': 'makes', } 'model': {'item_type': 'models', } } make = models.CharField(...) model = models.CharField(...) CROWDSOURCED_FIELDS is a dictionary of dictionaries. The main keys are the fields that should be crowdsourced. This must be CharFields. The inner dictionary supports the following keys as settings: - item_type (mandatory): The name of the group under which the data of this field should be grouped. Let’s assume you have two models and both have a field country which should have access to the same data. By giving the same item_type for the field on both models, they will use the same set of crowdsourced data. For each field that you selected, the mixin will dynamically add a method called fieldname_crowdsourced to the model. Therefore we will save both, the value that the user actually entered (in it’s original field) and a link to the unique and approved value that we maintain through this app. For your staff users it is save to change the values of the CrowdsourcedItem objects. Since you should use those in your templates, any typo fixes would be reflected on your site immediately without the need of a datamigration. 
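A sketch of what this buys you at runtime, reusing the YourModel example above. Whether the returned CrowdsourcedItem exposes its cleaned-up text as value or under some other attribute is not spelled out here, so the last lines are illustrative only.

# A rough interactive sketch, e.g. in a Django shell.
from your_app.models import YourModel

car = YourModel.objects.create(make='bmw', model='320i')

# The user's raw input stays on the plain CharField ...
print(car.make)                  # 'bmw'

# ... while the dynamically added accessor returns the shared item that
# staff can later correct in the admin without a data migration.
item = car.make_crowdsourced()
print(item)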
Create a model form

Next you would create a ModelForm for your model with crowdsourced fields:

from django import forms
from crowdsourced_fields.forms import CrowdsourcedFieldsFormMixin
from your_app.models import YourModel

class YourModelForm(CrowdsourcedFieldsFormMixin, forms.ModelForm):
    class Meta:
        model = YourModel

The CrowdsourcedFieldsFormMixin will do the magic for you and replace the original form field (a text input) with a combobox that has all existing values ready for autosuggest.

Create your template

As mentioned above, first make sure that jQuery, jQuery UI and the form's media are available in your template. After that you can initiate the comboboxes like so:

$(document).ready(function(){
    $('#id_country').combobox({
        source: {{ form.country_crowdsourced_values|safe }}
    });
});

In this case country would be the name of the form field.

Contribute

If you want to contribute to this project, please perform the following steps:

# Fork this repository
# Clone your fork
$ mkvirtualenv -p python2.7 django-crowdsourced-fields
$ pip install -r requirements.txt
$ ./online_docs/tests/runtests.sh
# You should get no failing tests
$ git co -b feature_branch master
# Implement your feature and tests
$ ./crowdsourced_fields/tests/runtests.sh
# You should still get no failing tests

Keywords: ORM, jQuery, combobox, models, fields
- License: The MIT License
- Platform: OS Independent
- Package Index Owner: mbrochh
- Package Index Maintainer: dkaufhold, Tobias.Lorenz
- DOAP record: django-crowdsourced-fields-0.4.xml
https://pypi.python.org/pypi/django-crowdsourced-fields/0.4
CC-MAIN-2016-22
refinedweb
778
55.24
A multi-worker pipe mechanism that uses AWS SQS

Project description

A multi-worker pipe mechanism that uses AWS SQS.

Instructions

Install the latest version of the package:

pip install sqspipes

Create a client

from sqspipes import TaskClient

client = TaskClient(
    domain='my-app',
    aws_key='YOUR_AWS_KEY',
    aws_secret='YOUR_AWS_SECRET',
    aws_region='us-west-2'
)

Make sure that the aws_key provided has full access to the SQS service, since it needs to be able to create & delete queues. Also ensure that the aws_region provided is either us-west-2 or us-east-2, since other regions do not support FIFO queues, which are used by this package.

Define the tasks you may have:

import os
import sys
import random
import string
import time

def _generate(max_size):
    return ''.join(random.choice(string.ascii_lowercase) for _ in range(random.randint(1, max_size)))

def _reduce(value, keep='vowels'):
    vowels = ['a', 'e', 'i', 'o', 'u', ]
    result = [v for v in value if (v in vowels) == (keep == 'vowels')]
    return value, ''.join(result)

def _count(data):
    value, vowels = data
    return value, len(vowels)

In this example we have a simple flow that looks like this:

generate word -> reduce word to only its vowels -> count the reduced word

This is similar to a map-reduce algorithm; however, using this module you might have many layers, where each transforms the original data in a different way. These layers (tasks) are then combined like bash pipes, where the output from a task is the input to the next one.

Notice a few things:

- The first argument of each task is going to be fed with the output from the previous one, with the obvious exception of the first task.
- The output of each task should be JSON serializable.
- You may return None from a task if you do not want it to continue further in the processing line. This could be done e.g. because your tasks are picked from a database, so you could return None if that database is empty. If for any reason you want to process None like a normal task output/input, you can pass ignore_none=False as a parameter to the TaskClient constructor. In that case, you can use the following to return an empty task output.

from sqspipes import EmptyTaskOutput

def my_task():
    # your task's logic here
    return EmptyTaskOutput()  # for some reason, None is a valid task output

# later in your code...
TaskClient(
    domain='my-app',
    aws_key='YOUR_AWS_KEY',
    aws_secret='YOUR_AWS_SECRET',
    aws_region='us-west-2',
    ignore_none=False
)

Register the tasks

Now that you have created the various tasks, you simply have to define their order & other runtime parameters, like this:

client.register_tasks([
    {'method': _generate, 'workers': 32, 'interval': 0.1},
    {'method': _reduce, 'workers': 2},
    {'method': _count, 'workers': 16}
])

The following keys are supported for each task:

`method`: A callable object. This is the function that will actually be executed. For all tasks except for the first one, the first argument of this method will be the result of the previous task's method.
`name`: The name of this task. If no name is provided, the method's name is automatically used.
`workers`: The number of worker threads that will be processing messages in parallel. Defaults to 1.
`priorities`: The number of different priority levels, where 0 is the lowest possible priority. Defaults to 1, maximum value is 16.
`interval`: Only applies to the first task. Number of seconds to wait between each execution. Can either be a number, or a callable that returns a number (e.g. `lambda: random.random() * 5`). Defaults to 0.
https://pypi.org/project/sqspipes/0.1.2/
CC-MAIN-2020-50
refinedweb
896
55.54
Let's dive into the basics of unsupervised Machine Learning algorithms! In this weeks BaseCamp tutorial, we will show you the (probably) most common clustering algorithm: KMeans. We will be working with the famous packages pandas, numpy, and sklearn. Let's start by loading them: import pandas as pd import numpy as np We will be working with NBA data, more specifically the average statistics of all players from the regular season 2016/2017. We will use these data to separate players into different clusters/groups based on the players' statistics and we will see if some interesting groups of players are created. You can download the dataset here. The data is in the JSON format, therefore we will use the package JSON to load the file into Python and parse it: data_path = "path to your file" with open(data_path + 'players_average_2017.csv') as json_data: data = json.load(json_data) # Parsing columns = data["resultSets"][0]["headers"] df = pd.DataFrame(data["resultSets"][0]["rowSet"], columns=columns) In the column names, we can see that there are plenty of columns in rank in it. These are just the ordering of players for specific statistics and we won't use them for clustering. We will use the code below to drop the columns that have the substring "_RANK" in them. # we will drop the columns with _RANK in it. to_drop = [] for cl in columns: if cl.find("_RANK") != -1: to_drop.append(cl) df.drop(to_drop, axis=1, inplace=1) df.head() Let's start with some simple descriptive statistics of our dataset: # How many players did occur in at least 1 game during the season? print len(df.PLAYER_ID) # How many points were scored in average by 1 player in 1 game? print df.PTS.mean() # the average above is not correct , we have to use weighted average, # people who played more games will have bigger impact on average print (df.PTS * df.GP).sum() / df.GP.sum() # Descriptive statistics for important variables df.drop(["PLAYER_ID","TEAM_ID"], axis = 1).describe() In this phase, we should remember or write down the values of descriptive statistics, so we can compare them later with specific clusters of players. DATA PREPARATION As usual, we have to transform all categorical variables into numeric ones. Furthermore, there is one very important step in data preparation for the KMeans algorithm. We need to make sure, that all the variables we use are on the same scale. Therefore we will apply one of the scalers from sklearn to our dataset: train = df.drop(["PLAYER_ID","TEAM_ID", "PLAYER_NAME","CFPARAMS" ], axis = 1) # For scaling from sklearn.preprocessing import MinMaxScaler # For transformation into numeric categories from sklearn.preprocessing import LabelEncoder le = LabelEncoder() train["TEAM_ABBREVIATION"] = le.fit_transform(train["TEAM_ABBREVIATION"]) train.head() We can check if we have any missing values in our data. If we don't, we can directly our variables: train.count() # Scaling sc = MinMaxScaler() sc.fit(train) train_sc = sc.transform(train) For more information about the LabelEncoder and MinMaxScaler check the links below: MODELLING K-Means Interface: You can explore the KMeans here. The constructor of the KMeans class returns an estimator with the fit() method that enables you to perform clustering. This process is consistent with other sklearn algorithms we have explored in previous tutorials. K-Means Parameters: Using the above link, we can see that there are a few parameters which control the K-Means algorithm. 
We will look at one parameter specifically: the number of clusters used in the algorithm. The number of clusters needs to be chosen based on the domain knowledge of the data.

from sklearn.cluster import KMeans

# Run K-means using 4 cluster centers on train_sc
kmeans_4 = KMeans(n_clusters=4, random_state=0)
kmeans_4.fit(train_sc)

# Run K-means using 5 cluster centers on train_sc
kmeans_5 = KMeans(n_clusters=5, random_state=0)
kmeans_5.fit(train_sc)

# Run K-means using 6 cluster centers on train_sc
kmeans_6 = KMeans(n_clusters=6, random_state=0)
kmeans_6.fit(train_sc)

# Run K-means using 7 cluster centers on train_sc
kmeans_7 = KMeans(n_clusters=7, random_state=0)
kmeans_7.fit(train_sc)

EVALUATION

In some cases, we don't know up front what the best number of clusters to create is. Therefore it is very important to run the algorithm a couple of times for different numbers of clusters and evaluate each run separately. To do this we are going to use two metrics:

- Inertia
- Silhouette score

Inertia: Inertia is a metric that is used to estimate how close the data points in a cluster are. It is calculated as the sum of squared distances from each point to its closest centroid, i.e., its assigned cluster center. The intuition behind Inertia is that clusters with lower Inertia are better, as it means closely related points form a cluster. Inertia is calculated by scikit-learn by default:

print("Inertia for KMeans with 4 clusters = %lf " %(kmeans_4.inertia_))
print("Inertia for KMeans with 5 clusters = %lf " %(kmeans_5.inertia_))
print("Inertia for KMeans with 6 clusters = %lf "%(kmeans_6.inertia_))
print("Inertia for KMeans with 7 clusters = %lf "%(kmeans_7.inertia_))

Silhouette Score: The Silhouette Score measures how well separated the resulting clusters are. A higher Silhouette Score is better as it means that we don't have too many overlapping clusters. The Silhouette Score can be computed using sklearn.metrics.silhouette_score from scikit-learn and takes values between -1 and 1. A value of -1 means that observations are assigned to the wrong cluster, 0 means that clusters overlap and 1 means that observations are assigned to the correct cluster:

from sklearn.metrics import silhouette_score

def get_silhouette_score(data, model):
    cluster_labels = model.fit_predict(data)
    score = silhouette_score(data, cluster_labels)
    return score

print get_silhouette_score(train_sc, kmeans_4)
print get_silhouette_score(train_sc, kmeans_5)
print get_silhouette_score(train_sc, kmeans_6)
print get_silhouette_score(train_sc, kmeans_7)

EXPLORATION OF CLUSTERS

Using the evaluation methods above we can see that we have a different result for each method. Using Inertia, the solution with 7 clusters was evaluated as the best. On the other hand, using the Silhouette Score it was the solution with 4 clusters that was the best. We should check the number of players per cluster, to find out if we have some clusters which are either too big or too small:

# Adding labels to our dataframe
df["labels_4"] = kmeans_4.labels_
df["labels_7"] = kmeans_7.labels_

# number of players per cluster
df[["labels_4","PLAYER_NAME"]].groupby("labels_4").count()

# number of players per cluster
df[["labels_7","PLAYER_NAME"]].groupby("labels_7").count()

There is one cluster in both solutions where the performance metrics of the players are much higher than average. It is cluster number 2 for K-means with 4 clusters and number 3 for the solution with 7 clusters. The sizes of these clusters are 76 and 37 respectively.
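To pull those player lists out of the labelled dataframe yourself, a short sketch (the cluster numbers 2 and 3 are the ones from this particular run and will differ between fits):

# players in the high-performing cluster of each solution
top_4 = df.loc[df["labels_4"] == 2, "PLAYER_NAME"].sort_values()
top_7 = df.loc[df["labels_7"] == 3, "PLAYER_NAME"].sort_values()
print(top_7.to_string(index=False))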
Below you can see the list of players from these two clusters. Be aware that your numbers can be slightly different each time you fit the model:

Andrew Wiggins, Anthony Davis, Blake Griffin, Carmelo Anthony, Chris Paul, Damian Lillard, DeMar DeRozan, DeMarcus Cousins, Eric Bledsoe, Giannis Antetokounmpo, Goran Dragic, Gordon Hayward, Isaiah Thomas, James Harden, Jimmy Butler, Joel Embiid, John Wall, Karl-Anthony Towns, Kawhi Leonard, Kemba Walker, Kevin Durant, Kevin Love, Kyle Lowry, Kyrie Irving, LeBron James, Marc Gasol, Mike Conley, Paul George, Paul Millsap, Rudy Gay, Russell Westbrook, Stephen Curry

These players are clearly the superstars of the league, with the biggest impact on a game and the most touches per game. In the 4 cluster solution, there are also 40 additional players which aren't on the same superstar level.

CONCLUSION

The 7 cluster solution seems to be better because there are definitely more than four different player types which are supposed to be split into separate groups. However, the list above still consists of players in every position (as long as they are superstars of the league). Therefore you can try to go even further and do the clustering with more clusters to see if these guys will be split into more groups or what else might happen to them. Let us know how it went for you. You can also check our online bootcamp for more Data Science education.
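Following the conclusion's suggestion to experiment with more clusters, here is a small sketch that re-uses train_sc from above, sweeps a range of cluster counts and collects both evaluation metrics in one place:

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

results = []
for k in range(2, 11):
    model = KMeans(n_clusters=k, random_state=0)
    labels = model.fit_predict(train_sc)
    results.append((k, model.inertia_, silhouette_score(train_sc, labels)))

# one line per k, so the "elbow" in inertia and the best silhouette are easy to spot
for k, inertia, silhouette in results:
    print("k=%d  inertia=%.2f  silhouette=%.3f" % (k, inertia, silhouette))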
https://www.basecamp.ai/blog/introduction-to-clustering
CC-MAIN-2018-26
refinedweb
1,343
55.24
#include <Wt/Ext/LineEdit> A line edit. To act upon text changes, connect a slot to the changed() signal. This signal is emitted when the user changed the content, and subsequently removes the focus from the line edit. To act upon editing, connect a slot to the keyWentUp() signal. At all times, the current content may be accessed with the text() method. The API is a super-set of the WLineEdit API. Enum that describes how the contents is displayed. Event signal emitted when enter was pressed. This signal is emitted when the Enter or Return key was pressed. Returns the maximum length of text that can be entered. Set the echo mode. The default echo mode is Normal. Specify the maximum length of text that can be entered. A value <= 0 indicates that there is no limit. The default value is -1. Return the current width of the line edit in number of characters.
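Since this page only lists brief member descriptions, here is a small usage sketch. The changed() signal and the text() accessor are taken from the page itself; the constructor form and the setMaxLength() name are assumptions based on the WLineEdit API that this class is said to extend, so check the Wt/Ext headers for the exact signatures.

#include <Wt/WContainerWidget>
#include <Wt/WText>
#include <Wt/Ext/LineEdit>

class NameWidget : public Wt::WContainerWidget
{
public:
    NameWidget(Wt::WContainerWidget *parent = 0)
        : Wt::WContainerWidget(parent)
    {
        edit_ = new Wt::Ext::LineEdit(this);
        edit_->setMaxLength(32);   // assumed name, mirroring WLineEdit

        echo_ = new Wt::WText(this);

        // changed() is emitted after the user edits the text and focus leaves the field
        edit_->changed().connect(this, &NameWidget::showText);
    }

private:
    void showText()
    {
        echo_->setText(edit_->text());   // text() returns the current content
    }

    Wt::Ext::LineEdit *edit_;
    Wt::WText         *echo_;
};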
http://webtoolkit.eu/wt/doc/reference/html/classWt_1_1Ext_1_1LineEdit.html
CC-MAIN-2015-32
refinedweb
155
78.75
getttyent, getttyent_r, getttynam, getttynam_r, setttyent, setttyent_r, endttyent, endttyent_r - Get a /etc/securettys file entry

Standard C Library (libc.a)

#include <ttyent.h>

struct ttyent *getttyent( void );
struct ttyent *getttynam( const char *name );
int setttyent( void );
void endttyent( void );

The following obsolete functions are supported in order to maintain backward compatibility with previous versions of the operating system. You should not use them in new designs.

int getttyent_r( struct ttyent *tte, char *buf, int len, FILE **tty_fp );
int getttynam_r( const char *name, struct ttyent *tte, char *buf, int len );
int setttyent_r( FILE **tty_fp );
void endttyent_r( FILE **tty_fp );

tte - Points to the ttyent structure. The ttyent.h header file defines the ttyent structure.
name - Specifies the name of the requested tty description.
buf - Is data for the tty.
len - Specifies the length of buf.
tty_fp - Specifies a secure ttys file stream.

The getttyent() and getttynam() functions each return a pointer to an object that has the following ttyent fields. These fields describe a line from the secure tty description file. The members of the structure include the following:

ty_name - Name of the character-special file.
ty_getty - The string "none".
ty_type - The string "none".
ty_status - A mask of bit fields. The TTY_SECURE flag indicates users with a user ID of 0 (zero) are allowed to log in on this terminal.
ty_window - A NULL pointer.
ty_comment - A NULL pointer.

If any of the fields pointing to character strings are unspecified, they are returned as NULL pointers. The field ty_status will be 0 (zero) if root logins are not allowed. The getttyent() function reads the next line from the tty description file until EOF (End-Of-File) is encountered.

The getttyent(), setttyent(), endttyent(), and getttynam() functions return a pointer to thread-specific data. Subsequent calls to these functions from the same thread overwrite this data. The getttyent_r(), setttyent_r(), endttyent_r(), and getttynam_r() functions are obsolete reentrant versions of these functions. They are supported in order to maintain backward compatibility with previous versions of the operating system and should not be used in new designs. Note that you must initialize the *tty_fp parameter to NULL before its first access by any of these functions.

Upon successful completion, the getttyent() and getttynam() functions return a pointer to a ttyent structure. If they fail or reach the end of the terminal control database file, they return a null pointer. Upon successful completion, the setttyent() function returns a value of 1. Upon failure, it returns a value of 0 (zero). Upon successful completion, the getttyent_r() and getttynam_r() functions store the ttyent structure in the location pointed to by tte, and return a value of 0 (zero). Upon failure, they return a value of -1. Upon successful completion, the setttyent_r() function returns a value of 0 (zero). Upon failure, it returns a value of -1.

If any of the following conditions occurs, the getttyent_r() or getttynam_r() functions set errno to the corresponding value: The search failed. In addition, if any of the following conditions occurs, the getttyent_r() or setttyent_r() functions set errno to the corresponding value: The tty_fp, tte, or buf parameter is invalid, or the len parameter is too small.

/etc/securettys - Contains the terminal control database file.

Commands: login(1) Files: securettys(4).
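As a quick illustration of the non-reentrant interface described above, a minimal program that walks the secure tty database and reports which entries allow root logins (member names follow the ttyent.h layout referenced above; error handling is kept minimal):

#include <stdio.h>
#include <ttyent.h>

int main(void)
{
    struct ttyent *tty;

    if (setttyent() == 0) {                /* 1 on success, 0 on failure */
        fprintf(stderr, "cannot open the tty description file\n");
        return 1;
    }
    while ((tty = getttyent()) != NULL) {  /* NULL at EOF or on failure */
        printf("%s\tsecure=%s\n", tty->ty_name,
               (tty->ty_status & TTY_SECURE) ? "yes" : "no");
    }
    endttyent();
    return 0;
}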
https://backdrift.org/man/tru64/man3/getttynam_r.3.html
CC-MAIN-2017-43
refinedweb
525
57.87
Ok, there's a problem that can't really be overcome in the pydev debugger when using turbogears... (just doing "import turbogears" would already break it). Actually, no OPTIMIZED debugger would be able to work with that. I'm saying optimized because the implementation seems to take into account naive debuggers which would trace all the calls within all the frames (pydev only traces frames with breakpoints). The problem is: there's a module that turbogears uses (in my tests: DecoratorTools-1.4-py2.5.egg) which has a decorator named: decorate_assignment. This decorator uses the tracing facility that python provides for debuggers and removes the current debugger tracer function. It still tries to restore it if it was tracing the frame previously (but that would hardly ever happen in an optimized debugger). So, there's no way to actually fix that from pydev, but there are some options to make it work: 1. Using the pydev extensions remote debugger (but if that decorator is called after the remote debugger is set, the debugger would stop working again, so, this option would only useful if that decorator is not used later). 2. Removing that decorator from the places that use it in turbogears (the implications for that would have to be checked). 3. Hard-coding it to return the pydev tracing function. To do that, the file: DecoratorTools-1.4-py2.5.egg\peak\util\decorators.py must be changed so that the function "def decorate_assignment(callback, depth=2, frame=None):" does not use the call: "oldtrace = [frame.f_trace]" and uses the code below instead: oldtrace = None try: import pydevd debugger=pydevd.GetGlobalDebugger() if debugger is not None: oldtrace = [debugger.trace_dispatch] except ImportError: pass if oldtrace is None: oldtrace = [frame.f_trace] The 3rd option is probably the easier in the short run for those wanting to debug turbogears in pydev, but I think that the 2nd should be the one actually used (as a general rule, I believe that only debuggers should play with the tracing facility, because it tends to bee way to instrusive, and it's probably the most un-optimized way of doing something, as you're going to trace all that happens, which can lead to a large overhead). 23 comments: Ah, I was just wondering about that. Thanks! Have you talked with the author of Peak to get his suggestions? If this really is an oversight of the developer of that package , I'm sure he'd like to know. Hello I tested successfully the 3rd option with python-2.4, DecoratorTools-1.4 and pydev 1.3.5 But I have upgraded to python-2.5, DecoratorTools-1.5 and pydev 1.3.8 and now, my application is not working when running in the debugger. I get the pydev warning about usage of sys.settrace() and the application start, I can click some link, but I looks like when I'am submiting forms, I get error: TypeError: ("'NoneType' object is not callable", <bound method Root.quick_login of <myapps.controllers.Root object at 0xa92ebac>>) Any idea ? Hummm... no real idea... I haven't checked how DecoratorTools 1.5 is, but the 'patched code' may have to be changed somehow... also, don't you have a full stack trace for that exception? 
== eclipse DecoratorTools-1.5 == Their is no difference between DecoratorTools-1.4 and 1.5 except this > def enclosing_frame(frame=None, level=3): > """Get an enclosing frame that skips DecoratorTools callback code""" > frame = frame or sys._getframe(level) > while frame.f_globals.get('__name__')==__name__: frame = frame.f_back > return frame > 356c356 < frame = frame or sys._getframe(depth) --- > frame = enclosing_frame(frame, depth+1) Here are some message I have only in debuger mode 2007-08-17 21:53:10,761 turbogears.visit DEBUG Loading visit manager from plugin: sqlalchemy /kolab/lib/python/threading.py:697: RuntimeWarning: tp_compare didn't return -1 or -2 for exception return _active[_get_ident()] 2007-08-17 21:53:10,805 turbogears.visit INFO Visit filter initialised Exception exceptions.SystemError: 'error return without exception set' in <generator object at 0x9bb290c> ignored Exception exceptions.SystemError: 'error return without exception set' in <generator object at 0x9bb28ec> ignored ... Exception exceptions.SystemError: 'error return without exception set' in <generator object at 0x9bb2eac> ignored Exception exceptions.SystemError: 'error return without exception set' in <generator object at 0x9bb292c> ignored And here my stack trace URL: File '/kolab/lib/python/site-packages/Paste-1.4-py2.5.egg/paste/evalexception/middleware.py', line 308 in respond return_iter = list(app_iter) File '/kolab/lib/python/site-packages/CherryPy-2.2.1-py2.5.egg/cherrypy/_cpwsgi.py', line 75 in wsgiApp environ['wsgi.input']) File '/kolab/lib/python/site-packages/CherryPy-2.2.1-py2.5.egg/cherrypy/_cphttptools.py', line 72 in run self._run() File '/kolab/lib/python/site-packages/CherryPy-2.2.1-py2.5.egg/cherrypy/_cphttptools.py', line 105 in _run self.main() File '/kolab/lib/python/site-packages/CherryPy-2.2.1-py2.5.egg/cherrypy/_cphttptools.py', line 254 in main body = page_handler(*virtual_path, **self.params) File '<string>', line 3 in quick_login File '/kolab/lib/python/site-packages/TurboGears-1.0.3.2-py2.5.egg/turbogears/controllers.py', line 340 in expose _build_rules(func) File '/kolab/lib/python/site-packages/TurboGears-1.0.3.2-py2.5.egg/turbogears/controllers.py', line 246 in _build_rules for ruleinfo in func._ruleinfo: File '/kolab/lib/python/site-packages/TurboGears-1.0.3.2-py2.5.egg/turbogears/controllers.py', line 246 in _build_rules for ruleinfo in func._ruleinfo: TypeError: ("'NoneType' object is not callable", <bound method Root.quick_login of <apps.controllers.Root object at 0x9ab3ecc>>) The problem is not related to my new eclipse with pydev 1.3.8 because I tried with my old environment, python-2.4, TG 1.0.2 and got no error. Then the problem is python2.5 or TG 1.0.3. I believe the problem is related to python 2.5... I know the error below is a python 2.5 bug (and others may just follow it?) /kolab/lib/python/threading.py:697: RuntimeWarning: tp_compare didn't return -1 or -2 for exception I've added a bug about it to the python bugtracker, but got no responses about it... (maybe you can ask there...) Cheers, Fabio It didn't post the link correctly... trying to put it is hyperlink link to sf tracker Is this the correct link? it says "Only Group Members Can View Private ArtifactTypes." I am running into the exact same problem with Python 2.5 and pydev 1.3.8: C:\Python25\Lib\threading.py:698: RuntimeWarning: tp_compare didn't return -1 or -2 for exception return _active[_get_ident()] Strange... it seems that it's no longer available... I've found a link to that in this link. 
But I don't know what happened to that bug... (did the python guys change their bug-tracking?) Searching a little more... that bug can be found at: I get this error and I don't understand why. I'm not doing anything more complicated than plain python executed as CGI through apache. I wrote: sys.path.append("/usr/local/eclipse/plugins/org.python.pydev.debug_1.3.10/pysrc/") import pydevd; pydevd.settrace(stdoutToServer=True, stderrToServer=True, suspend=True) The thing breaks correctly, and I can step, but the Eclipse debugger doesn't actually catch exceptions in the code. Apache's stderr says: %stderr PYDEV DEBUGGER WARNING: sys.settrace() should not be used when the debugger is being used. This may cause the debugger to stop working correctly. If this is needed, please check: to see how to restore the debug tracing back correctly. Call Location: File "/usr/local/eclipse/plugins/org.python.pydev.debug_1.3.10/pysrc/pydevd.py", line 743, in settrace sys.settrace(debugger.trace_dispatch) Traceback (most recent call last): File "/usr/local/eclipse/plugins/org.python.pydev.debug_1.3.10/pysrc/pydevd_frame.py", line 117, in trace_dispatch self.doWaitSuspend(t, frame, event, arg) File "/usr/local/eclipse/plugins/org.python.pydev.debug_1.3.10/pysrc/pydevd_frame.py", line 26, in doWaitSuspend self.mainDebugger.doWaitSuspend(*args, **kwargs) File "/usr/local/eclipse/plugins/org.python.pydev.debug_1.3.10/pysrc/pydevd.py", line 510, in doWaitSuspend frame.f_back.f_trace = GetGlobalDebugger().trace_dispatch AttributeError: 'NoneType' object has no attribute 'f_trace' Traceback (most recent call last): File "/usr/home/marauder/work/gtex/bookings/site/admin/zones.py", line 8, in >module> import page, database File "/usr/home/marauder/work/gtex/bookings/site/admin/page.py", line 16, in >module> filemode='a') File "/usr/local/lib/python2.5/logging/__init__.py", line 1240, in basicConfig hdlr = FileHandler(filename, mode) File "/usr/local/lib/python2.5/logging/__init__.py", line 770, in __init__ stream = open(filename, mode) IOError: [Errno 2] No such file or directory: '/usr/home/marauder/work/gtex/bookings/logs/adminlog' I'm running into the same problem with Python 2.5 and getting errors like: "TypeError: 'NoneType' object is not callable" I've notice this occurs on functions in controllers.py with an empty @expose() A work-around which might not work for everyone is to replace it with @expose("json"). python 2.5.2 is out and I have tested I dont have any error messages anymore. I tried debugging with pydev and turbogears but It looks like pydev detect the sys.settrace(tracer) an refuse to go further. Is is possible to bypass this protection ? Actually, calling sys.settrace still works (it won't actually stop the execution when it happens... it just prints it to the screen). You can use pydevd_tracing.SetTrace to call the original tracing without having a warning message. But aside from that, you must still restore the tracing facility as specified in the post for the debugger to keep working (and it won't be able to debug code that happens when the tracing function is replaced). Cheers, Fabio I notice you have code that's checking sys.settrace and issuing a warning; but if settrace() is being passed one of *your* trace functions (i.e., it's restoring the tracer, not replacing it), then why not just call the old settrace() with the correct function? That's mostly a safeguard... I could make a check and don't warn if it's a debugger function, but I think that having a different interface is safer. 
Cheers, Fabio What I mean is, you could safely restore PyDev's global trace function if settrace() is called with None, or with a local trace function. In other words, tracing a DecoratorTools-using app (or anything else that uses a similar technique) would then work correctly, except for the warning(s). i, i tried to change the code as written in the blog here. I use an Mac OSX 10.4.11 and 2.5.2 and Turbogears 1.0.5 Decorator Tools 1.7 And debugging is not possible. I tried pydev Does anyone have an hint for me or same problem ? Or an link which i didnt found so far. Thanks in advance thomas Just in case this is still an issue, Python 2.6 provides a sys.gettrace() function which means that instead of unconditionally trying to reset the settrace function to the pydev debugger tracer all you need to do in decorator tools is change the line:- oldtrace = [frame.f_trace] to oldtrace = [sys.gettrace()] and debugging should work again. This works fine with the latest version of decorator tools and turbogears 1.1. The solution originally outlined in this post won't work with versions of turbogears greater than 1.0.4 because of the way decorator assignment is being used in the peak Rules package. Thanks for the tip... as it happens, I was able to implement a refactoring that might actually fix this for older Pythons as well (but definitely in 2.6). It's in SVN if you'd like to give it a try; I'd like some feedback before pushing it out as an official release. I am using python 2.6 with Turbogears 1.5 and PEAK Rules 0.5a1 and DecoratorTools 1.7. The solution suggested Phillip Frantz doesn't work. The warning PYDEV DEBUGGER WARNING persists. If I try the solution by Fabio Zadrozny, I get the following error: File "/usr/local/lib/python2.6/dist-packages/PEAK_Rules-0.5a1.dev_r2600-py2.6.egg/peak/rules/core.py", line 657, in class MethodList(Method): TypeError: 'NoneType' object is not callable Dolf: based on the comments above, the patch won't remove the warning but should make the breakpoints work. That said, I tried Philip's Patch and my test runs to completion - without stopping on any breakpoints. Fabio's patch from the post gives me the same None not callable problem. This is TurboGears 2 and DecoratorTools 1.8.
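For readers landing here on Python 2.6 or newer, the approach the later comments converge on (capture whatever trace function is active with sys.gettrace() and restore exactly that object, instead of relying on frame.f_trace) can be condensed into a small helper. This is an illustration of the pattern, not code from DecoratorTools or pydev:

import sys

def call_with_tracer_restored(func, *args, **kwargs):
    """Run func, then restore whatever trace function was active before,
    e.g. the pydev debugger's dispatcher."""
    old = sys.gettrace()          # None, or the debugger's trace function
    try:
        return func(*args, **kwargs)
    finally:
        sys.settrace(old)         # put back exactly what was there before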
http://pydev.blogspot.com/2007/06/why-cant-pydev-debugger-work-with.html?showComment=1183128960000
CC-MAIN-2013-48
refinedweb
2,142
52.05
PsychoModerators Content count11,936 Joined Last visited Days Won113 Community Reputation553 Excellent About Psycho - RankMove along, nothing to see here Profile Information - GenderNot Telling - LocationCanada Refactoring this code... Psycho replied to phreak3r's topic in PHP Coding HelpInstead of one long procedural body of code, create functions or classes for certain operations - especially if you need to do the same thing anywhere else in your application. That way you can create intuitive calls within your code that makes it much easier to read/manage. For example, you could create a function called usernameExists($uname) that returns a TRUE?FALSE based on whether the passed username already exists or not. Then also create a function to create a new user. Try to avoid "SELECT *" in your queries. Only select the fields you need. Otherwise, you can create conditions that leak data. In this case you are just checking if the record exists, so select the username or some other innocuous field. Alternatively, you could do a COUNT(*) query. Your process to see if a record exists with one query before running another query to create a record is problematic. It is possible for a "race condition" to occur which would allow a duplicate to be created. You should instead create the DB table to ensure that field is unique. Then just try to perform the insert. If it fails, check the error to see if it was due to a duplicate. Lastly, use comments! It may seem obvious when you are writing code what is happening, but when you have to come back later or if someone else has to work on the code it is invaluable. Here's a slight update to the code Barand posted with some modifications. //Function to see if a username exists function usernameExists($uname) { $username_query = $conn->prepare("SELECT username from profiles001 WHERE username=?"); $username_query->execute( [ $_POST['username'] ] ); return ($username_query->fetch() != false); } //Function to create a new user function createUser($userDataAry) { $sqlInsert = $conn->prepare("INSERT INTO profiles001 (username, password, email, c_status, doc, avatar, bio) VALUES (?, ?, ?, ?, NOW() , ? , ? )"); $sqlInsert->execute( $userDataAry ); } if ($_SERVER['REQUEST_METHOD'] == 'POST') { //Check if username already eists if (usernameExists($uname)) { header('Location: /soapbox/signup.php'); exit; } //Get data from the $_FILES array $file = $_FILES['file']; # other code here # if (empty($fileDestination)) $fileDestination = "assets/soap.jpg"; //Create the user $hashed_password = password_hash($_POST['password'], PASSWORD_DEFAULT); createUser( [ $username, $hashed_password, $email, $confirmation_status, $fileDestination, $bio ] ); } Ajax call without reloading ? Psycho replied to saynotojava's topic in PHP Coding HelpTo be clear, Requinix is being sarcastic. It is absolutely possible, you just aren't understanding what he was stating in the first response. Here is an analogy that might help. Think of a web page like a "printout" from a printer. So, let's say you create a document on your computer and print it out - then you change some content in the file on your computer. Would you expect the content on the already printed page to change? Of course not! Now, imagine that JavaScript can modify the computer document AND/OR modify the printed page like an eraser and pen. So, in your currently code your JavaScript is only changing the value of $_SESSION["so"] in the electronic document. You would need to refresh the page (i.e. 
create a new printout) OR modify the code to change the content in the existing output. Requinix already provided an example of how to do that, but let me explain in simple terms. 1) When creating the output for the page create an element on the page that can be referenced in the JavaScript <span id="value"><?php echo $_SESSION["so"]; ?></span> 2) Run your javascript to update the value to be changed on the page. In this case, you would have the JS use an AJAX call to a page that updates the session value and returns that value to the calling AJAX script in the original page. 3) The AJAX script then takes that return value and modifies the element created in step 1. In fact, the JQuery framework has a simple method for doing this without needing to use the full AJAX method. .load() rearrange dates in a DB table Psycho replied to ajoo's topic in PHP Coding HelpI'm not sure what the OP is really wanting here as the "requirements" are very confusing. Right now, the DB contains a timestamp for a login and a timestamp for a logout. If the intent is to replace those values with a human readable period of time (i.e. 5 hours, 23 minutes) then no changes to the DB should be made. That type of logic should be made in the output process. There are plenty of resources that will take two timestamps and produce a period of time output. Best & Most Secure way for Form Validation Psycho replied to Rommeo's topic in PHP Coding HelpWait. What is/are the error(s)? No way to tell if any function will or will not work to solve an error without knowing the specific error and what code is causing it. Use of conditions (Switch/IF) Psycho replied to gansai's topic in PHP Coding Help//Get the count of true conditions $conditionCount = 0; $conditionCount += (info_cond1()) ? 1 : 0; $conditionCount += (info_cond2()) ? 1 : 0; $conditionCount += (info_cond3()) ? 1 : 0; //If only one condition, set correct div if($conditionCount==1) { //Only one condition is true if(info_cond1()) { echo '<div>DIV A</div>'; } if(info_cond2()) { echo '<div>DIV B</div>'; } if(info_cond3()) { echo '<div>DIV C</div>'; } } elseif($conditionCount>1) { //Multiple conditions are true echo '<div>DIV D</div>'; } else { //No conditions are true echo '<div>DIV E</div>'; } timer not updating in database reliably Psycho replied to piano0011's topic in PHP Coding HelpA lot of "generalized" info and not enough specifics. 1. What type of field is time_achieved right now?I am guessing it is a "time" field type. My guess is that the value that is getting passed to update that field is not properly formatted for that field type - therefore it defaults to 00:00:00. But, I also see you have some javascript to reset the value of a field to 00:00:00 - perhaps there is something calling that function onsubmit of the form. 2. You ask about how to do a countdown. Storing a time value will not work for that. Instead you should be storing a timestamp of when the countdown should end. Then you can calculate how much time is left. If a user submits 00:20:00 (for a 20 minute timer), then your logic should set a timestamp 20 minutes in the future. Displaying null for unwanted data Psycho replied to mattix's topic in PHP Coding HelpI'm not following. You first state that you are using a regular expression to remove all but numbers from the values. Then you state that you want a value such as "Tnex>=40" to return Null or empty (as opposed to "40"). So, I'm not sure what you really want returned. 
Then you state that when you try to add the logic to your query you are not getting the rows that don't match the criteria. But, the query you provided doesn't have any where clause. If you had, I might have a better understanding of what you are really wanting from above. Lastly, the most important question is WHY are you doing this? If that data is not valid, then you should fix the data instead of creating complex process to handle the bad data on the output. I would update all the values in the database to just the number (or empty if you prefer) and implement logic when saving new values to ensure only valid data is entered. Although I would NOT advise the approach you are wanting, one of these will do what I think you are wanting Return the value if it is only a number - else return an empty string SELECT Date, Tnex, Mode, IF(Snex*1 = 0, "", Snex) as Snex FROM datatb Return the value with all non-numeric characters removed SELECT Date, Tnex, Mode, REGEXP_REPLACE(Snex, '[^0-9]+', '') as Snex FROM datatb how to reset a counter in a for loop Psycho replied to piano0011's topic in PHP Coding HelpYou need to use CODE tags - not QUOTE tags. E.g. [ CODE ] $foo = 'bar'; [ /CODE ] (or use the <> button in the editor) will be presented like this $foo = 'bar'; As to the issue with $resultCheck. That variable is defined once at the top of the script - it never changes. So the condition if ($resultCheck == $increment) { would only pass if the number of initial results is exactly the same as the $increment value. I have a suspicion that you think the value of $resultCheck will dynamically change in the loop as records are consumed. As Barand has stated, this code has a lot of flaws. Before writing any code, I suggest getting a piece of paper and create a rough outline of the logic flow instead of trying to determine the logic as you write the code. As you start writing out the code you might find flaws in the originally planned logic and that's OK. but, by having a rough plan before you start you will be better able to make changes as you go instead of coding yourself into a corner. Do you see my coding error? Psycho replied to phppup's topic in PHP Coding HelpWhere is $first_name defined? Do the two records being returned happen to have no first name? If so, it all makes sense to me. forms forms forms... Please help Psycho replied to Kylemcc's topic in PHP Coding HelpBarand gave a lot of good advice and there is a lot more that can/should be done. However, a forum post is not the right medium to give a tutorial on all the aspects in creating a good form. But, I will elaborate on one thing Barand stated: Right now, three is no "processing logic", there is just a single statement to INSERT the data into the DB. As Barand stated you need to be using "prepared statements" (here's a good tutorial). But, you need to validate the user input before you even attempt to INSERT the data, otherwise simple input mistakes will lead to corrupt data. For example, you need to check that required fields have an input and for any fields that do have an input you need to ensure it is a proper value for that field. Number fields should be numbers, dates should be dates, etc. Also, if a field has a properly formatted value, it may still not be valid. You wouldn't want to accept a date if the user accidentally entered a year of 2118, right? One way to help users enter data (especially when format is important, i.e. date) is to use the placeholder parameter for input fields. 
It puts an "example" value as a guid into the field until the user puts focus on the field <input type="text" name="date" placeholder="MM-DD-YYYY"> Also, using javascript plugins for things like date inputs is also a good idea. But, don't rely upon them for ensuring user input is correct. Get the firm working first with just HTML - then add any JavaScript to enhance the user experience. Here is a quick and dirty example of a form and how I tend to approach them. <?php //Variable to hold form error description $ Name: <input type="text" name="name" placeholder="" required <br> Date of Birth: <input type="text" name="dob" placeholder="mm-dd-yyyy" required <br> Weight in pounds: <input type="text" name="pounds" placeholder="No. of pounds" value="<?php echo htmlentities($pounds); ?>"> </form> </body> </html> PHP Multidimensional Arrays Psycho replied to Andy_K's topic in PHP Coding HelpAnd who said you wanted it on all of them? He was saying that when you build the array you should make the determination as to which elements to make active. Also, your statements are confusing. In the first post you state you want to search for an element by a string value, but in your last post you state you know the Article_ID. So, why would you be searching for a text value if you know the Article_ID? Not sure what to call this... (expertise needed) Psycho replied to stlyusss's topic in PHP Coding HelpTry this <?php //Read file into variable $file = "Sample.txt"; $text = file_get_contents($file); //Create array to hold results $results = array(); //Split the content based on *NEXT* $questions = preg_split("#\*NEXT\*[^\n]*#is", $text); //Process each question section foreach($questions as $question) { //Find the question text if(preg_match("#}(.*)#", $question, $question_match)) { //Exctract the question text $question_text = trim($question_match[1]); //Find the answers preg_match_all("#([ABCD]\)) ([^\n]*)#i", $question, $answers_match, PREG_PATTERN_ORDER); $answers = array_combine ( ['A','B','C','D'], array_map('trim', $answers_match[2])); //Find the correct ansewer preg_match("#Answer\: ([ABCD])#", $question, $correct_match); $correct = $correct_match[1]; //Put question parts into results $results[] = array( 'question' => $question_text, 'answers' => $answers, 'correct' => $correct ); } } //See results echo "<pre>" . print_r($results, true) . "</pre>"; JavaScript not working in IE11 Psycho replied to jarvis's topic in Javascript HelpTo my knowledge, <output> tags do not have a "value" attribute. In your form there are fields like this: <output class="loan-amount" name="principal" id="principal" onChange="calculate();"></output> Then, in the calculate function there is logic like this: var principal = document.getElementById("principal").value; Typically I see IE making 'assumptions' in how it interprets code, but this seems to be one instance where IE is doing the right thing and not assuming the field has a property which it does not. I would write all of that much differently, but to make it work correctly move the name/id parameters from the <output> tags to the corresponding <input> tags. That way the code is referencing the value of the input fields. I am noticing major load time in retrieving results from large tables Psycho replied to imgrooot's topic in PHP Coding HelpAnother reason is that the data should be "agnostic" to how it is being used. When working in PHP, and many languages, you can reference the data via the column names. 
However, what if there was a need to reference the data via numerical index (there's a reason why there are explicit options to only retrieve data via numerical indexes). You might be passing the results to another process that can't use the column names and will simply reference the data by the order it is presented. In that case, if "SELECT *" is used and field order is changed or fields are added, the functionality could break. By selecting just the fields that are needed in the order that they are needed, the functionality will not break (unless someone was to remove a field). truncate text Psycho replied to toolman's topic in PHP Coding HelpPlease read requinix's earlier response. If you are still getting the full output with your code above, that is because the function wpjm_the_job_description() is outputting the content to the page and not returning it. E.g. wpjm_the_job_description() { $value = "Get some text from some process or source to be displayed"; echo $value; //The function is directly outputting text to the page return; //Nothing is returned from the function for you to modify it } You will either need to see if there is a function to get the string rather than outputting it or you can try modifying that function directly.
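For completeness, one common workaround when a plugin function only echoes its output is to capture that output with PHP's output buffering and truncate the captured string. A sketch only (the length and wrapper name are illustrative, and it assumes wpjm_the_job_description() has no side effects beyond printing):

<?php
function truncated_job_description($limit = 200)
{
    ob_start();
    wpjm_the_job_description();      // echoes straight into the buffer
    $text = ob_get_clean();          // fetch the buffered output and discard the buffer

    if (mb_strlen($text) > $limit) {
        $text = mb_substr($text, 0, $limit) . '...';
    }
    return $text;
}

echo truncated_job_description(150);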
https://forums.phpfreaks.com/profile/32104-psycho/
CC-MAIN-2018-51
refinedweb
2,580
61.06
In this article, we'll cover the important things you can quickly learn to be informed and confident with using and conversing on React Router v4. What is React Router? React Router is a client-side router (CSR) for use with React projects (I know, duh right?). It provides routing, which is the fancy term for the rendering of different components depending on URL paths within a React web application. How does one install and use? Run the following command in your project to save react-router-dom as a project dependency: npm i -S react-router-dom Using the ES2015 syntax, you can import React Router into your React components using: import { BrowserRouter, Route, Link } from 'react-router-dom' Setting up your basic routes Link components can be thought of as your typical anchor links that, when clicked, redirect the user to the path specified in its to property. Route components can be thought of as the controller for the render. When you see these components, just think this is what they say: WHEN I see the URL as what is specified in my path property, THEN I will render the component listed in my component or render property. The basic example below is mostly lifted from ReactTraining’s basic example: Nesting Routes Nesting routes is exactly the same as creating high-level routes, except for defining a BrowserRouter component (as your nested routes are composed within the high-level BrowserRouter component anyway). It simply needs more Link and Route components. We can indefinitely nest further routes within any other nested routes. Passing URL parameters In the previous example, we had different components defined for each topic in nested routes. In the following example, we have a single Topic component that is rendered for the three different routes. The Topic component dynamically renders the topicId, which is passed as a property by the Route, and its value defined as part of the URL using the :. When a Route defines a component to render, it passes a match object to its props (along with location and history, but unimportant for now). This match object has a params object which contains any variables passed by Route defined using the : notation in the path URL (aka Route Parameter). In this way, we can cut down on separately creating components for each Link and instead make one re-usable component rendered with the information passed to it. Avoiding hard-coding nested links When creating nested links, our Link component still needs to refer to the entire URL path instead of the location it’s really concerned with. This means that nested links would have hard-coded locations to their parent links, which isn’t great when a name change occurs and a big renaming effort is required. Instead, using the match object passed by Route, we can dynamically refer to its location and use that to avoid hard-coding. For example: <Route path="/topics" component={Topics}/> passes a match object with a url property with the value "/topics". Topics, via its props, can reuse the match.url when defining its nested links. Avoiding ambiguous matches By default, when you specify Route components, each matching path will be rendered inclusively. Using URL parameters, this becomes problematic. As the parameter effectively acts as a wildcard (so any text is found as matching), you will find that when these are mixed with hard-coded routes, they both will display. Using exact won’t help either. React Router’s solution is the Switch component.
The Switch component will render the child Route component on the first matching path exclusively. This means that if all hard-coded routes are specified first, then these will be rendered only. Multiple Route Rendering There are times when you don’t want ambiguous matching of Route components, but there will be times when you do. Remembering that we can think of Route as a simple “WHEN I see this path, THEN render this component”, which means we can have multiple Route components matching a single page, but providing different content. In the below example, we use one component to pass the URL parameter to show the user their current path, and another component that renders with the content component. That means two different components are rendered for the one URL path. Rendering from Route directly A Route component can be passed a component to render if one is available. It can also render a component directly using the render property. <Route exact path={url} render={ () => <h3>Please select a topic</h3>} /> Passing properties to a component using Route URL passed parameters are fine, but what about passing in properties that require more data to components, such as objects? Route doesn’t allow you to append properties it doesn’t recognize. <Route exact path="/" component={Home} doSomething={() => "doSomething" } /> // doesn't work However what can be done to pass properties is to use Route render method. <Route exact path="/" render={(props) => <Home {...props} doSomething={() => "doSomething"} /> Passing properties to a component using Link You can also pass properties to a component via the Link component. Instead of passing in a String to the to property, we can pass in an object instead. On this object, we can declare the pathname representing to URL we want to navigate to. We also declare a state object that contains any custom properties we want. These properties are contained on the location object (in location.state). In the below example (contrived, I know…), we pass in a message property to display on a page. Redirection — Using Redirect You can use the Redirect component to redirect the users’ immediate URL to another. In the below example, we see a redirect for a user depending on whether the isAuthenticated state on component RedirectExample is true or false, appropriately redirecting if they’re logged in (to /home) or logged out (to /dashboard): Redirection — Using withRouter() Another way to redirect is by using the higher-order component withRouter. This allows you to pass the properties of Route ( match, location, and importantly in this example history) to components that aren’t rendered via the typical Route component. We do this by wrapping our exported component with the withRouter. Why is having history important? We can use history to force redirection by pushing a URL to the history object. There are caveats to this way of routing which I don’t detail (see the withRouter documentation). Also, the exported component must still be composed within another component that is rendering within BrowserRouter (which I don’t show in this example). Default Route Component There will be times when a Link may refer to a URL that has no corresponding <Route path='/something' component={Something}/> . Or a user will type an incorrect URL in the browser bar. When that happens, all we will have is a non-responsive page or link where nothing happens (which is not as bad as actually being navigated to a non-existent page). 
Most times we want to at least show the user we can’t find their content, maybe with a witty image like Github’s 404 page. In this case, you’ll want a default component, also known as a no-match or catch-all component. When a user clicks a link (or indeed types something incorrectly in the browser navigation bar), so long as there are no other matching components, we will be directed to the default component to be rendered. Note the use of Switch (ambiguous matching with URL parameters). As this Route has no path, it will be effectively always rendered. We require a Switch so that it only renders if it cannot find any other matching Route paths. Custom Links The Link component at its core renders an anchor element for whatever is passed as its to property. There are times when we would want customizations made to the Link component without having to have these customizations everywhere. React Router allows a means of doing this by creating a custom link (see the React Router training guide for more info — we use their example more or less below). For a custom link, one is essentially wrapping the existing Link component inside a custom component, and providing additional information, not unlike the Higher Order Component pattern. In our example, we only show the links for the pages that aren’t displayed. For our CustomLink component to have the knowledge of what page is currently displayed, we need to wrap the Link component in a Route component so that we can pass the match object that comes with React Router’s Route. We pass this wrapper around our Link component as a child to the Route component. Route, if you remember, simply checks the current path and, when we match, will render “something” (either defined by the component/render properties or as a child of Route — such as our Link component). We subvert this slightly with an equality check to say if we don’t find a match object (if our current path doesn’t match what’s defined in the Route declared in CustomLink), then render a Link, otherwise render nothing. Parsing Query Strings Query strings are parameters in the URL that look like variables and are prefixed by the question mark (such as ?name=Greg). I’m going to rip the band-aid off quick — React Router doesn’t come with a way to parse query strings. It does, however, locate query strings as a String in its passed properties, within location.search, wherein the above example would be defined as search: "?name=Greg". So what to do? There are a number of ways to solve this problem, including reinventing the wheel. I would like to anecdotally highlight the npm package query-string as a solution which I’ve used and which has become my de facto query string parser. Prompting users when transitioning React Router v4 comes with a Prompt component, which, as you can guess, displays a prompt to the user. This is useful as a UX feature for users when they are warned of potentially losing form data if they transition away from the current form page. The basic pattern in implementing a navigational prompt is to have a boolean state property which decides whether they should be prompted or not. This state is updated via the form based on whether there are values selected for any form field. Within the form, you render a Prompt component, passing in two properties: a when (which is your boolean state set earlier) and a message to display.
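That pattern might look roughly like the following sketch (this is an illustration rather than the author's original embedded example; the component and field names are placeholders, and the form must be rendered somewhere under your BrowserRouter for Prompt to work):

import React from "react";
import { Prompt } from "react-router-dom";

class ContactForm extends React.Component {
  constructor(props) {
    super(props);
    this.state = { isDirty: false };
  }

  render() {
    return (
      <form>
        <Prompt
          when={this.state.isDirty}
          message="You have unsaved changes. Are you sure you want to leave?"
        />
        <input
          placeholder="Your name"
          onChange={() => this.setState({ isDirty: true })}
        />
      </form>
    );
  }
}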
The sketch above follows the same pattern as ReactTraining’s prevent-transitions example. Summary In the article, you will have learned all the basics to use React Router in your web applications. You should even be able to converse with your many friends and family about the joys of React Routing. If you liked this article, please share your like with a friendly clap. If you didn’t like this article and would like to register your resentment, you can do so by giving a hateful clap. The opinions expressed in this publication are those of the author. They do not purport to reflect the opinions or views of any organization or business that the author may be connected to.
https://www.freecodecamp.org/news/bluffers-guide-to-react-router-v4-20f607a10478/
CC-MAIN-2021-31
refinedweb
1,885
59.33
1 October 2010

EUROPEAN CREDIT ALPHA
More answers than questions

Matthew Leeming +44 (0) 20 7773 9320 [email protected]
Zoso Davies +44 (0) 20 7773 5815 [email protected]
Arup Ghosh +44 (0) 20 7773 6275 [email protected]
Eugene Regis +44 (0) 20 7773 9169 [email protected]
Aziz Sunderji +44 (0) 20 7773 7881 [email protected]
Dominik Winnicki +44 (0) 20 3134 9716 [email protected]

This version corrects Figure 3, where the scale was incorrect on the left hand axis.

Strategic Market View: There and back again (page 4)
Driven by mixed signals from the economic and political front, credit spreads seesawed this week before finally ending up where they started. Risk aversion remains, but largely driven by macroeconomic uncertainties, while strong corporate fundamentals should provide spreads with a buffer if future growth stays anaemic. Sovereign volatility continues to drive valuation dislocations and we highlight a credit-equity normalisation trade on EDP. For investors worried about poor economic growth, we also recommend going long a basket of selected names with counter-cyclical performance while simultaneously shorting the index as a suitable trade for generating counter-cyclical alpha.

Distressed debt markets – time to grow (page 9)
We see the European distressed debt market as growing in size. This will come from weak borrowers who survived on forbearance measures and the low rate of Euribor hitting maturity and amortisation points, and European banks continuing with balance sheet shrinkage. Also, with the cost of bailing out Europe’s banking systems via bad banks increasingly interlinked to sovereign funding rates, there is further potential for distressed assets to come from both bad banks and distressed banks.

Credit at a glance (page 16)
Corporates generated just over 50bp of excess returns in September, led by financials and in particular the Tier 1 part of the capital structure. Insurance, which is more heavily weighted towards Tier 1 than banking, was the top performing sector this month – utilities underperformed. Indices were marginally tighter week on week, while investment grade cash was wider. Despite this, our measure of the cash-CDS basis was broadly unchanged as single-name contracts lagged the index tightening.
PLEASE SEE ANALYST CERTIFICATIONS AND IMPORTANT DISCLOSURES STARTING AFTER PAGE 22 Barclays Capital | European Credit Alpha CREDIT VIEWS ON A PAGE CREDIT STRATEGY CATEGORY High Grade THESIS Monetise steep skew, hedge against moderate widening Use bank Tier 1s to boost yield or beta Relative value in step-up bonds Relative value trades between corporates and sovereigns Cyclical hedge in CDS Normalisation trades TRADE IDEAS 1x2 payer spreads on Main to December Despite the rally that has reduced the pick-up to senior, we continue to believe there is value in European bank Tier 1 as a high-beta asset class Switch into Lafarge step-ups: LGFP €7.625% 2014 and LGFP €7.625% 2016s Short corporates trading tighter than their own sovereign (only in AAA sovereigns) High Yield Relative value across ratings buckets Relative value in senior vs subordinated cash Basket of cyclical shorts vs index: LMETEL (Ericsson), MICH, PUBFP, STM, VLOF, WKLNA, WPP Basket of cyclical longs vs index: BNFP (Danone), ROSW, EXHO, TSCOLN, ULVRLN Yield/credit curves steep at front end, buy short-dated credit Basis trades > -100bp We favour single-B-rated paper and expect it to compress towards BB On capital structures of performing names with secured and unsecured bonds, unsecured tiers look attractive: Buy Ardagh € ‘17, Ineos € ‘16, Europcar € ‘14 and Lecta € ‘14 against secured issues On capital structures of performing names with term loans and pari passu secured bonds, buy the secured bond: Buy Smurfit € ‘17/19 against term loans Loans or short-duration bonds trading sub-par, which we expect to be refinanced Short-duration paper on high-beta credits with strong liquidity Selected name-specific 5s10s DV01 neutral steepeners for borrowers we expect to refinance We favour bonds with cash yield above the current yield of the index Switch into bonds that are closer to maturity to reduce sensitivity to spread widening and curve steepening Switch into bonds with protective covenants, such as change-of-control (CoC) puts and/or step-up coupons Buy outright CDS protection on an LBO target; Buy outright CDS protection on a target and sell protection on a correlated index Implement a CDS steepener on a potential candidate Long equity call options for LBO candidates Trades on issuers we expect to refinance Returns are hard to find Event-driven trades: LBOs Cash CDS Equity options HIGH GRADE CREDIT RESEARCH SECTORS COMPANIES Autos Banks Consumer & Retail General Industrial Insurance Pharmaceuticals TMT OVERWEIGHT Banks, Consumer, Industrials UNDERWEIGHT Telecoms, Media, Technology, Utilities, Pharmaceuticals FAVOURED*: OVERWEIGHT – BONDS/SELL PROTECTION – CDS UNFAVOURED*: UNDERWEIGHT – BONDS/BUY PROTECTION – CDS BMW (CDS), Daimler (CDS) RBS (cash), Commerzbank (cash), UniCredit (cash) Accor (cash), Kingfisher (cash), Rentokil (cash), Metro, BAT, Imperial Tobacco, Tesco (CDS), PPR (CDS) BAA (cash), Finmeccanica, Alstom (CDS), CRH, Clariant (cash), Thyssenkrupp (CDS) Stalif (cash), Llydin (T1), Eureko (bonds), Munich Re (bonds), Zurich (bonds) Roche, Novartis (cash) BT Group, OTE, Telefonica, Lagardère, Swisscom (CDS), Telenor (CDS), Nokia (CDS) EDP, Enel, Gas Natural, Veolia Environnement, REN, Glencore, Iberdrola (CDS) BMW (cash), VW (cash), Volvo (cash), Michelin (cash) Allied Irish Banks (cash), BCP (cash), BES (cash), Dexia (cash), Monte dei Paschi (cash) Carrefour, Next (CDS), Diageo, Experian (CDS), Carlsberg (CDS) Metso, Akzo Nobel, Bayer, Clariant (CDS), Rolls Royce (CDS), Lafarge, Sanofi-Aventis, Holcim (cash) Aegon (CDS), 
Hannover Re (CDS), Generali (CDS), Unipol (CDS) AstraZeneca Deutsche Telekom, Vodafone, TeliaSonera, TKA, KPN, STM (CDS), FT, Ericsson, Pearson** (CDS), Portugal Telecom (CDS), Wolters Kluwer (CDS), Ericsson (CDS), WPP (CDS) United Utilities Plc, Suez Environnement, Elia, Verbund, Edison, Fortum (CDS), EnBW (CDS), Vattenfall (CDS) Utilities HIGH YIELD CREDIT RESEARCH COMPANIES Basic Industries General Industrial FAVOURED*: OVERWEIGHT – BONDS/SELL PROTECTION – CDS UNFAVOURED*: UNDERWEIGHT – BONDS/BUY PROTECTION – CDS Consumer & Retail TMT Ardagh Glass 2016/2017/2020, Lecta 2014 (sub + snr), M-Real 2013, Smurfit Kappa Group 2015/2017/2019 Evonik Degussa 13s, Evonik Industries 14s, Savcio, HeidelbergCement 2012/2014/2017/2018/2019, Valeo (cash), GKN (CDS) Pernod (EUR) Wind €11.0% 2015, KDG €+700 PIK 2014, Seat €8.0% 2014, Unity €8.125% 2017, UPC €8.0% 2016 Clondalkin Industries 2013/2014, Norske Skog 17s Lufthansa (cash), Air France (cash), Stora Enso, UPM Ono €8.0% 2014, €10.5% 2014, Unity €9.625% 2019, Virgin Media £7.0% 2014 Note: Recent changes where available are in bold text; *ratings below apply to bonds and CDS (where applicable) unless specified; **Barclays Capital is acting as financial advisor to Pearson PLC in its potential acquisition of Sistema Educacional Brasileiro's school learning systems business. Source: Barclays Capital 1 October 2010 2 Wholesale inventories GE: Trade balance UK: PPI Tue. of Mich. Car registrations Source: Bloomberg. 13 Oct Casino Q3 sales (after mkt) Thu. PPI UK: Construction PMI US: ISM non-Manufacturing EZ: Retail sales. 6 Oct Thu. 15 Oct US: CPI. Barclays Capital 1 October 2010 3 . Factory orders EZ: Sentix. Retail sales. 11 Oct Company Securitas Sodexo Ladbrokes Release/event 9M results FY results Interim trading update US: FOMC Minutes. EZ: ECB Monthly report Economic data Tue. 5 Oct Tesco Tui Travel Sainsbury H1 interim results Interim sales Q2 sales Wed. 12 Oct Wed. company reports. Small business optimism GE: CPI UK: Trade balance. Empire manufacturing. Jobless claims. 4 Oct Company Release/event Economic data US: Pending home sales. Confidence. Jobless claims EZ: ECB Rates decision GE: IP UK: NIESR GDP Estimate. Unemployment. RPI. Budget EZ: IP UK: Unemployment report US: PPI. Barclays Capital The week after Date Mon. Trade balance. Advanced retail sales. Business inventories EZ: CPI. 14 Oct Carrefour Diageo Roche SABMiller Suedzucker Syngenta Q3 sales Interim statement Q3 sales Q2 sales Q2 results Q3 sales Fri. PMI UK: Services PMI US: ADP employment report EZ: Q2 GDP (Final) GE: Factory orders US: Consumer credit. company reports.Barclays Capital | European Credit Alpha REPORTING CALENDAR Next Week Date Mon. 7 Oct M&S Q2 sales Fri. U. CPI US: MBA Mortgage applications. IP US: Non-farm payrolls. BoE rates decision. Trade balance. 8 Oct Source: Bloomberg. the 3m LTRO on Wednesday and the subsequent fine tuning operation saw much lower demand than the market had expected signalling an improved liquidity position for European banks.Barclays Capital | European Credit Alpha STRATEGIC MARKET OVERVIEW There and back again Arup Ghosh +44 (0) 20 7773 6275 [email protected] Aziz Sunderji +44 (0) 20 7773 7881 aziz.ghosh@barcap. and we highlight an attractive credit-equity normalisation trade on EDP. with the basis turning more positive for investment grade names.com Zoso Davies +44 (0) 20 7773 5815 zoso.winnicki@barcap. 
a fact that should provide spreads with a buffer if future growth remains anaemic.com Driven by mixed signals from the economic and political front credit spreads see-sawed this week before finally ending up where they started. Sovereign volatility continues to drive dislocations for corporate names. A mixture of news.sunderji@barcap. At the same time concerns remain around the banking sector in Ireland and fiscal conditions in Portugal.davies@barcap. the iTraxx main index ended the week back where it started it at 110bp. which have led to their sovereign CDS spreads widening over the past month as other peripheral spreads tightened (Figure 2). Barclays Capital 1 October 2010 4 . Having widened by 7bp on Tuesday. We also revisit our trade idea from last week to generate counter-cyclical alpha. Week in review Credit continued to trade rangebound this week. both good and bad from either side of the Atlantic has led to a lack of conviction and light trading volumes. We recommend going long a basket of selected names with counter-cyclical performance while simultaneously shorting the index as a suitable trade idea for investors worried about poor economic growth. The general macroeconomic picture seems to be improving. This is not reflected in current valuations. While risk aversion remains strong we believe it is driven more by macroeconomic uncertainties. as corporate fundamentals are the strongest they have been for a long time. where the CDS has been widening in tandem with its sovereign even though the stock has barely moved. Stronger peripheral sovereigns like Spain and Italy also saw good demand for their bonds in auctions. There has been a sharp divergence in credit views across Europe in the past few days. Figure 1: Peripheral European sovereigns have seen diverging performances over the last month… 150 100 50 0 -50 -100 -150 -200 Portugal Ireland SovX Spain Italy Greece While the aggregate sovereign index has stayed virtually unchanged. Barclays Capital The Irish finance minstry's plan for further capital injection in Anglo Irish bank was greeted by the market with guarded optimisim 6 23-Sep-10 8 10 Net 1 month spread change (bps) Source: Bloomberg. The upshot has been cash outperformance relative to CDS.com Matthew Leeming +44 (0) 20 7773 9320 matthew.com Dominik Winnicki +44 (0) 20 3134 9716 dominik. European peripherals have seen a sharp divergence in performance Figure 2: … while the sharpest changes recently have been in the Irish sovereign CDS curve 560 540 520 500 480 460 440 420 400 0 2 4 30-Sep-10 Source: Bloomberg. The GBP6. The plan reaffirms the government’s intention to make senior bondholders whole on their investment. significantly behind the USD92bn of dollardenominated unsecured debt priced in September. We based our universe on a subset of the names included in the Barclays Capital European Aggregate Industrials Index. Telefonica and RCI Banque pulled proposed deals after citing difficulties in pricing and reduced interest given the challenging market conditions.3bn is for Anglo Irish bank. Of this €4. This seems to suggest that with this fresh capital injection Ireland might be able to finally draw a line under the banking crisis that has been threatening to plunge the country into a double-dip recession. and take into account the changing cohort over time.25bn of investment grade debt priced in September makes it the busiest month of 2010 for sterling markets. France announced its 2011 budget stating it intends to cut its public deficit by 1. 
Adding to the mixed economic signals in Europe is the continuing slow burn of political uncertainty across both core and peripheral countries.EUR45bn in September. though consensus on such regulations might be hard to achieve. Wednesday saw trade unions coordinate protests across a dozen European countries. In Figure 3 we plot the time series of average net debt to EBITDA for European investment grade corporates going back to Q1 02. bringing the total EUR-denominated issuance to c. bringing net capital requirements for the bank to €29.Barclays Capital | European Credit Alpha On Thursday the Irish Central Bank unveiled a fresh recapitalisation plan for the banking system. over a longer horizon we expect spreads to start reflecting the underlying fundamentals of these corporate names.EUR7bn of debt this week.3bn from previous estimates of €25bn. Whither corporate spreads? Corporate spreads in Europe are currently being driven to some extent by the market’s risk aversion due to prevailing macroeconomic uncertainty.7% of GDP. Last week’s volatility and heightened uncertainty led to a couple of casualties on the corporate issuance front. and Irish sovereign CDS curves have started steepening in the front end. The market reaction to the announcement has been moderately positive. as well as a demonstration in Brussels. While we expect this to continue over the short to medium term. Overall primary markets priced c. but seeks to impose writedowns on subordinated bond holders as had been expected. while Italy announced it should be on track with its three-year deficit reduction programme. These included a day-long general strike in Spain protesting against labour reforms and austerity measures undertaken by the current government. These costs are expected to bring Ireland’s fiscal deficit to 32% of GDP. with additional costs expected of around €10bn in a base-case scenario and €15bn in a “stress scenario”. much higher than the 3% guideline for Eurozone countries. Also on Wednesday the European Commission proposed plans to impose fines on fiscally undisciplined member states. The Irish finance minster conceded in an interview that the large size of Anglo Irish as a proportion of the country’s balance sheet means it is “systemically important” and its failure cannot be contemplated. 1 October 2010 5 . CDS underperformance versus equity has been particularly strong in ELEPOR. to the extent that average leverage now is at one of the lowest levels since 2002. Source: Bloomberg. Driven by sovereign contagion these credits have been widening since early September. European corporates have sharply delevered since the beginning of the credit crisis. Judging by the two-month historical relationship between the CDS and the stock. While the CDS has been tracking Portugal sovereign CDS very closely on the way up in September. the stock has been trading quite firm and is now slightly above its September average (Figure 4). the very strong fundamentals should also buffer spreads from widening too much in the event growth remains anaemic for a while albeit presuming macro uncertainties do not intensify over the same period. 1 October 2010 6 . In fact.50bp tighter than the current market level. the equity-implied CDS spread is now c. Has ELEPOR gone too far? The concerns around Portugal’s fiscal conditions while driving up its sovereign spreads are also affecting ongoing deterioration in Portugal including names such as ELEPOR. however. Spreads haven’t quite moved in line with this. 
spreads should move tighter to reflect these improved fundamental levels. cash spreads do not seem to be reflecting this entirely yet 3 400 300 200 100 1 0 2 ND/EBITDA (LH axis) L-OAS (RH axis) Note: The leverage and spread values here are averages for the Barclays Capital European Aggregate Industrials Index. Barclays Capital As is evident. PORTEL and BESPL.Barclays Capital | European Credit Alpha Figure 3: While average leverage seems to be at one of its lowest levels since 2002. and we believe that as macro uncertainties fade and risk aversion retreats. but this contrasts with the relative resilience in stock prices of these companies. In particular.55 2. One advantage to being long defensive sectors versus the index is that investors are implicitly short both financials and peripherals. Cost of borrowing EDP stock is c.35bp. Retail and Food & Beverages – the counter-cyclicality of Retail being driven by non-cyclical distributors such as Tesco.45 2. in our view. as the increased cost of funding and deteriorating sentiment start to weigh on valuation.5 2.4 2.2 01 Jul 15 Jul 29 Jul 12 Aug Equity Source: Barclays Capital 280 230 180 PROFIT 200 150 26 Aug 09 Sep 23 Sep 130 2.35 2.6 Sept ember average Figure 5: P&L of the trade for data points over the recent past CDS spread (bp) 380 330 CDS spread (bp) 350 300 LOSS Breakeven 2.65 Stock price (€) July-August Source: Bloomberg.504. Hedging smarter redux We have previously suggested that investors concerned about poor economic growth could generate counter-cyclical alpha by shorting selected names within strongly pro-cyclical sectors (see European Credit Alpha.7 2. we highlight Healthcare. is to be long counter-cyclical sectors versus the index.5 250 2. the stock should eventually catch up. CDS would have more room for positive adjustment compared to the already strong stock.6 2. but a negative beta (Figure 6). Indeed. On the other hand. which we discuss here. A number of sectors have a strong correlation to GDP growth over the economic cycle. 1 October 2010 7 . Sainsbury’s and Carrefour. The choice of the notional sizes of the trade legs was based on last three months’ relationship between the CDS spread and the stock price.Barclays Capital | European Credit Alpha Figure 4: Sovereign weakness drives EDP wide but the stock price barely budged Stock pri ce (€) 2. 24 September 2010).3 2. if the weakness in Portugal sovereign CDS continues and ELEPOR keeps trading wide. their counter-cyclical performance is clear over several cycles (Figure 7 and Figure 8). An alternative approach.4 2. Barclays Capital CDS (rhs) September Current Given the unusual strength of credit-equity dislocation and the fact that we hold a positive fundamental view on the credit.6mn worth of ELEPOR stock Note: At the time of writing EDP 5y CDS was trading at 305bp bid and EDP stock price was at €2. If the sentiment around Portugal improves. we recommend the following normalisation trade idea: Trade idea: Sell €2mn protection on ELEPOR 5y CDS Sell short €0. 00 Danone 1 October 2010 0.5% 1. and Unilever.00 0.% -5% GDP -40% 0. Roche. and that trade relatively wide versus the index (Figure 9). Barclays Capital 8 . As constituents for our basket. Source: Bloomberg.0% 2. Barclays Capital Figure 7: Retail is a strongly counter-cyclical sector… Sector performance vs Index 25% 20% 15% 10% 5% 0% -5% -10% -15% Sep 06 Mar 07 Sep 07 Mar 08 Sep 08 Mar 09 Sep 09 Performance vs Index Source: Bloomberg.3 Source: Markit.3 -1.5% 4.0% 0. 
Barclays Capital GDP We suggest investors implement this view by selling protection on a basket of names from these sectors versus iTraxx Main – in a carry neutral format.7 0.5% GDP growth 3% 2% 1% 0% -1% -2% -3% -4. +/-2 quarters.4 0.9 Note: Regressions based on three-quarter performance of index versus SXXP against the percentage growth in GDP over the same three quarters.5% 2.Barclays Capital | European Credit Alpha Figure 6: Healthcare.8 0. Barclays Capital Figure 8: … as is Healthcare Sector performance vs Index 40% 30% 20% 10% 0% -10% -20% -30% GDP growth 4.0% 1. Tesco. 4% 3% 2% 1% 0% -1% -2% -3% -4% -5% Jan 07 Jul 07 Jan 08 Jul 08 Jan 09 Jul 09 Jan 10 Jul 10 0.6 yoy GDP growth.50 0.75 0.0% 3. Figure 9: The names we select are relatively far off their historical tights versus the index Ratio of CDS spread to iTraxx Main 1. Sodexo.5% 3.25 0. The period used are recessions as defined by the NBER. In particular we highlight Danone. Retail.5 Figure 10: This basket has outperformed during previous periods of poor growth Spread ratio of basket to 0.0% Jun 99 Dec 99 Jun 00 Dec 00 Jun 01 Dec 01 Jun 02 Performance vs Index Source: Bloomberg. and Food & Beverages outperform the index in a downturn Rsq of sector performance versus index to GDP during downturn Bond sector STOXX Index 2007-09 2001 1991 Average RSQ Average Beta Healthcare Retail Food & Beverage SXDP SXRP SX3P 52% 32% 45% 27% 68% 25% 71% 25% 50% 50% 32% -6.5 -6. Barclays Capital Roche Sodexo Tesco Unilever Basket Source: Markit. we prefer names our analysts hold a neutral or sell-protection view on. the institutions that hold them may have to crystallise losses. with many 2006-07 vintage LBOs also having fewer covenants to breach. We believe some defaults and restructurings that should have happened could yet still occur. So far. By contrast. High-yield and leveraged loans There is a potential catalyst for growth in the distressed debt market as borrowers begin to hit maturity/amortisation points. If investors want to enter a special situation. The 2004-07 boom in lending showed falling underwriting standards. banking syndicates may prefer to restructure credits for a higher recovery now rather than risk a lower one in the future. we expect these factors to change and the market to grow. they have to sign a confidentiality agreement to access private information and then learn about the situation before being able to build a position. This has reduced potential defaults. it remains to be seen if such measures continue. For example. European bankruptcy rules may change. which we would expect to increase the chance of future restructurings. European distressed debt is much more of a private market compared with the US. So far. Over time. Banking syndicates have also been generous in forbearance measures (covenant amendments and waivers). 1 October 2010 9 .regis@barcap. Sourcing distressed debt We see the shakeout of the European leveraged finance market and forced sale of banking assets as the key contributors to the growth in distressed debt.Barclays Capital | European Credit Alpha DISTRESSED DEBT MARKETS Time to grow Eugene Regis +44 (0) 20 7773 9169 eugene.com We expect the European distressed debt market to grow as the result of excessive corporate leverage from the past decade. RBS benefitting from the APS scheme and Spanish Cajas benefitting from the FROB recapitalisation scheme) to be run off gradually. While defaults have increased into 2009. 
the distressed asset class has been slow to evolve despite the economic downturn for several reasons: Some countries have created government sponsored frameworks to handle distressed assets in banks. we would argue that decisions taken by banks and investors suppressed the default rate. A functioning distressed market is needed to value securities potentially seen as stressed and restructure stressed corporates. This reduces potential recovery rates. European bankruptcy regimes tend to encourage liquidation of corporate rather than restructuring. For more distressed assets to be released to investors. taxpayers may be reluctant to underwrite bad banks at a time of fiscal austerity. the US market is more public with investors being able to obtain public information more easily and start building a position ahead of any restructuring. but if growth remains slow and corporates do not delever further. NAMA in Ireland) or to keep them on balance sheet (eg. This has allowed banks to either hive off bad loans into a government guaranteed “bad bank” (eg. but this should be balanced against regulatory capital being freed up as well as the prospect of achieving a higher price now and not being part of a restructuring process. the actual outflow of distressed assets from banks and non-bank investors has been relatively muted compared with the size of lending markets. 000 4. percentage of HY issuance 35% 30% 25% 20% 3. including private loan structures refinancing in the public bond markets.7bn of CCC product issued out of a €37bn total. While refinancing conditions were easy (eg. Barclays Capital Source: S&P LCD. the unwinds of TRS structures.4bn in an €18. with an increasing proportion of deals being struck at leverage above 5x from 2001 to 2007.000 5. Figure 14 shows the change in maturity profiles. Furthermore. market value CLOs and subsequent lack of new CLO issuance created difficulties for some borrowers.000 1. €mn (RHS) 6.000 2.000 15% 10% 5% 0% 97 98 99 00 01 02 03 04 05 06 07 08 09 10 Source: Barclays Capital 3.70% of 2010 high yield bond issuance is for refinancing purposes.0-5. Prior issuance resulted in the maturity profile seen in Figure 13.000 0 Figure 11 shows covenant deterioration amongst leveraged loan borrowers. 2005-06 vintage loan deals being refinancing in 2006-07).9x 6x and above CCC% CCC Issuance. For 2006. Loans HY Bonds 14 15 16 17 18 19 20 >20 10 11 12 13 14 15 16 17 >17 Source: S&P LCD.Barclays Capital | European Credit Alpha Figure 11: Distribution of deals by debt/EBITDA levels <3x 100% 90% 80% 70% 60% 50% 40% 30% 20% 10% 0% 97 98 99 00 01 02 03 04 05 06 07 08 091H10 Source: S&P LCD Figure 12: CCC volumes.0-3. the highest quality borrowers were able to refinance in the bond markets or even pursue IPOs. The extension in the loan space has been small relative to Figure 13: Leveraged loan/high yield bond maturities (€bn) Figure 14: Change in maturity profiles December 2008 to August 2010 (€bn) 14 12 10 8 6 4 2 0 -2 -4 -6 Loans Bonds 50 45 40 35 30 25 20 15 10 5 0 10 11 12 13 Institutional Lev. The falling proportion in 2006-07 should be taken in the context of record years for issuance 1 .9x 4. for loans.1bn total for the year. there was €5. c.0-4.9x 5. for high yield bonds. Barclays Capital 1 CCC issuance in 2005 was €5. 1 October 2010 10 . this maturity mountain was easy to manage. So far. However. CCC issuance also grew (Figure 12) as a proportion of the total. Similarly. The shutdown in the primary markets of 2008 halted this. 
leads to higher default count Many restructurings to avoid a bankruptcy filing or avoid eventual liquidation. are now higher in Europe than the US (Figure 16). This has led to a situation in which the volumes of amendments. Some credits have bought time due to forbearance measures granted by lending syndicates (Figure 15). US bankruptcy regimes US Bankruptcy regime Europe Single legal regime. By contrast. we would not expect banking syndicates to keep amending.18 months to agree to a restructuring after being unable to meet amortisation payments) means that some borrowers may not have enough time to survive ahead of amortisations/final maturities. Some equity/ management friendly – France. Some secured lender friendly – UK. Gala took c. Spain. The length of time it takes to enter negotiations (eg. which would lower recoveries. No restructuring on CDS in SNACs Easier to file and come through bankruptcy.Barclays Capital | European Credit Alpha bonds. though falling. 1 October 2010 11 . restructurings 60 50 40 30 20 10 0 2004 Source: S&P LCD Figure 16: Issuers seeking covenant relief amendments 70 60 50 40 30 20 10 US Europe Defaults Restructurings Amendments 2005 2006 2007 2008 2009 JanAug 10 0 1Q08 2Q08 3Q08 4Q08 1Q09 2Q09 3Q09 4Q09 1Q10 2Q10 Source: S&P LCD Forbearance measures were granted because projected recoveries on credits would have been very low during the recession. Figure 17: European vs. but it can be unfriendly to unsecured Ease of coming through bankruptcy Well rehearsed. given that banks are under pressure to hold onto more regulatory capital under Basel 3. UK – most favourable CDS trades with MMR under STEC Difficult to file and come through so constrains default rate count 2 Chapter 7 in the US is rarely used s it results in a liquidation and cessation of operations. the US has chapter 11 bankruptcy laws that keep a business as a going concern and arguably make the bankruptcy process faster than Europe 2 . “Going concern” driven regime Not easy. especially for names close to maturity/amortisation payments. hence. It is the poorer quality borrowers that have yet to extend their maturity profiles in the bond or loan markets who may face problems as they begin to hit maturity/amortisation deadlines. A summary of US and European bankruptcy rules is in Figure 17. European bankruptcy regulations make the default process more complex and generally results in a liquidation. they may be more incentivised to push names into restructuring and extract a higher recovery rather than seeing them default. Figure 15: Distressed credits – defaults vs. Chapter 11 leads to reorganisation under operation as a going concern No single bankruptcy provision. If companies do not turn around after receiving covenant amendments. Liquidation is frequent Out of court restructuring CDS consequences Default count consequences Source: Barclays Capital Rare as restructuring is done through a bankruptcy court under Chapter 11. Also. The current rate is still well below the average for 2005-07 when such deals actually priced. Anglo Irish) The bank thinks the ultimate recovery could be lower than the current value. leveraged loan issuance €bn 200 180 160 140 120 100 80 60 40 20 0 2006 0 2007 2008 2009 2010 2 1 5 4 3 Non-Institutional Euribor (RHS) Euribor Average by Year (RHS) Institutional ECB Repo (RHS) % 6 Source: Bloomberg.Barclays Capital | European Credit Alpha We also see increased Euribor rates as potentially prompting more restructurings. 
Most loan debt is held on the banking book and generally marked at par. however. Not all banks will want to sell given the capital hit they may have to take. we believe some companies that are still underperforming by then may not be able to refinance if Euribor increases. such as commercial property. Figure 18: European rate history vs. Despite this. 1 October 2010 12 . A recent example was Technicolor. The bigger opportunity will be non-levered assets sitting on bank books from sectors affected in the recession. Capital will come from a combination of new capital. importantly. Given that our rate strategists expect Euribor to increase to 2. given that they are floating rate assets. shedding assets. a senior loan is seen as intrinsically worth €70 and is still paying its coupon. which we discuss in the appendix.05% by Q4 11 and growth to be slow. By restructuring credits rather than liquidating them. Barclays Capital However. there are certain circumstances that could force a bank to sell: A bank itself becoming distressed (eg. We note. distressed investors can extract value from stressed assets that previous classes of disparate holders could not. the leveraged finance market – at about €500bn outstanding – is only a part of the future distressed debt market. that the 8-year implementation plan should mean fewer immediate forced sales of assets. If. for example. Distressed loans at European banks We see potentially increased amounts of distressed debt coming from the banks over the medium term. increased deposits and. thereby enabling more of them to be held to expected maturity if projected recovery prospects improve. The holders of such assets do not necessarily have the expertise to restructure them. The end result of Basel 3 is that bank assets will have to be better maturity matched with longer-term liabilities. reducing the need to fund them. The low rate of Euribor has helped keep some credits alive. a bank will not force a €30 loss by pushing for a restructuring – they would give a credit time to get a higher recovery or par. For example.€500bn of this is from peripherals including Greece and Spain. facing such rules as limits on CCC-rated positions and upcoming reinvestment period deadlines. For example. The size of the distressed market depends on the willingness of current asset holders to continue to crystalise losses now or later. Italy and Spain have €2.Barclays Capital | European Credit Alpha The reorganisation of the European banking sector is already resulting in asset sales for some banks across the EU. due to banks needing to delever and shed some assets ahead of Basel 3 implementation and bad banks potentially unwinding holdings of distressed loans. Existing CLOs will be hampered from investing in distressed. In the recent announcement. Conclusion A true market in European distressed securities will develop over time. existing CLOs will have a lot less flexibility as they are forced to pay down their liabilities from loan amortisations and recoveries. Such deteriorating asset quality could either lead to assets being forced out of banks without going to NAMA or to NAMA not being able to fund more impaired assets. The cost of bailing out the Irish banking system to the government has risen combined with continuing austerity measures. Conditional upon receiving EC approval of state aid. the haircuts offered by NAMA over the first two tranches have increased from 55% to c. in our opinion. Ireland. According to current plans. with c. 
Banks are trying varied solutions to dispose of distressed assets outside of traditional distressed debt investors. The larger proportion of distressed debt will be par bank loans. in the case of Anglo Irish Bank.7%. some banks are shrinking via asset sales over specified time frames. a level not seen since 1997. The government created the National Asset Management Agency (NAMA) to buy bank assets at a discount in exchange for government guaranteed bonds (which are eligible ECB collateral). The current yield on the Irish 10-year benchmark has risen to c. the haircut increased again to 67%. we expect German landesbanks to be broadly net sellers of assets because they used their guaranteed balance sheets to build up assets in the past decade – some of which ended up as distressed. The bad banks created to buy banking assets at a discount need capital. 1 October 2010 13 . France. A recent result of this is more active distressed trading in Irish credits such as Quinn Group. Once past reinvestment periods. NAMA is expected to buy €81bn in impaired Irish bank assets.62%. a recent CLO from ICG saw €1. The evolving textbook example is in Ireland following the end of the Celtic Tiger growth period. yet there is already evidence of continued deteriorating asset quality in the banking system since the bailout began. For example. We expect some poorly performing companies to be unable to refinance maturities and lending groups to be less willing to use forbearance measures. Germany. NAMA can manage the assets to maturity or gradually sell them down to thirdparty investors.5trn in maturities to fund through to 2015.6. Assets from “bad banks” The bailout of the banking system with public money could come under pressure. Public finances across most EU nations have come under pressure. The ability of countries to bail out their banking systems will link the creditworthiness of the banks to that of sovereigns. For example.4bn of loans of varying credit quality exit the books of RBS with the equity tranche taking up 41% of the total deal. Barclays Capital | European Credit Alpha Appendix: Thomson/Technicolor case study The Technicolor restructuring is an interesting case study in how distressed credits can restructure after insolvency. Debt and equity/ORA/Disposal notes valued at the same share as this class of debt in the old company. The restructuring saw debt and hybrid holders lose principal. agreed to a restructuring proposal with its creditors. TL E+500bp. Following this. A debt for equity swap.€0. with debt holders also getting extended and equity holders accepting dilution. In July 2009. Issued to creditors to be repaid from disposal proceeds and in shares of Thomson (300) options at prevailing market price.55% $/£/€ amortising (new PPN) Sr notes 9%/9. Barclays Capital 1 October 2010 14 . 1550 ORA (D/E swap) 2010/11 10% PIK Disposal note (10% PIK Dec 2010) Reinstated debt 2015 Note: * Restructuring plan used March 2009 FX rates.3bn in debt. Issued to credits as debt for equity swap maturing in December 2010 and December 2011 with holders option to extend by one (639) year. which differ from balance sheet numbers.05 2828 € Cash PF net debt Rights issue (D/E swap) Equity price was €0.35%/9.66 a share and fully underwritten by creditors via debt sell-off. Figure 19: Technicolor (old Thomson) restructuring Pre restructuring €mn Reinstated debt post restructuring €mn Syndicated facility 1739 Holders of the old syndicated facility received new loans and received equity/ORA Instruments/DPN rights. 
the company. which had been seen as in distress for some time. Debt and equity/ORA/Disposal notes valued at the same share as this class of debt in the old company. hybrid equity and equity. the subsequent issue of an ORA instrument that payable as equity in December 2010 and 2011 and the pledging of disposal proceeds as a disposal note wiped out c. Source: Company filings. The pre and post restructuring capital structure is displayed in Figure 19. Holders of the old PPNs received new PPN 1100 notes and equity/ORA Instruments/DPN rights. the new debt saw extended terms through to 2015-16 and the hybrid was stripped of its coupons after a payment of c.35%/9. Company can repay 23% in cash.55% $/£/€ bullet (new PPN) 643 194 Old private placement notes (PPN) 407 1550 (400) 1150 Total debt* Cash Hybrid Net debt Post restructuring Equity New term loans and senior notes are senior 2839 unsecured and pari passu Total reinstated debt (511) 500 Hybrid bought out at €0. 7% amortising 306 TL E+600bp 2016 bullet Sr notes 9%/9.05 to note holders (the hybrid is redeemed in full on any liquidation of the company and the payment avoided nuisance lawsuits).€1. It involved multiple classes of debt. (350) open to shareholders. Technicolour also pledged disposal proceeds from various asset sales that were repayable in equity/cash. Though interest cover has fallen to 3. the restructuring has resulted in a more sustainable capital structure.Barclays Capital | European Credit Alpha Debt recovery in this case (at time of restructuring) was c. reducing net debt to EBITDA to 2. this reflects the current higher cost of capital traded off for a longer life of debt.8x from 6. ORA and rights to disposal proceeds under the disposal note. Rather than push through a liquidation that may have destroyed value. down from 4x at end-2008. However. also giving it extra ability to refinance further on if the credit story continues to improve.55% for both the RCF and PPN holders (they were pari) before factoring in equity received under the debt for equity swap.4x. 1 October 2010 15 .3x over the same period. the original equity holders have been heavily diluted by the debt holders who received equity. The ultimate recovery for debtholders is also dependent on future equity and debt price direction in the restructured entity. 4 2.2 42.9 0.35 0. Index curves steepened. Insurance.27 0. but average by historical measures – and significantly below the USD100bn of new deals in the US market.5 1.5 2.5 5.5 2.4 2.02 0. while sterling cash underperformed and now looks cheap to USD-credit on a historical basis.4 3.5 2.5 1.9 2. our measure of the cash-CDS basis was broadly unchanged as single-name contracts lagged the index tightening.Barclays Capital | European Credit Alpha CREDIT AT A GLANCE Zoso Davies +44 (0) 20 7773 5815 zoso.14 1.EUR30bn of debt and non-financials c. Despite this.8 2.72 0.0 2.09 0.5 0.4 1. Indices were marginally tighter week on week. was the top performing sector this month – utilities underperformed.8 3.2 3.com Dominik Winnicki +44 (0) 20 3134 9716 dominik.5 -0.3 2.7 0.6 1.EUR5bn was hybrid debt.3 3.7 3.7 10.2 1.6 1.7 0.9 5.7 1. of which c.EUR20bn.3 1.19 0.30 0.88 0.49 0.0 1.9 51.0 4. Figure 20: Barclays Capital Euro Aggregate Corporate Index performance by sector Sector MTD excess returns (%) 29 Sept OAS(bp) 21 Aug OAS(bp) m/m spread change Excess return Excess return 3mth (%) 12mth (%) % of index Insurance Brokerage Banking Senior Lower Tier 2 Upper Tier 2 Tier 1 2.4 1. 
September issuance was the third highest monthly total this year.hagemans@barcap. led by financials – in particular the Tier 1 part of the capital structure.2 1. but only marginally.35 0.0 Transportation Capital Goods Basic Industry REITS Finance Companies Other Finance Technology Consumer Cyclical Natural Gas Consumer Non-Cyclical Other Utility Energy Communications Electric Utility Financials Corporates Source: Barclays Capital 1 October 2010 16 .08 0.77 0.16 0.6 6.82 0.51 0.00 -0. and slopes remain near the top of their historical [email protected] 0.34 0.9 1.8 12.4 [email protected] Corporates generated just over 50bp of excess returns in September.com Rob Hagemans +44 (0) 20 7773 6509 rob.5 0.0 1.6 4.3 1. Financials issued c.1 0.7 5.9 2.2 1.2 3.2 0.2 0. while investment grade cash was wider.0 1. High-yield cash continues to outperform.0 0.4 5.52 331 266 231 173 292 441 603 203 173 152 172 203 265 148 143 148 112 134 104 161 140 240 193 367 280 244 172 324 447 705 214 179 158 177 206 279 149 140 143 110 122 101 158 126 254 199 -36 -13 -13 1 -32 -6 -102 -11 -6 -6 -5 -3 -14 0 3 5 2 12 3 3 14 -14 -6 4.9 0.01 -0.13 0.9 0.2 1.8 3.1 0.7 2.2 8.4 100.7 1.98 5.77 0. This drove the skew more negative on Main and Crossover.9 8.5 2. which is more heavily weighted towards Tier 1 than banking.1 30.0 9. IG spreads widened while HY tightened w/w spread change. L-OAS 700 600 500 400 300 200 100 0 2007 EUR (155) GBP (199) USD (172) 2008 2009 2010 Source: Barclays Capital 17 . OAS (bp) 400 350 300 250 200 150 100 50 Sub. Sen. bp 30 25 20 15 10 5 0 -5 -10 -15 -20 Sen. HY. Financials (173) Non-Fins (144) Sub. historically BBB non-financials. bp 14 Note: Spread changes measured between Wednesdays.Barclays Capital | European Credit Alpha Figure 21: CDS indices tightened this week… w/w spread change (bp) 2 1 0 -1 -2 -3 -4 -5 -6 Main HiVol Crossover Sen. Source: Markit Figure 23: In cash. SovX. Fins (144) HiVol (175) Crossover (513) 2008 2009 2010 Note: Spread changes measured between Wednesdays. Fins Sub.Fin. Fins. Source: Barclays Capital Figure 25: USD cash outperformed this week w/w spread change. Source: Barclays Capital LT2 Tier 1 Financials Industrials Utilities Pan Eur HY Index HY 3% ex. Fins 12 10 8 6 4 2 0 EUR GBP USD 1 October 2010 Figure 24: HY spreads have outperformed year to date IG. HiVol (bp) 600 Main (111) 500 400 300 200 100 0 2007 Source: Markit Crossover (bp) 1200 1000 800 600 400 200 0 2011 SovX (156) Sen. OAS (bp) 1800 1600 1400 1200 1000 800 600 400 200 0 0 2007 2008 2009 2010 2011 Sen. Fins SovX Figure 22: … but spreads remain in a narrow range Main. Financials (377) Pan Eur HY Index (525) Source: Barclays Capital Note: Spread changes measured between Wednesdays. Fins Figure 26: Sterling credit looks cheap to USD. bp 100 50 0 -50 -100 -150 -200 Aug 09 48 9 4 -16 -27 Nov 09 Feb 10 May 10 Aug 10 High Yield € IG non-fins £ IG non-fins Source: Markit. Barclays Capital 1 October 2010 Aug Apr Oct Jul . Issuance MTD Dec May Nov Jan Sep Feb Jun Mar 2009 Financials Source: Dealogic. EUR bn 120 100 80 60 40 20 0 Sep* Dec May Nov Sep July Jan Jun Mar Aug Feb Apr Oct *. Barclays Capital € IG sen-fins € IG sub-fins Source: Markit. €bn 10 8 6 4 2 0 Sep* 18 *. 
Barclays Capital 2010 Monthly Issuance Figure 31: Implied vol took another leg lower this week Figure 32: Realised volatility continues to trend lower 140% 120% 100% 80% Main (43%) HiVol (52%) Crossover (50%) SenFin (57%) SovX (52%) Annualised 110% 100% 90% 80% 70% 60% 50% 40% 30% Apr 09 Jul 09 Oct 09 Realised Jan 10 Apr 10 Jul 10 Implied 43% 66% 60% 40% 20% Jan 09 Apr 09 Jul 09 Oct 09 Jan 10 Apr 10 Jul 10 Source: Markit. Barclays Capital 2010 Non-Financials Source: Dealogic.Barclays Capital | European Credit Alpha Figure 27: Basis was broadly unchanged this week w/w change in basis. Issuance MTD Figure 30: HY issuance picked up late in the month HY Issuance. Barclays Capital Figure 29: September has been the third busiest month. bp 10 8 6 4 2 0 -2 -4 -6 -8 € IG nonfins £ IG nonfins € IG subfins € IG senfins High Yield Figure 28: The sub-fins basis continues to normalise cash-CDS Basis. Barclays Capital Source: Markit. YTD IG Issuance. Barclays Capital Source: Markit.2 0.05 0.Barclays Capital | European Credit Alpha Figure 33: Curves steepened marginally this week w/w spread change.15 0. Agg. Corp OAS to SX5P Source: Markit.10 -0. cash index (OAS) 500 2000-2007 Cycle 2007-2009 Cycle 2009-Present Current Spread Figure 36: Cross-asset beta and correlation have stabilised 90 day Beta of w/w changes 0. Bloomberg.00 -0. bp 150 100 50 Main slope.4 -0. Corp OAS to SX5P Correlation. Barclays Capital Figure 37: Skew on Main turned negative this week… Skew (bp) 16 14 12 10 8 6 4 2 0 -2 -4 Apr 10 May 10 -3.05 -0. bp 30 20 10 0 -10 XO 3s5s (76) XO 5s10s (5) Main 3s5s (27) Main 5s10s (11) -20 -30 -40 -50 2011 2 0 -50 1 -100 -150 -200 0 Main 3s5s Main 5s10s XO 3s5s XO 5s10s -250 2006 2007 2008 2009 2010 Source: Markit. Barclays Capital Source: Markit. Agg. Barclays Capital Figure 35: Credit and equities remain range-bound Stoxx 50 5600 5100 4600 4100 3600 3100 2600 2100 1600 1100 600 0 100 200 300 400 Barclays Capital EUR Agg. Barclays Capital 25 day rolling average 1 October 2010 19 . Barclays Capital 25 day rolling average Skew of Crossover Source: Markit. BarCap Eur.6 -0. bp 3 Figure 34: Index 3s5s remain near historical highs Crossover slope.0 2004 2006 2008 2010 Beta.6 Jun 10 Jul 10 Aug 10 Sep 10 Skew of Main Source: Markit.0 -0.0 Jun 10 Jul 10 Aug 10 Sep 10 Figure 38: … but was broadly unchanged on Crossover Skew (bp) 50 40 30 20 10 0 -10 Apr 10 May 10 9.15 2002 90 day Correlation of w/w changes 0.2 -0.10 0. BarCap Eur.8 -1. Bloomberg. Barclays Capital | European Credit Alpha Returns Figure 39: Barclays Capital Euro Aggregate Corporate Index – performance by rating and maturity MTD excess returns (%) Rating 29 Sept OAS(bp) 21 Aug OAS(bp) m/m spread change Excess return 6mth (%) Excess return 12mth (%) % of index Aaa Aa A Baa Maturity 0.3 -0.4 3.0 7.6 10.1 20.8 3.7 17.5 49. 
Pan European 3% High Yield Index Total return (%) Jul Overall 3.4 6.3 25.3 -0.8 24.3 31.9 3.16 0.2 3.9 4.8 5.9 2.2 3.6 5.3 5.1 3.7 3.02 0.3 4.9 11.8 32.7 4.1 23.1 5.6 11.3 96 152 281 575 238 175 146 91 66 79 91 153 115 149 145 152 130 84 142 56 -65 -93 -159 -243 -100 -86 -126 -37 -64 -125 -138 -123 -109 -92 -81 -91 -92 -64 -73 11 12 16 -198 168 -125 -37 34 -18 27 -76 -71 -112 -32 36 -140 15 -21 -8 -69 130 57.4 21.2 24.0 3.8 1.8 -1.4 3.9 OAS/ MAD 134 Δ OAS (bp) Jul -84 2010 YTD -38 % Credit index 100.5 12.8 4.7 27.9 OAS (bp) 530 MAD (yrs) 3.9 24.72 -0.6 24.7 27.4 21.7 2.9 1.1 21.7 2.3 6.1 29.6 1.5 62.9 8.7 0.2 1.6 10.4 7.0 4.3 3.8 15.46 0.3 11.49 0.6 13.7 5.8 8.9 4.8 16.3 15.4 5.6 -0.3 6.0 1.6 1.7 11.0 9.6 -0.1 4.66 0.7 2.7 18.7 0.3 20.6 -0.1 0.9 8.95 100 137 188 267 99 136 196 275 1 1 -8 -8 0.2 388 620 902 2065 439 584 639 490 463 417 472 605 458 663 543 606 513 366 482 350 4.3 19.1 14.4 3.9 1-3y 3-5y 5-7y 7-10y 10y+ Source: Barclays Capital 0.6 5.50 0.2 2010 YTD 9.3 8.3 0.8 2.0 10.8 29.0 4.3 13.9 1.6 -2.7 1.5 51.7 24.1 1.9 4.4 8.2 26.1 8.4 25.9 0.4 24.0 Ba B Caa Ca-D 0-3y 3-4y 4-5y 5-6y 6y+ Energy Electric Technology Transportation Communications Basic Industry Other Industrial Capital Goods Consumer Non-Cyclical Consumer Cyclical Other utility Source: Barclays Capital 2.4 5.2 1 October 2010 20 .5 3.7 5.4 18.8 20.1 3.9 Figure 40: 2010 July total returns.2 22.1 25.01 185 195 199 214 158 196 200 202 215 157 -12 -4 -3 -1 1 0.0 6.3 LTM 23.5 15.0 18.1 2.7 2.9 2.6 3.8 10. 15 0.9 0.05 Skew 0.9 1.5 Jan Feb Mar Main Apr May Jun Jul Aug Sep EURUSD SX5E EUSW5 Volatility (%) 80 75 70 65 60 55 50 30/08/2010 IG 3M ATM (rhs) Note: 3m implied volatilities normalised to 4 January level.5 1. EUSW5 represents 3m implied volatility in 5y swaptions. Barclays Capital 1 October 2010 21 .7 0.Barclays Capital | European Credit Alpha Index volatility update Figure 41: Realised volatility keeps dropping Volatility (%) 160 140 120 100 80 60 40 20 0 Sep Oct Nov Dec Jan Feb Mar Apr May Jun Jul Aug 3M Implied Source: Barclays Capital Figure 42: Payer skew edged up as risk aversion increased Volatility 120% 100% 80% 60% 40% 20% 0% Sep 09 Nov 09 Jan 10 Mar 10 May 10 Jul 10 0.1 3M Realised 1M Realised 3M ATMf implied vol Payer skew (rhs) Source: Barclays Capital Receiver skew (rhs) Figure 43: Main volatility still in a downward trend Volatility (%) 90 85 80 75 70 65 60 55 50 30/06/2010 30/07/2010 Main 3M ATM Source: Barclays Capital Figure 44: Credit volatility falling versus other asset classes Normalized volatility 1.7 1.2 0. Source: Bloomberg.05 0 -0.1 0.3 1. com Neil Beddall Utilities +44 (0)20 7773 9879 neil.morton@barcap. 
European Fundamental Credit Research +44 (0)20 7773 9857 robert.com Zoso Davies Credit Strategy +44 (0)20 7773 5815 zoso.com Justin Ong Consumer/Retail +44 (0)20 3134 9687 justin.com Robert Jones Head.com Arup Ghosh Credit Strategy +44 (0)20 7773 6275 [email protected]@[email protected] Aziz Sunderji Credit Strategy +44 (0) 20 7773 7881 [email protected]@[email protected]@barcap.com Daniel Rekrut High Yield TMT +44 (0)20 7773 5980 daniel.com 1 October 2010 22 .Barclays Capital | European Credit Alpha EUROPEAN CREDIT RESEARCH ANALYSTS Matthew Leeming Credit Strategy +44 (0)20 7773 9320 matthew.com Sam Morton High Grade TMT +44 (0)20 7773 7844 [email protected]@ [email protected]@barcap.com Eugene Regis Credit Strategy +44 (0)20 7773 9169 eugene.com Darren Hook Industrials +44 (0)20 7773 8970 darren.com Dominik Winnicki Credit Strategy +44 (0)20 3134 9716 [email protected]@[email protected] Rob Hagemans Credit Strategy +44 (0)20 7773 6509 rob.com Brian Monteleone Insurance +44 (0)20 3134 9685 brian.com Jeroen Julius Banks +44 (0)20 3134 9642 jeroen. which may pose a conflict with the interests of investing customers. some or all of which may have changed since the publication of this document. or the Pan-European High Yield 3% Issuer Capped Credit Index excluding Financials. the Pan-European High Yield 3% Issuer Capped Credit Index excluding Financials. the firm makes no representation that it is accurate or complete. and the potential interest of the firms investing clients in research with respect to. Eugene Regis. Explanation of the High Yield Sector Weighting System Overweight: Expected six-month total return of the sector exceeds the six-month expected total return of the Barclays Capital U.S. The firm's proprietary trading accounts may have either a long and / or short position in such securities and / or derivative instruments.S.S. as applicable. High Yield 2% Issuer Capped Credit Index. NY 10019 or refer to. fundamental analysis. the profitability and revenues of the Fixed Income Division and the outstanding principal amount and trading value of. Market Weight: Expected six-month total return of the sector is in line with the six-month expected total return of the Barclays Capital U. Any reference to Barclays Capital includes its affiliates. and trade ideas. The firm's fixed income research analyst(s) receive compensation based on various factors including. As a result. methodologies. the overall performance of the firm (including the profitability of the investment banking department). Underweight: Expected six-month total return of the sector is below the six-month expected total return of the Barclays Capital U. 2% Issuer Capped High Yield Credit Index. bank loans. equity-linked analysis. depending on the company under analysis. the Pan-European High Yield 3% Issuer Capped Credit Index excluding Financials. Matthew Leeming. Aziz Sunderji and Dominik Winnicki. or the EM Asia USD High Yield Corporate Credit Index. New York. 2% Issuer Capped High Yield Credit Index. .pl or call 212-526-1072. Market Weight: The analyst expects the six-month total return of the rated debt security or instrument to be in line with the six-month expected total return of the Barclays Capital U. Where permitted and subject to appropriate information barrier restrictions.Analyst Certification(s) We. quantitative analysis. please send a written request to: Barclays Capital Research Compliance. 
Rating Suspended (RS): The rating has been suspended temporarily due to market events that make coverage impracticable or to comply with applicable regulations and/or firm policies in certain circumstances including where Barclays Capital is acting in an advisory capacity in a merger or strategic transaction involving the company. prices or spreads. To the extent that any historical pricing information was obtained from Barclays Capital trading desks. the Pan-European High Yield 3% Issuer Capped Credit Index excluding Financials. hereby certify (1) that the views expressed in this research report accurately reflect our personal views about any or all of the subject securities or issuers referred to in this research report and (2) no part of our compensation was. or otherwise. Important Disclosures For current important disclosures regarding companies that are the subject of this research report.barcap. Barclays Capital and/or an affiliate thereof (the "firm") regularly trades.S. 17th Floor. 2% Issuer Capped High Yield Credit Index. as applicable. as applicable. Barclays Capital produces a variety of research products including. 745 Seventh Avenue. may be applied to either some or all of the company's debt securities. High Yield 2% Issuer Capped Credit Index or the Pan-European High Yield 3% Issuer Capped Credit Index excluding Financials. the asset class covered by the analyst. Overweight: The analyst expects the six-month total return of the rated debt security or instrument to exceed the six-month expected total return of the Barclays Capital U. as applicable. Barclays Capital does and seeks to do business with companies covered in its research reports. investors should be aware that Barclays Capital may have a conflict of interest that could affect the objectivity of this report. High Yield 2% Issuer Capped Credit Index or the Pan-European High Yield 3% Issuer Capped Credit Index excluding Financials. All levels. Recommendations contained in one type of research product may differ from recommendations contained in other types of research products. prices and spreads are historical and do not represent current market levels. as applicable. but not limited to. generally deals as principal and generally provides liquidity (as market maker or otherwise) in the debt securities that are the subject of this research report (and related derivatives thereof). Zoso Davies. but not limited to. Not Rated (NR): An issuer which has not been assigned a formal rating. Explanation of the High Yield Research Rating System The High Yield Research team employs a relative return based rating system that.S.com/research/cgibin/all/disclosuresSearch. Underweight: The analyst expects the six-month total return of the rated debt security or instrument to be below the six-month expected total return of the Barclays Capital U. is or will be directly or indirectly related to the specific recommendations or views expressed in this research report. the firm's fixed income research analysts regularly interact with its trading desk personnel to determine current prices of fixed income securities. as applicable. or the EM Asia USD High Yield Corporate Credit Index.S. or other instruments. the quality of their work. the profitability of. Arup Ghosh. or the EM Asia USD High Yield Corporate Credit Index. whether as a result of differing time horizons. Please review the latest report on a company to ascertain the application of the rating system to that company. accounting. New York. tax. Any U.: 13/952/2008. 
Broker License #177-11850-100000. you should seek advice based on your particular circumstances from an independent tax advisor. Gauteng 2196. 1026167. Dealer License #177-11855-010000.S. It is directed at. Doha. which may differ substantially from those reflected. Hamdan Street. PO Box 2734. tax-related penalties. and/or one or more of its affiliates as provided below. Barclays Capital recommends that investors independently evaluate each issuer. Minato-ku.: 13/1844/2008. West Bay. is distributing this material in the United States and. Barclays Capital Japan Limited is a joint-stock company incorporated in Japan with registered office of 6-10-1 Roppongi. This material is distributed in Malaysia by Barclays Capital Markets Malaysia Sdn Bhd. and Business Customers as defined by the QFCRA. Office 1002. Barclays Bank PLC-DIFC Branch. Diplomatic Area. nor any of their respective officers. Registered Office: Al Jazira Towers. and such opinions were prepared independently of any other interests.finra. 0060) is regulated by the Dubai Financial Services Authority (DFSA). The securities discussed in this publication may not be suitable for all investors.. foreign exchange research reports are prepared and distributed by Barclays Bank PLC Tokyo Branch. Barclays Bank PLC in the Dubai International Financial Centre (Registered No. Absa Capital is an affiliate of Barclays Capital. in connection therewith accepts responsibility for its contents. This material is distributed in Canada by Barclays Capital Canada Inc.A. investment. Barclays Bank PLC. 10th Floor. a registered investment dealer and member of IIROC (www. Registered Office: Building No.S. and cannot be used. by you for the purpose of avoiding U. New York 10019. Non-U.ca). Johannesburg. The views in this publication are those of Barclays Capital and are subject to change. . is distributing this material in South Africa.org). 143. Related financial products or services are only available to Professional Clients as defined by the DFSA. at 745 Seventh Avenue.: 1986/004794/06). In Japan. Japan. US registered broker/dealer and member of FINRA (. 6. legal. persons who have professional experience in matters relating to investments. or employees accepts any liability whatsoever for any direct or consequential loss arising from any use of this publication or its contents. The value of and income from any investment may fluctuate from day to day as a result of changes in relevant economic markets (including changes in market liquidity).iiroc. The information in this publication is not intended to predict actual results. Registered office Al Faisaliah Tower | Level 18 | Riyadh 11311 | Kingdom of Saudi Arabia. No part of this publication may be reproduced in any manner without the prior written permission of Barclays Capital or any of its affiliates. Commercial Registration Number: 1010283024. This information is distributed in Dubai. Burj Dubai Business Hub. 1st Tverskaya-Yamskaya str. may only undertake the financial services activities that fall within the scope of its existing DFSA licence. Dubai City) and Abu Dhabi (Licence No. partners. 15 Alice Lane. It is a subsidiary of Barclays Bank PLC and a registered financial instruments firm regulated by the Financial Services Agency of Japan. This communication is being made available in the UK and Europe primarily to persons who are investment professionals as that term is defined in Article 19 of the Financial Services and Markets Act 2000 (Financial Promotion Order) 2005. 
IRS Circular 230 Prepared Materials Disclaimer: Barclays Capital and its affiliates do not provide tax advice and nothing contained herein should be construed to be tax advice. Barclays Bank PLC in the Qatar Financial Centre (Registered No. Sheikh Zayed Road. It is not the intention of the Publication to be used or deemed as recommendation. QFC Tower. This publication is provided to you for information purposes only. Hong Kong Branch is distributing this material in Hong Kong as an authorised institution regulated by the Hong Kong Monetary Authority. PO Box 15891. Cheung Kong Center. London. Registered Office: 41/F. Barclays Bank PLC in the UAE is regulated by the Central Bank of the UAE and is licensed to conduct business activities as a branch of a commercial bank incorporated outside the UAE in Dubai (Licence No.S. option or advice for any action (s) that may take place in future. 00018) is authorised by the Qatar Financial Centre Regulatory Authority (QFCRA). Barclays Saudi Arabia is a Closed Joint Stock Company. and therefore should only be relied upon by. Barclays Bank PLC-QFC Branch may only undertake the regulated activities that fall within the scope of its existing QFCRA licence. This publication is not. in the U.S. 37 of 2002. Additional information regarding this publication will be furnished upon request. the UAE and Qatar by Barclays Bank PLC. This material is distributed in Russia by Barclays Capital. nor any affiliate. or any other financial. Registered Number: Kanto Zaimukyokucho (kinsho) No. an authorised financial services provider (Registration No. Authorised and regulated by the Capital Market Authority. affiliated company of Barclays Bank PLC. advice as defined and/or contemplated in the (South African) Financial Advisory and Intermediary Services Act. trading. registered and regulated in Russia by the FSFM. Past performance is not necessarily indicative of future results. including those of Barclays Capital and/or its affiliates. The investments to which it relates are available only to such persons and will be entered into only with such persons. Other research reports are distributed to institutional investors in Japan by Barclays Capital Japan Limited. Barclays Capital Inc. 2 Queen's Road Central. © Copyright Barclays Bank PLC (2010). actuarial or other professional advice or service whatsoever. Tokyo 106-6131. Registered address in Russia: 125047 Moscow. This material is distributed in Saudi Arabia by Barclays Saudi Arabia ('BSA'). Barclays Bank PLC is registered in England No. Absa Bank Limited is regulated by the South African Reserve Bank. This material is distributed in Brazil by Banco Barclays S. Absa Capital. security or instrument discussed in this publication and consult any independent advisors they believe necessary. Abu Dhabi). but Barclays Capital does not represent or warrant that it is accurate or complete. the information contained in this publication has been obtained from sources that Barclays Capital believes to be reliable. EU14878 E14 5HP. persons should contact and execute transactions through a Barclays Bank PLC branch or affiliate in their home jurisdiction unless local regulations permit otherwise. Barclays Capital is authorized and regulated by the Financial Services Authority ('FSA') and member of the London Stock Exchange. Principal place of business in Qatar: Qatar Financial Centre. 
Any South African person or entity wishing to effect a transaction in any security discussed herein should do so only by contacting a representative of Absa Capital in South Africa. and (ii) was written to support the promotion or marketing of the transactions or other matters addressed herein. tax matters contained herein (including any attachments) (i) is not intended or written to be used. Other than disclosures relating to Barclays Capital. Hong Kong. Accordingly. retirement. directors. (CMA License No.This publication has been prepared by Barclays Capital. nor is it intended to be. Barclays Bank PLC Frankfurt Branch is distributing this material in Germany under the supervision of Bundesanstalt fuer Finanzdienstleistungsaufsicht (BaFin). Neither Barclays Capital. The analyst recommendations in this report reflect solely and exclusively those of the author(s).. person wishing to effect a transaction in any security discussed herein should do so only by contacting a representative of Barclays Capital Inc. Registered office 1 Churchill Place. All rights reserved. Prices shown in this publication are indicative and Barclays Capital is not offering to buy or sell or soliciting offers to buy or sell any financial instrument. and Barclays Capital has no obligation to update its opinions or the information in this publication. the Investment Banking Division of Absa Bank Limited. Please be advised that any discussion of U. 21. Sandton. 09141-37). Subject to the conditions of this publication as set out above. the investment banking division of Barclays Bank PLC. Qatar.
https://www.scribd.com/document/45657141/BarCap-on-Distressed-Debt-Markets
CC-MAIN-2017-13
refinedweb
11,172
56.55
How to Make a Waiting Game Like Farmville with SpriteKit and Swift

In this SpriteKit tutorial, you'll learn how to make your very own waiting game — just like Farmville — with SpriteKit and Swift.

Popular "waiting games" like Clash of Clans and Farmville are played in short, frequent bursts, punctuated by periods of waiting — plant a crop in seconds, then check back at a later time to reap what you've sown. While every game is different, most are built around the following basic concepts:

- State machines
- Timed events
- Local notifications

With this SpriteKit tutorial, you'll dive right into these concepts, as you continue to build out Kookie Kiosk – a simple waiting game where players vie for riches by buying and selling tasty treats. Yum!

Note: This tutorial assumes you have working knowledge of SpriteKit and Swift. If you're new to SpriteKit, check out our SpriteKit Swift Tutorial for Beginners or our full book, 2D Apple Games by Tutorials. For an introduction to Swift, check out this beginner Swift tutorial here.

Getting Started

Download the template project, then open the KookieKiosk-Swift.xcproject project file with Xcode. Build and run the app in the iPad Air 2 simulator:

Don't worry — you'll soon clean shop. Take a quick look at the files in this project:

- GameScene.swift: Contains the core logic for the game. It maintains a list of all items in the kiosk and how much money you have. At startup, all on-screen elements are added to GameScene. This class also loads and saves a .plist file containing important data about the items available in the kiosk. Finally, but most importantly, it responds to changes in the game state.
- StockItem.swift: Represents a single item in the kiosk. It stores important properties of the item like type, flavor and amount. There's also a few constants like the maximum amount the player can have of an item, the position on the screen, prices for restocking, selling the item and even the speed at which the item is stocked and sold. The class contains two helper methods which draw the price label and the timer while the item is stocking.
- Customer.swift: Draws a customer on-screen and stores the item the customer wants to buy.
- GameDelegate.swift: Defines methods that StockItem.swift will call to perform game logic.
- Constants.swift: The enum ZPosition ensures game layers are drawn in the correct order. TimeScale controls game speed.
- AppDelegate.swift: The only change to the default AppDelegate is a preload of two sound files.
- GameViewController.swift: Loads the initial scene.

Implementing the State Machine

Kookie Kiosk players earn money by selling cookies and shakes. But they can't sell something they don't own (out-of-cookies exception), nor restock a loaded platter (cookie-overflow exception). Keep things running smoothly with a state machine to define states and transitions between those states. Kookie Kiosk's state machine can be represented by the following diagram:

Each item can be in one of four states:

- Empty: This item is out-of-stock. Players can buy more by tapping the item.
- Stocking: Once the player buys more, the item will begin stocking.
- Stocked: Ready to sell.
- Selling: Once stocked, the player taps an item to start selling. When an item is sold out, it returns to Empty.

Here are images that correspond to each state:

Enough background — it's time to code. Add the following to the bottom of Constants.swift:

This defines an enum with a value for each of the four states.
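For reference, a minimal sketch of such an enum follows. The exact case names used in the project are not reproduced here, so treat them as assumptions; what matters is that the enum is backed by Int raw values, because the state is written to and read back from gamedata.plist as a number.

// Constants.swift: possible states of a stock item.
// Int raw values let a state be stored in gamedata.plist and restored
// later with State(rawValue:).
enum State: Int {
  case empty = 0
  case stocking
  case stocked
  case selling
}

Because the raw values start at 0 and count up, state.rawValue round-trips cleanly through the property list.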
Next, add the following property at the end of the list of properties in StockItem.swift: And add the following lines to init(stockItemData:stockItemConfiguration:gameDelegate:) just before super.init():

In the code above you retrieve the state from the dictionary stockItemData, which contains all information about the stock item loaded from gamedata.plist shown below:

In the code above, you retrieve the value from stockItemData stored under the key "state" and then cast it to an Int. Then, you map the value of the Int to the corresponding value of the enum State and assign the result to the state.

Now that you can load the state of the item, make sure the state can be saved. Add the following line to data() in StockItem.swift, right before the return statement: This line sets the value for the key "state" to the raw Int value of the state for the stock item.

That takes care of loading and storing the states. Next, you'll add code for displaying the state changes.

Cleaning up the Interface

Add the following method to StockItem.swift: This method contains a switch statement that distinguishes between the four states of your stock item. For each state, it sets isHidden appropriately for the stocking timer, sell button and price tag. For example, the price tag is visible in the empty state; you hide the stocking timer and the sell button. The other states follow the same logic.

Add the following line to the end of init(stockItemData:stockItemConfiguration:gameDelegate:): This will initialize the stock item to the state loaded from gamedata.plist.

Switching States

Add the following method to StockItem.swift: This method operates on the two states that allow user interaction: empty and stocked. When empty, you first attempt to update the player's money through gameDelegate. If the player has enough money to make the purchase, you then call switchTo(state:) to change the item's state to stocking. If the player is short on funds, you let the player know by playing a sound effect and shaking the price tag. To handle stocked, you simply call switchTo(state:) with the selling state. There are no additional conditions that need to be met for this transition, and this puts the item in a state where it will update over time.

Updating States Over Time

To update an item over time, you'll first need to know the last time the state changed to calculate how much time has passed and how far along the stocking or selling process should be. Add the following property to StockItem.swift, right after the state property: You'll use CFAbsoluteTime to refer to a specific point in time. Even if the player restarts the game you still need to know exactly when that event happened in order to update stock properly.

Add the following line to init(stockItemData:stockItemConfiguration:gameDelegate:) just before super.init(), to load the time of the last state change: And add the following line to data(), right before the return statement: This line adds an entry for the last state-switch time to the data dictionary stored in gamedata.plist.

Now you need to make sure that lastStateSwitchTime is assigned the proper value while the game is running. Add the following line of code to the beginning of switchTo(state:): This ensures that you've actually changed states. If so, then update lastStateSwitchTime to the current time.
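Taken together, the persistence and bookkeeping code described above might look roughly like this; the dictionary casts and default values are assumptions, since only the key names and property names are given in the text:

// StockItem.swift (sketch of the pieces described above)
var state: State
var lastStateSwitchTime: CFAbsoluteTime

// In init(stockItemData:stockItemConfiguration:gameDelegate:), before super.init():
state = State(rawValue: stockItemData["state"] as? Int ?? 0) ?? .empty
lastStateSwitchTime = stockItemData["lastStateSwitchTime"] as? CFAbsoluteTime ?? CFAbsoluteTimeGetCurrent()

// In data(), right before the return statement:
data["state"] = state.rawValue as AnyObject
data["lastStateSwitchTime"] = lastStateSwitchTime as AnyObject

// At the beginning of switchTo(state:):
if self.state != state {
  lastStateSwitchTime = CFAbsoluteTimeGetCurrent()
}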
You can always get the current time using the ever-helpful CFAbsoluteTimeGetCurrent().

Stocking Your Items

You can use the absolute time of the last state-switch to show some progress indicators to your player. Start by updating the countdown that shows the player how long they need to wait for a purchased item to complete stocking.

Add the following method to StockItem.swift: In this method, you set the text of the stockingTimer to the time remaining until stocking is complete. To get this value, you first calculate the amount of time it takes to fully stock the item. You do so by multiplying stockingSpeed and the maximal amount of the stock item and then cast it to CFTimeInterval. Next, you store the current time in a temporary variable to calculate how much time has passed since the last state change. The time to restock the item is now simply the total time minus the time that has passed to this point: Finally, you set the text to the remaining time, so the user can see when the item will be fully stocked. Since you only want to display whole seconds to your player you use the format specifier %.0f, which tells Swift to display a float variable with zero digits after the decimal.

Add the following method to StockItem.swift to update the display of the item during stocking and selling: First, calculate how much time has passed since the last state-switch. If the item's current state is stocking, you call the helper method updateStockingTimerText(). Next, you update the item amount which is simply the time elapsed divided by the stocking speed. Of course, the player can never stock more items than maxAmount, so you use min to limit the amount to maxAmount. Finally, you check whether the new amount is equal to maxAmount. If so, then change the state to stocked.

The only thing left to do is call update() for every stock item. Add the following method override in GameScene.swift at the bottom of GameScene class as follows:

Build and run your project. Tap on a stock item and you'll see the timer count down to zero. Then the item switches to the stocked state. That coin in front of the cookies indicates that they are ready to be sold.

Selling Items

As soon as an item is fully stocked, players can start selling it. Add the following code to update() in StockItem.swift, inside the case statement right before the default case: First, you store the current amount of the item in previousAmount. You then calculate the new amount by subtracting the quotient of timePassed and sellingSpeed from maxAmount. Again, you need to limit the number of items that can be sold to maxAmount. Now, the number of items sold is simply the difference between the previous amount and the new amount. In order to limit the number of calls to progressBar and gameDelegate, you check whether at least one item has been sold since the last call to update. If so, notify gameDelegate about the change in the player's funds, then set the progress bar to the value of the amount sold divided by the maximum amount available. This change in money will always be positive, so you can ignore the result of updateMoney(by:) here. Finally, you check whether the stock item sold out by comparing the amount remaining to 0. When the item is sold out, set the state back to empty.

Your state machine is now complete. Build, run and buy some cookies! Click on the coin to start selling. You'll see your cookies sell over time, fattening your wallet:

w00t I sold a cookie — I'm rich!
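A sketch of how those two update paths could fit together; stockingSpeed, sellingSpeed, maxAmount, sellPrice, the progressBar call and the stockItems array in GameScene are assumed names rather than confirmed signatures from the tutorial:

// StockItem.swift (sketch)
func update() {
  let timePassed = CFAbsoluteTimeGetCurrent() - lastStateSwitchTime
  switch state {
  case .stocking:
    updateStockingTimerText()
    // Items stocked so far: elapsed time divided by the per-item stocking speed, capped at maxAmount.
    amount = min(Int(timePassed / stockingSpeed), maxAmount)
    if amount == maxAmount {
      switchTo(state: .stocked)
    }
  case .selling:
    let previousAmount = amount
    // Start from a full platter and subtract what has been sold so far.
    amount = max(maxAmount - Int(timePassed / sellingSpeed), 0)
    let amountSold = previousAmount - amount
    if amountSold >= 1 {
      // A positive change in funds always succeeds, so the return value can be ignored.
      _ = gameDelegate.updateMoney(by: amountSold * sellPrice)
      progressBar.setProgress(percentage: Float(amountSold) / Float(maxAmount))
      if amount == 0 {
        switchTo(state: .empty)
      }
    }
  default:
    break
  }
}

// GameScene.swift (sketch, assuming a stockItems array)
override func update(_ currentTime: TimeInterval) {
  for stockItem in stockItems {
    stockItem.update()
  }
}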
Introducing Customers

Many waiting games have events that trigger at random points. In Tiny Towers, specialists make the occasional appearance to dramatically boost progress. In Kookie Kiosk, your player will have to serve demanding customers who randomly appear.

Add the following property to GameScene.swift, right below the moneyLabel property: This stores the current customer using the Customer class already implemented in the starter project. For the moment you'll only serve one customer at a time.

Now you need to handle the timing of your customers' arrivals. Add the following properties to GameScene.swift, right below the customer property: Then add the following method: And call it at the bottom of didMove(to:): The time the last customer appeared will initially be the startup time of the app since no customers have appeared yet. You also store a time interval that indicates how many seconds it will take for the next customer to appear. For this, you use a random value between 15 and 30 seconds. You then multiply this interval by TimeScale so you can control the rate at which customers appear.

Add the following code to the end of update() in GameScene.swift: This is a lot of code, but the logic is straightforward:

- First check how much time has passed since the last customer appeared. If it's greater than the generated time interval, it's time to spawn a new customer.
- Customers' wishes are limited to the types and flavors of items currently in-stock and not sold out. Add all items that match these criteria to the list of potential wishes.
- Select a random index from the list of potential wishes, then create a new customer that wishes for the type and flavor of the randomly selected item.
- Finally, make the customer appear from the right border of the screen. Using a simple SKAction you move it from the outside of the screen just until it's entirely on screen.

Build and run your app. When you have items available, a customer will appear randomly and place an order at your kiosk. Next, you'll add code to serve the customer.

Declare the following method in GameDelegate.swift: This changes the protocol and every class conforming to it should now complain that they no longer conform to the protocol. You should see an error in Xcode stating that Type 'GameScene' does not conform to protocol 'GameDelegate'. To fix the error, implement the missing method inside the extension at the bottom of GameScene.swift as follows: Take a look at what happens here:

- First check if the type and the flavor of the item correspond to what the customer desires. If so, add $50 to the player's funds and play a sound effect. Otherwise, play a sound effect indicating that you haven't satisfied this customer's wish. That sound will also play if there's no customer at the current time.
- Next, remove the customer sprite using an instance of SKAction that moves the customer off to the right and off the screen. As soon as the customer sprite is off the screen, remove the sprite from the scene and set it to nil.
- As soon as the customer leaves the scene you also need to schedule the time when the next customer will arrive. Set the time of the last customer to the current time, and determine how long it will be until the next customer appears.

All that's left to do is to call the new method from touchesBegan(_:with:) in StockItem.swift like so (add this inside the case statement right before the default case): To try this out, build and run and buy some cookies.
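Looking back at the spawning logic outlined in the first list above, a rough sketch could read as follows; timeOfLastCustomer, timeTillNextCustomer, stockItems and the Customer initializer are assumed names, and the positioning math is illustrative only:

// At the end of GameScene.update(_:) (sketch)
if CFAbsoluteTimeGetCurrent() - timeOfLastCustomer > timeTillNextCustomer && customer == nil {
  // Only items that are currently selling and not sold out can be wished for.
  var potentialWishes = [StockItem]()
  for stockItem in stockItems where stockItem.state == .selling && stockItem.amount > 0 {
    potentialWishes.append(stockItem)
  }
  if !potentialWishes.isEmpty {
    let wishedItem = potentialWishes[Int(arc4random_uniform(UInt32(potentialWishes.count)))]
    let newCustomer = Customer(type: wishedItem.type, flavor: wishedItem.flavor)
    // Start just past the right edge, then slide fully on screen.
    let customerWidth = newCustomer.calculateAccumulatedFrame().width
    newCustomer.position = CGPoint(x: size.width + customerWidth / 2, y: size.height / 2)
    addChild(newCustomer)
    newCustomer.run(SKAction.moveBy(x: -customerWidth, y: 0, duration: 1.0))
    customer = newCustomer
  }
}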
When a customer arrives, tap the cookies once to start selling, and then again to give a cookie to the customer. Serving your customers quickly is the key to success!

Sending User Notifications

Your game now looks and plays great, but eventually the player will leave the game. Enticing the player with a one-liner notification should lure them back to your game:

Receiving a notification

Players can see a list of missed notifications by pulling down the context menu from the top:

Hey you missed something!

You can also update the badge icon:

Local vs. Remote Notifications

There are two different ways to inform players about background changes in an app: local notifications and remote notifications. While both look and sound the same, there are major differences from the viewpoint of a developer:

- Local Notifications are triggered directly on the device and are easy to implement.
- Remote Notifications are triggered by a remote server, and are more complicated.

Local notifications will work just fine for Kookie Kiosk.

Asking for User Permission

The first step is to ask the player to authorize your app to use notifications. Open AppDelegate.swift and add the following to the import statements at the top of the file: Then add the following to application(_:didFinishLaunchingWithOptions:): The UserNotifications framework is new in iOS 10, replacing previous platform-specific interfaces for local and remote notifications. Using UNUserNotificationCenter, you ask for permission to send an alert, play a sound and add a badge number to the Kookie Kiosk app icon.

Build and run, and you'll see the following dialog:

Tap OK to allow notifications. (If you tap Don't Allow, notifications will not appear). Note that after the first run, this dialog won't show up again. Instead, the app will use the value stored in the Settings app.

Scheduling Notifications

Since most of your notifications will be similar in structure, you'll create a small helper method to schedule a notification. Open GameScene.swift and add another import statement: Then add the following method: The above method builds the notification from a message, a time interval, and the updated badge number as follows:

- First, you create an empty notification using UNMutableNotificationContent().
- Then, set the properties of your notification, which in this tutorial means the body text, a sound alert and badge number. (You could also include a title, subtitle and media if so inclined.)
- Define a trigger for your notification based on the timeInterval.
- Finally, you can schedule the notification!

Calling your new scheduleNotificationWith(body:intervalInSeconds:badgeNumber:) method lets you easily schedule a single local notification. However, you'll want to schedule a notification for the state change of every stock item. For this, you'll need two things: notification text and the time to show the notification.

Add the following method to StockItem.swift: In the above method, you implement a switch on the state of the stock item. Then for each state you formulate a message that gives details about the current state and includes the flavor and type of the item. Although there are four states in your app, only two of them — selling and stocking — are dependent on time. The other two states depend on user interaction, so they don't need scheduled notifications.

Now add the following method to calculate the time until the next state-switch: In this method, you determine how long it takes to complete selling or stocking an item.
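Of the methods described in this section, the scheduling helper leans on the public UserNotifications API, so it can be sketched with reasonable confidence; the method name follows the text above, while the use of the body text as the request identifier is an assumption:

import UserNotifications

// GameScene.swift (sketch of the helper described above)
func scheduleNotificationWith(body: String, intervalInSeconds: TimeInterval, badgeNumber: Int) {
  // 1. Create the empty notification content.
  let content = UNMutableNotificationContent()
  // 2. Set the parts this game cares about: body text, sound and badge number.
  content.body = body
  content.sound = UNNotificationSound.default()
  content.badge = badgeNumber as NSNumber
  // 3. Fire the notification after the given number of seconds.
  let trigger = UNTimeIntervalNotificationTrigger(timeInterval: intervalInSeconds, repeats: false)
  // 4. Wrap everything in a request and hand it to the shared notification center.
  let request = UNNotificationRequest(identifier: body, content: content, trigger: trigger)
  UNUserNotificationCenter.current().add(request, withCompletionHandler: nil)
}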
You can now schedule a notification for every stock item. Add the following method to GameScene.swift: First, you sort the notifications by their notificationTime. Why is the order relevant? You can use it to manage the badge number, since this doesn't happen automatically. Next you iterate over the list of stock items, and for each item you retrieve the appropriate notification message. If the message is not nil, then schedule the notification. With every notification you send, increase the count of the badge number accordingly.

This finishes off the method that schedules a notification for every stock item. You still need a way to call it when the app enters the background. There is only one tiny problem here: only the AppDelegate knows that your app is entering the background, and it doesn't know about your GameScene. A great solution for this problem is to use NotificationCenter, which provides you with a mechanism to broadcast information within your app.

Open AppDelegate.swift and add the following code to applicationDidEnterBackground(_:): This will broadcast a notification through the NotificationCenter when your app enters the background state. All you need to do now is listen for this notification. Open GameScene.swift and add the following code to the end of didMove(to:): This registers the GameScene as an observer — you tell NotificationCenter to call scheduleNotifications() when an event with the name "scheduleNotifications" triggers.

Build and run, start making some cookies, and hit Command-L a couple times (if using the simulator) to go to the lock screen. When the cookies finish baking, you should see a notification appear on the screen: You can swipe the notification to return to the app, and see your finished cookies!

Resetting the App Badge

You're nearly done — all that's left to do is cancel all notifications and set the number of the badge to zero when the player resumes their game. You don't want to pester the player with the same information twice. Open AppDelegate.swift and add the following lines of code to applicationDidBecomeActive(_:): First, we need a reference to UNUserNotificationCenter, then we can clear old notifications. The last line sets applicationIconBadgeNumber to zero.

And that's it — your kiosk game is complete!

Where to Go From Here?

You can download the completed project for this tutorial here. This game provides a great foundation for waiting games. It also scratches the surface of state machines and user notifications. For more on those topics, you can:

- Dive into GameplayKit and GKStateMachine
- Learn about the new User Notifications capabilities in iOS 10
- Learn how to monetize your game with (clever) In-App Purchases

Or you can stick with what you already learned and add fun features to the game, such as:

- Charge the player rent on the kiosk.
- Add more types and flavors of tasty snacks…
- … but have them spoil if they sit around too long before going on sale.
- Add patience levels to your customers — leave them hanging for too long and they're liable to storm off and leave catty Yelp reviews! :[

If you have any questions or comments about this tutorial, please join the forum discussion below!

Team

Each tutorial at raywenderlich.com is created by a team of dedicated developers so that it meets our high quality standards. The team members who worked on this tutorial are:

- Author Kevin Colligan
- Tech Editor Kyle Gorlick
- Final Pass Editor Chris Language
- Team Lead Tammy Coron
https://www.raywenderlich.com/143258/make-waiting-game-like-farmville-spritekit-swift
CC-MAIN-2017-13
refinedweb
3,588
63.39
#if (C# Reference)

When the C# compiler encounters an #if directive, followed eventually by an #endif directive, it compiles the code between the directives only if the specified symbol is defined. Unlike C and C++, you cannot assign a numeric value to a symbol. The #if statement in C# is Boolean and only tests whether the symbol has been defined or not. For example:

#if DEBUG
    Console.WriteLine("Debug version");
#endif

You can use the operators == (equality) and != (inequality) only to test for true or false. True means the symbol is defined. The statement #if DEBUG has the same meaning as #if (DEBUG == true). You can use the operators && (and), || (or), and ! (not) to evaluate whether multiple symbols have been defined. You can also group symbols and operators with parentheses.

Remarks

You can define a symbol with the #define directive. By then using the symbol as the expression passed to the #if directive, the expression evaluates to true. You can also define a symbol with the -define compiler option. You can undefine a symbol with #undef. A symbol that you define with -define or with #define doesn't conflict with a variable of the same name; a variable name should not be passed to a preprocessor directive, and a symbol can only be evaluated by a preprocessor directive.

The build system is also aware of predefined preprocessor symbols representing different target frameworks in SDK-style projects. They're useful when creating applications that can target more than one .NET implementation or version.

Note: Traditional .NET Framework projects don't define these target-framework symbols automatically, so for those projects you have to configure conditional compilation symbols yourself in the project settings.
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/preprocessor-directives/preprocessor-if
CC-MAIN-2019-47
refinedweb
221
60.41
I usually use fvwm, but sometimes I like to switch to enlightenment. The problem was maintaining two sets of menus. So, with a little tweaking, I now use the gnome menu editor to maintain both. Enlightenment is simple. The menu.cfg can read gnome's menus with: BEGIN_NEW_GNOME_MENU("GNOME_USER_SUBMENU", "ROOT", HOME_DIR"/.gnome/apps") ADD_MENU_TITLE("User Apps") END_MENU Fvwm was a bit tricky. I probably am not using the best possible solution, but it works for me. I recompiled wmconfig that comes with RedHat 6.0. In wmconfig.c there is a section that begins #if HAVE_GNOME. I changed this to read my ~/.gnome/apps directory: ret = parse_gnome_files("/home/tmoran/.gnome/apps", NULL); Then I made a little shell script to run wmconfig and clean up some unneeded files: #!/bin/sh wmconfig --output=fvwm2 --no-sysdir -- .fmenu find /home/tmoran -name .order -exec rm -f {} \; find /home/tmoran -name .directory -exec rm -f {} \; Finally, my .fvwm2rc contains: AddToMenu RootMenu + "&Rxvt" Exec exec rxvt + "" Nop Read /home/tmoran/.fmenu AddToMenu RootMenu + "" Nop + "&Fvwm Modules" Popup Module-Popup + "" Nop + "Refresh Screen" Refresh + "" Nop + "&Exit Fvwm" Popup Quit-Verify A simple way to mount your ATAPI Zip drive is to: Hello guys, I just make up my mind to write a short shell-script to do some fun. It is for making funny signatures with 'fortune'. You can use it with your mailing software that can handle signatures. I tested it with Netscape and Pine, with a Mandrake Linux distro. You need: #!/bin/sh # # sigchange # # A simple shell script to get your .signature file looking more funny.... # # Written by Csaba Feher ([email protected]) # # First, if .signature exists, we just remove it, in order to start with an empty one if [ -f $HOME/.signature ]; then rm -f $HOME/.signature fi #Then, make some good-sounding signature with the help of 'fortune'. #The -s option is because of Netscape, it says that the estimated length of the signature was 4 lines. #You may alter the categories to suit your needs. I prefer these two... /usr/games/fortune -s linuxcookie computers > $HOME/.signature S=$(cat $HOME/.signature) #Take a short look at your basic signature file, #which you may want to appear at the end of all newly-made signature. #Create & edit as you like. But, I suggest to keep it short. O=$(cat $HOME/.signature.basic) #Now put the whole stuff to the usual place echo -e "$S\n $O" > $HOME/.signature Usage: sigchangeThis is /etc/rc.d/rc.sysinit for Mandrake or Red Hat; it may be different in other distributions. You should check and find the script that initializes and boots up your system. The .signature will change each time you reboot your Linux box. sigchangeto /etc/rc.d/rc (Mandrake/Red Hat). It starts 'sigchange' each time the runlevel changes. Changes are made at the next reboot /runlevel change. Feel free to use and enjoy it! Any comments are welcome! p.s.: my recent signature is made with this method... -- But what can you do with it? -- ubiquitous cry from Linux-user partner. (Submitted by Andy Pearce, [email protected]) ### Keep on running LINUX! # Csaba Feher # [email protected] ### I find that searching howtos is easier if you use a script. I was inspired by another program to write a semi-smart howto script. You use it like: howto lilo and it searches for lilo, Lilo, LILO etc in the HOWTO tree, and then finds LILO. If something is not found, it lists suggestions. 
- Matt Willis #!/bin/csh # HOWTO Database searcher with limited smarts setenv HOWTOBASE /usr/doc/HOWTO setenv HOWTOPATH `find $HOWTOBASE -type d -print` setenv FOUND 0 setenv NAME1 $1 setenv NAMELC `echo $1 | tr 'A-Z' 'a-z'` setenv NAMEUC `echo $1 | tr 'a-z' 'A-Z'` setenv NAMEPC `echo $1 | awk '{print toupper(substr($1,1,1)) substr($1,2)}'` foreach NAME ($NAME1 $NAMELC $NAMEUC $NAMEPC) foreach k ($HOWTOPATH) if (-f $k/$NAME-HOWTO) then echo $k/$NAME-HOWTO less -r $k/$NAME-HOWTO setenv FOUND 1; break; break else if (-f $k/$NAME) then echo $k/$NAME less -r $k/$NAME setenv FOUND 1; break; break else if (-f $k/$NAME-HOWTO.gz) then echo $k/$NAME-HOWTO.gz gunzip -c $k/$NAME-HOWTO.gz | less -r setenv FOUND 1; break; break else if (-f $k/$NAME.gz) then echo $k/$NAME.gz gunzip -c $k/$NAME.gz | less -r setenv FOUND 1; break; break endif end end if ($FOUND == 0) then echo "Was unable to find '$1' .. possible matches:" # use case-insensitive name search (iname) setenv MATCH `find $HOWTOBASE -iname ''\*$1\*'' -print` if ("$MATCH" == "") then echo "Nothing (sorry)!" else foreach k ($MATCH) echo $k | sed 's/^.*\// /' end endif endif I'd like to truncate a 3MB file so that I can put it on floppy disks. The file is already compressed. Is there a Linux instruction or a software that can do such a thing ? Every Linux comes with the GNU utilities. One of these is "split" which will do the job. Read man split or info split. To split a file into floppy sized files split -b1440k a_whopping_big_file chunk which produces chunkaaa, chunkaab, chunkaac etc. Use mcopy to copy to/from floppy. To re-create a_whopping_big_file do cat chunk* > a_whopping_big_file Hope this helps. Finn Martin Benthues <[email protected]> suggests: The required task is rather easy to be achieved if both source and target system are linux and have GNU tar installed. Assume floppy drive is a 3.5" drive at /dev/fd0 Copy to disk: tar -c -f /dev/fd0 -L1440K -M Copy from disk: tar -x -f /dev/fd0 -L1440K -M tar will prompt the user to enter a new disk when ever it made one full. Note: The floppy disks will be overwritten without warning. Any old content is lost. No useable file system is installed. The disks are treated as a "tape" containing a set of blocks. For any later use with an operating system (DOS, Linux) the disks need to be reformatted. Best regards, Martin Benthues Brian <[email protected]> says: Short explanation: If you use the 'split' command, you can split a file up into chunks. Once onto a floppy, you can transport the file. When you want to reclaim the files, you can simply copy them back to hard drive and use 'cat' to put them back together. Long (full) explanation: I have a 292529 byte file named lasg-0-0-9.pdf on my hard drive, and I want to save it in chunks (or less) so I can put it on floppy for saving... You can see that no chunk is larger than 1K, as specified by the -C1k option to 'split'. The second option un this example is the name of the original file, and the third option in this example is the name of the output file prefix. The prefix is followed up by a unique string which ensures that when concatenated in a sorted order that you get the same file back. I tested this with the command cat lasg-0-0-9.pdg[a-z][a-z]* > tmp.lasg-0-0-9.pdfand the resulting file tmp.lasg-0-0-9.pdf was identical to the original file. % split -C1k lasg-0-0-9.pdf lasg-0-0-9.pdf %ls -al Total 655 drwxrwxr-x 2 vocalist users 9216 Aug 21 08:53 . drwxr-xr-x 20 vocalist users 2048 Aug 21 08:50 .. 
-rw-rw-r-- 1 vocalist users 0 Aug 21 08:53 data
-rw-rw-r-- 1 vocalist users 292529 Aug 21 08:50 lasg-0-0-9.pdf
-rw-rw-r-- 1 vocalist users 898 Aug 21 08:52 lasg-0-0-9.pdfaa
-rw-rw-r-- 1 vocalist users 738 Aug 21 08:52 lasg-0-0-9.pdfab
-rw-rw-r-- 1 vocalist users 1024 Aug 21 08:52 lasg-0-0-9.pdfac
-rw-rw-r-- 1 vocalist users 1024 Aug 21 08:52 lasg-0-0-9.pdfad
[Lots and lots of lines not shown. -Ed.]
-rw-rw-r-- 1 vocalist users 1020 Aug 21 08:52 lasg-0-0-9.pdfno
-rw-rw-r-- 1 vocalist users 1000 Aug 21 08:52 lasg-0-0-9.pdfnp
-rw-rw-r-- 1 vocalist users 118 Aug 21 08:52 lasg-0-0-9.pdfnq

Jimmy O'Regan <[email protected]> chimes in: You can find out more by typing "man split" or "info split". But in your case you'd probably want to try
$ split -b 1380k your.file your.file.
So it'll split the file "your.file" into files of 1.38m in size (ideal for floppies), named your.file.aa, your.file.ab and your.file.ac (etc if you use a different size). You can rejoin them with
$ cat your.file.aa your.file.ab your.file.ac > your.file

J. Remco Schellekens <[email protected]> suggests: dd will do the trick. Use it in the form:
dd if=your-input-file of=first-out-file skip=0 count=2840
dd if=your-input-file of=second-out-file skip=2840 count=2840
dd if=your-input-file of=third-out-file skip=5680 count=2840
...and so on. Assuming blocksizes are 512 bytes, so the count of 2840 is approx. 1.4 Mb. To get the file back just use the cat command:
cat first-out-file > your-file
cat second-out-file >> your-file
... and so on; of course it will be a bit easier if you make a shell script of it. That's it.

Roland Smith <[email protected]> also suggests: 3) Zip the file and use zipsplit to split it into files that will fit on a floppy. Hope this helps.

Tue, 03 Aug 1999 22:43:10 +0100
ANSWER: Formatting drives
From: Murray Gibbins <[email protected]>
Hi, if e.g. your LS120 is on /dev/hdb try
mkfs -t ext2 /dev/hdb
or some variant thereof. Yours wibble

Tue, 03 Aug 1999 22:48:24 +0100
ANSWER: Kodak Problems
From: Murray Gibbins <[email protected]>
Investigate using Perl and ImageMagick, it will automate the process and do everything you want. Yours Wibble

Zak <[email protected]> responds: I would like to thank each and every person who responded to my question about using gimp with the subject disk. The problem was that when I put the images on the HD (using Mcopy, since the Kodak(c) Picture Disk is a DOS/'doze thingee), they were upside-down (and I later found *were* left to right), and that I couldn't figure out how gimp worked. I use RH 5.1. One of the replies suggested I try 'xv', so I looked in my RH5.1 manual, and found on page 6 that it had been replaced. I checked out my 5.1 CD (I bought the 'official box') and guess what...it's there! I tried it and it not only flipped the images 'rightsideup', but can also flip 'em right-to-left, and much more. Now here's the kicker: about two weeks ago I put another roll of film in and requested the disk when it was processed. When I got the pix and disk back, I stuffed the disk into FRED and called up 'xv' so that I could flip the pix again. They were all set up correctly on the disk! I didn't have to do anything with them. Do you think maybe someone from Kodak reads Linux Gazette, too?
Zak Tue, 10 Aug 1999 17:13:27 +0200Tue, 10 Aug 1999 17:13:27 +0200 ANSWER: Installation problemsANSWER: Installation problems From: Michael Planes <[email protected]> Hi, I hope you already had an answer from the net. Otherwise, I just got last week the very same problem with a quadri Xeon and 1GB of RAM:scsi : 0 hosts scsi : detected total Partition check VFS: Cannot open device 08:21 Kernel panic: VFS: Unable to mount root fs on 08:21Many reboots later (and many configuration changes) I finally succeeded when I added an option when booting linux kernel:linux mem=128M expert I checked on the web that other people already had this problem and fixed it (removing memory, adding boot option, etc...) It would have been nice if such an information had been clearly available at RedHat. I hope it will be. bye, Tue, 10 Aug 1999 17:13:27 +0200Tue, 10 Aug 1999 17:13:27 +0200 ANSWER: DNS on the flyANSWER: DNS on the fly From: Jim Bradley -- Maryville, MO USA <[email protected]> I have my laptop configured to plug into the network of my employer. At home, I plug my laptop into my home network, and dial out from another machine setup for dial on demand. Unfortunately, if it takes forever waiting for the two timeouts when trying to connect to the employer's DNS servers from home, and if I change the DNS order, it takes just as long for the timeout error when attempting to connect to my ISP's DNS from my employer's network. Is there an easy way to change the DNS servers when needed? It's easy enough to change IP addresses with the ifconfig command, is there a similar means for changing the DNS? Or, should I just bite the bullet and setup BIND on the laptop? My apologies if it doesn't - I've made enough changes to my setup that I could have automated it myself. I don't keep multiple copies of /etc/resolv.conf yet get the correct DNS server. I know for certain that DHCP makes the update. The only change I _remember_ making that's related to this is I generate the hosts file at bootup, since the only thing I keep in my hosts file is localhost and the hostname. Primarily because the long hostname can change based on which network I'm connected to (or even not connected at all). Ted C Ted Wise <[email protected]> replies: Your DNS server(s) are defined in the /etc/resolv.conf file. If you're using DHCP or PPP, this should be updated automagically with the correct server. If you're defining everything by hand, you'll need to modify this file to reflect the correct DNS server. Changes to the file may not be reflected in already running programs (daemons). If you're not running a DHCP server on your home network, consider one, it will greatly ease the pain of moving the laptop between locations. The Linux Gazette Editor writes to Ted: Since when did Linux PPP start automagically updating the DNS server? Windows does this, but in my experience Linux does not. You have to have several /etc/resolv.conf files, and use a symbolic link to point to the "current" one. You can have a shell script that does this and calls pppd. Ted responds: I've been using the KPPP dialer under KDE and that one definitely makes changes to the /etc/resolv.conf file. When it makes a connection, it modifies the file - commenting out the existing lines and adding ones applicable to the PPP connection. When you exit the application, it restores the original file. From the comments in the file, it looks to be behavior specific to KPPP. This is where my confusion came in. 
My home server is running DNS and masqs through to the PPP interface so the /etc/resolv.conf file doesn't change based on the PPP connection. It's only my laptop that needs this and I was making the assumption (I know, I know) that KPPP was just a pretty face on PPPD.[Conclusion: pppd knows nothing about /etc/resolv.conf. KPPP rewrites it on the fly. DHCP does the Right Thing somehow automatically. Any more comments? -Ed.] Ernst-Udo Wallenborn <[email protected]> suggests: i use the SCHEMES facility of the PCMCIA package to solve a related problem: how to use a laptop in two LANs with different IP addresses, different domains, and (naturally) different DNS servers. Basically you set up a file /etc/pcmcia/network.opts which contains all network options, esp. something likecase "$ADDRESS" in home,*,*,*) [snip] SEARCH="domain.com" DNS_1="1.2.3.4" DNS_2="" DNS_3="" [snip] ;; work,*,*,*) [snip] SEARCH="work.com" DNS_1="5.6.7.8" DNS_2="" DNS_3="" [snip] Then, when booting with lilo you can append SCHEME=home or SCHEME=work, or better write this into /etc/lilo.conf directly. and type 'home' or 'work' at the lilo prompt. The whole procedure is described in detail here: and the PCMCIA package is available here: hyper.stanford.edu/~dhinds/pcmcia/pcmcia.html hope this helps -- Ernst-Udo Wallenborn Laboratorium fuer Physikalische Chemie ETH Zuerich Tue, 10 Aug 1999 15:19:38 -0600Tue, 10 Aug 1999 15:19:38 -0600 ANSWER: ipchainsANSWER: ipchains From: Warren Young <[email protected]> "Martin L. Ferguson" wrote:I saw your "$0.02 Tip" response in Linux Gazette (copied below), but the URL for the "scipts" section was not included. Could you send it to me - I would like to look at a comprehensive ipchains configuration. Thanks. [snip] From the Gazette: I think perhaps you are missing a few important rules, such as rules to allow DNS replies. My own script for enabling masquerading and firewalling is available at the URL below, in the "scripts" section. The URL my message mentions was in the signature. It is apparently the policy of the Linux Gazette to chop signatures from messages, so it didn't appear in the Gazette. (I'm Cc'ing this message to the Gazette's editor, in case they want to fix that page.) The URL is The scripts I refer to in my original message are "firewall-enable" and "firewall-disable".[Added the URL to the previous article. The directory also contains some other scripts to tar a directory, replace tabs with spaces, do a hex dump, etc. And yes, I do chop off signatures. :) -Ed.] Fri, 13 Aug 1999 00:25:55 +0200 (CEST)Fri, 13 Aug 1999 00:25:55 +0200 (CEST) ANSWER: gcc will not workANSWER: gcc will not work From: Roland Smith <[email protected]> Hi, I saw your e-mail in the Mailbag section in the Linux Gazette. I think you have not installed the binutils package. This contains some programs that gcc needs to make executables out of a bunch of object files. Hope this helps. Fri, 13 Aug 1999 00:33:38 +0200 (CEST)Fri, 13 Aug 1999 00:33:38 +0200 (CEST) ANSWER: DHCPANSWER: DHCP From: Roland Smith <[email protected]> Concerning your question of adding a Linux workstation to a network that uses DHCP: You probably have a mini-HOWTO available (if you have installed them, which is wise): /usr/doc/HOWTO/mini/DHCPcd.gz You'll need the DHCP client. Check if it is available in your distribution, or get it at from the directory /pub/PC-UNIX/Linux/network/dhcp If you're using Red Hat you can use the control-panel to set up eth0 for DHCP. Hope this helps! 
Fri, 13 Aug 1999 00:43:42 +0200 (CEST)Fri, 13 Aug 1999 00:43:42 +0200 (CEST) ANSWER: ppp connectionANSWER: ppp connection From: Roland Smith <[email protected]> You wrote:I have an interesting problem. I have configured ezppp, kppp, and gnome-ppp to connect to my mindspring account, and all three seem to be doing so just fine. However, when I startup netscape or any other internet application, I can't access the internet. It just sits there trying to lookup the host. No error message. No nothing. I am completely stumped. Any help would be appreciated. Some things you could look at: - Do you actually make a connection with ppp before starting netscape? You'll need additional software if you want to be able to dial in automagically every time you try to reach the internet. - Have you added your ISP's name-server to /etc/resolv.conf? - he /etc/host.conf file configured correctly? It should consist of these two lines:order hosts,bind multi on Hope this helps! Fri, 13 Aug 1999 01:34:06 +0200 (CEST)Fri, 13 Aug 1999 01:34:06 +0200 (CEST) ANSWER: ASCII to speachANSWER: ASCII to speach From: Roland Smith <[email protected]>for.for. From your mail in the Linux gazette it is not clear what you're looking Your subject line says you're looking for ascii to speech conversion. For that you could try "rsynth" or "festival". (look at freshmeat.net for URL's) Voice recognition software for Linux is something else altogether. I think IBM has released some software for this.[Thanks, Roland, for all your answers. -Ed] Jimmy O'Regan <[email protected]> adds: ASCII to speech and speech recognition are complete opposites, but you can find most software available for Linux for either purpose at the BLINUX site - or at J. Thu, 19 Aug 1999 16:08:23 -0700Thu, 19 Aug 1999 16:08:23 -0700 ANSWER: 2GB Limit in LINUXANSWER: 2GB Limit in LINUX From: Greg Morse <[email protected]> This limit does not apply even if the BIOS does not support the large drive. I recently added an 8GB drive to an old Compaq prolinea 133. the DOS fdisk could see the drive but not use it. Linux (RH6.0) however happily created an 8GB filesystem on the drive. JGM Niels Elgaard Larsen <[email protected]> writes: 1. It is _not_ down to the BIOS. You can just give the geometry as arguments to 'fdisk' 2. Do not trust the information from the maker of the disk. I installed a 10 GB IBM disk. The geometry printed on the disk and shown on IBM's web-side was identical to the geometry of their 8GB disk. I think they rely on special software to make it work on Windows. I assumed sectors and # of heads were correct and computed # of cylinders from the capacity and fed it to cfdisk. It works. Sat, 21 Aug 1999 08:39:45 -0400Sat, 21 Aug 1999 08:39:45 -0400 ANSWER: Windows 98 inside LinuxANSWER: Windows 98 inside Linux From: Brian <[email protected]> Dear Markus, If you'll visit, you'll see their product VMWare - written up in the July 1999 issue of Linux Journal, or for subscribers interactive.linuxjournal.com). VMWare will allow you to run a number of different Virtual Machines on top of your Linux box. See the web page for the details. Oh - the price is $299, I'm considering it so that I can test Java on a Windows platform without actually having Windoze in control of my hardware. :) Wed, 25 Aug 1999 01:18:59 -0400Wed, 25 Aug 1999 01:18:59 -0400 ANSWER: DistributionsANSWER: Distributions From: Steve Wolfe <[email protected]> Someone more knowledgeable than I could probably tell you the exact differences between Suse and Red Hat. 
There is more activity on Red Hat. A more important question is, "What Distribution are people around you using?" Unix/Linux is not too hard to learn - but it is different than windows. Try to find a User's group and/or computer store that has Linux around and then you can see it and talk to people. I liked the book "Life with Unix" to give a good grounding on what to expect. The O'Reilly books are invaluable. I also like the printed collections of the Linux Documentation project containing the System's Administrator's Guide and the Network Administrators Guide and the How-tos. I used to the idea of a dual boot system - so you can use either Linux or Win xx, but not at the same time. This works for a while. The drawback is that you cannot use both at the same time. The best order of events (briefly) is 1) partition, 2) install Winxx 3) install Linux and Lilo. Two machines are better yet. With a small LAN you can then telnet or use an X-windowing program to use both machines - This is what I do now. Several X-server programs exist for Windows. Hummingbird's Exceed, Reflection's X-windows suite, and StarNet's (x-win32) are three good ones and I have used all three. I have heard some use of virtual machines (vmware makes one) in which you can use two operating systems at the same time on the same machine - but I have not tried it. After reading the documentation you can mount a FAT partition from Linux and use it as a way to transfer data back and forth between machines. If you have two machines the network takes care of the file transfers. Remember that a Unix/Linux text file has only linefeeds and a PC-DOS text file has Carriage return and Linefeed. -Cheers, Steve Wolfe Thu, 01 Jul 1999 14:14:54 -0400Thu, 01 Jul 1999 14:14:54 -0400 ANSWER: WORD to PostscriptANSWER: WORD to Postscript From: Reno Derosier <[email protected]> In regards to the .prn file you might try downloading Adobe's Postscript drivers for windows at. Reno Derosier Wed, 4 Aug 1999 00:13:59 -0700 (MST)Wed, 4 Aug 1999 00:13:59 -0700 (MST) ANSWER: g++ and including files that use templatesANSWER: g++ and including files that use templates From: Rachael A. Ludwick <[email protected]> jac <[email protected]> writes:Hello, my question is about the g++ compiler.I want to know if is able to link files that use templates and then aren't included in the main program (they are in different files, and only the header files are in the main program). I have try as:------------------------------------------------------------------------gcc main.cxx libro.cxx * main: #incude "libro.h"... * libro.cxx: #include "libro.h"...But the compiler of Red Hat 6.0 gives me an error. Could you help me? Thanks. Juan J.Alejandro ([email protected]) Girona (Spain) I'm not sure if this will help, but here goes... An annoying thing about C++ is that templated classes require that both the declaration and the implementation be in the same file. Instead of actually putting both in one file, often the header file will have the line:#include "class.cc" // or whatever the implementaion file is callednear the end of the file (after the declaration of the class). In this case though, the implementation usually goes in a file called "class.template" and #include "class.template" will be put in "class.h" somewhere near the end of the file (after the complete declaration of the file). 
Also, don't forget to guard your class from multiple include-ing by putting something like this as the first two lines:#ifndef MYCLASS_H #define MYCLASS_Hand something like this as the last line:#endif // MYCLASS_H Anyway, if you have the implementation and declaration in separate files, and you don't have this yet, than this may be your problem. Assuming this will fix your problem and you have a "main.cc" and templated "class.h" and "class.cc", you should be able to compile with:g++ -I. main.cc (of course assuming main.cc has #include "class.h"). Tell me if this helps... Rachael Ludwick -- [email protected] "Go Ye Lemmings into the World and Propagate!" This page written and maintained by the Editor of Linux Gazette, [email protected] Published in Issue 45 of Linux Gazette, September 1999 Published in Issue 45 of Linux Gazette, September 1999
http://ldp.indosite.co.id/LDP/LGNET/issue45/lg_tips45.html
crawl-003
refinedweb
4,617
75.3
Ok so I stumbled on to this subject the other day and thought it was worth noting. Take these simple classes:

public class First
{
  public String FirstOutput { get; set; }
}

public class Second : First
{
  public String SecondOutput { get; set; }
}

public class Third : Second
{
  public String ThirdOutput { get; set; }
}

So from this you can see that Third inherits Second which in turn inherits First. By terminology this would mean that Third is "smaller" than Second and First is "larger" than both. Here's an example of Covariance:

public class Covariance
{
  public Covariance()
  {
    Func<First> returnFirstFunc = ReturnFirst;
    //This works since the Func has a return type of First

    Func<Second> returnSecondFunc = ReturnThird;
    Second secondTest = returnSecondFunc();
    secondTest.FirstOutput = "First";
    secondTest.SecondOutput = "First";
    //This works since the Func has a return type of Third which is smaller
    //than Second. Therefore anyone using this Func will expect a Second to
    //be returned and will only use the methods/properties that a Second object
    //would have. Methods/Properties that Third has by inheritance.

    Func<Third> returnThirdFunc = ReturnSecond;
    //THIS WILL NOT WORK
    //Due to Covariance, the return of the method must be equal or smaller
    //than the expected type. returnThirdFunc expects a Third or smaller object
    //but the ReturnSecond method returns a Second which is not smaller than Third.
    //After all, Third : Second
    //
    //Third thirdTest = returnThirdFunc();
    //Is the same as:
    //Third thirdTest = new Second();
  }

  private First ReturnFirst()
  {
    return new First();
  }

  private Second ReturnSecond()
  {
    return new Second();
  }

  private Third ReturnThird()
  {
    return new Third();
  }
}

Basically what this all means is that with return types, the return type must be smaller or equal to the field it's being set to. When you are dealing with Funcs, the return type must be smaller or equal to the return type of the method it's being set to. Why is that? Well think of it like this: It's your first day on the job and some guy tells you to write something with whatever returnFirstFunc() returns. Now you have no way to look at the code, so you can only know that it returns First. For all you know, it could return First, Second, or Third. So you would do this:

First someFirst;
someFirst = returnFirstFunc(); //Could return anything smaller than First
someFirst.FirstOutput; //Completely legal and safe

But would you do this?

someFirst.ThirdOutput;

Of course not since you only can assume it is a First. Now let's do this in reverse. Say from the above example you were allowed to do this:

Func<Third> returnThirdFunc = ReturnSecond;

Could you do this?

Third third;
third = returnThirdFunc();
third.ThirdOutput;

Yeah you can't since the Second type doesn't have the ThirdOutput property. In short Covariance is the allowance of Smaller types or equal. If a method returns back Third, then you can use that method for anything that is Third or Smaller (Second, First, Object) but not for something Larger (Fourth, Fifth, etc).
https://byatool.com/uncategorized/covariance-versus-contravariance/
CC-MAIN-2021-31
refinedweb
481
61.16
AUTHOR: Ivan Stefanov DATE MODIFIED : January 08, 2018 DATE POSTED: July 09, 2014 With Q1 2014 of Telerik Reporting the style manager is no longer supported and instead implicit styling has to be used. There are few key things that you have to know which will help you when you add a Telerik Reporting report viewer to your applications: By default Telerik Reporting gets installed with several xaml files and assemblies. Those files are needed for the report viewer and the controls it internally uses. The files (xaml, assemblies) that get installed with Telerik Reporting are interchangeable with the ones that get installed with Telerik UI for WPF/Silverlight, given that they are the same version. You are free to add those files(xaml, assemblies) either from Telerik Reporting or from Telerik UI for WPF/Silverlight. If you add them from Telerik UI please make sure that you have also added the Telerik.ReportViewer.Wpf.dll and Telerik.ReportViewer.Wpf.xaml files. Please also make sure that they are the same version as the one required by Telerik Reporting or that you are using binding redirects if the version is newer. If you add them from Telerik Reporting please have in mind that the controls provided in those assemblies are intended only to be used with the Telerik report viewer and any other usage would be violation of the terms of use agreement. If a xaml exception is thrown when you are building your application or the behaviour is unexpected there are a few things that you might want to check in order to resolve the issue: Check if you have properly added all the xaml files needed. A list can be found here (Silverligh) and here (WPF). Check if you have added all the assemblies required. Again for a list check here (Silverligh) and here (WPF). Check if the assemblies and/or xaml files are up to date (i.e. their version is equal or higher compared to the one required by Telerik Reporting). Check if the version of the assemblies matches the version of the xaml files. In case they do not match exceptions may be thrown since some properties might be missing in an older version for example. Check if you have binding redirects - they are needed if you are using a newer version of Telerik UI for WPF/Silverlight compared to the one distributed with Telerik Reporting. For more information on binding redirects please check here. Possible exceptions: The type reference cannot find a public type named 'PickerToggleButton' - can happen when the version of the assemblies and the xaml files is different (they must match). The tag 'LogicalOperatorToCheckedConverter' does not exist in XML namespace 'clr-namespace:Telerik.Windows.Controls.Data.DataFilter;assembly=Telerik.Windows.Controls.Data' - can happen when the version of the assemblies and the xaml files is different (they must match). Resources Buy Try
https://www.telerik.com/support/kb/reporting/details/wpf-silverlight-report-viewers-troubleshooting
CC-MAIN-2018-05
refinedweb
480
59.74
Alan:
> > I can do this, although it will mean renaming all the uses of the semid_ds
> > and shmid_ds structures to something like kernel_semid_ds,
> > kernel_shmid_ds, etc. (my proposed change would make semid_ds and shmid_ds
> > private to the kernel, as msg_queue and shmid_kernel are now)
>
> Call them semid64_ds etc - thats not bad is it

I think I'll go with new_semid_ds, new_shmid_ds, new_msqid_ds, and new_ipc_perm as suggested by Manfred. The kernel private structures will be shmid_kernel, msg_queue, sem_array, and kern_ipc_perm. I was just a little concerned about a patch that would rename a bunch of kernel structures :)

> > > #else
> > /* Make libc5 happy for compiles */
> > #define semid_ds old_user_semid_ds
> >
> > #endif /* __KERNEL__ */
>
> Thats a posix namespace violation. You can try and go this way but it just
> gets uglier and uglier. I made the mistake myself before now 8)

I should have made it more clear that that was really a halfhearted suggestion... my real question was whether or not we had to support libc5 compiles at all. If we do, then we can't rename any structures in the kernel include files. If we don't, then whatever is in the kernel source code shouldn't matter, because it won't ever be seen when compiling any portable software. (glibc should wrap everything specified by POSIX, right?)

I'll re-do the patch taking this and Manfred's suggestions into account. (the kernel private structures will be in ipc/{msg,sem,shm}.c, and the user visible structures will be in include/linux/{msg,sem,shm}.h)

Thanks for the input,
Chris
https://lkml.org/lkml/1999/12/1/134
CC-MAIN-2014-15
refinedweb
277
64.41