text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
All about Splash Screens in WP7 – Creating animated Splash Screenpublished on: 3/11/2011 | Views: N/A | Tags: Animation windows-phone by WindowsPhoneGeek In this article I am going to talk about Splash Screen in Windows Phone 7. Generally when developing a WP7 application you can : - Use an Image as a Splash Screen - Use animated Splash Screen - Use no Splash Screen By default when starting a Windows Phone 7 application takes a little time to show the app form. That is why it is a good practice to show your own custom splash screen. Use a static Image as Splash Screen By default when creating a Windows Phone 7 application project it generate a default SplashScreenImage.jpg image: In order to add a custom splash screen to you WP7 Application the easiest way is just to replace the existing image with a new one. You can just follow the steps: 1). Add an Image file to your project and name it SplashScreenImage.jpg (NOTE: The name is important!) 2). The Image size have to be width : 480px, Height 800px : (480 x 800). 3) Set the Image Build Action to Content NOTE: In order to see the splash screen on the emulator your graphics cards need to be WDDM 1.1 or higher. Creating an animated Splash Screen To begin with at first I need to mention that you can use a BackgroundWorker to Sleep the thread for the time you want. Basically the BackgroundWorker is class enables you to run an operation on a separate background thread while leaving the UI thread available. In WP7's rendering thread, this becomes useful in case you want a responsive user interface. You can listen for events that report the progress of your operation and signal when your operation is completed. To start the background operation, call the RunWorkerAsync method. NOTE: You must be careful not to manipulate any user-interface objects in your DoWork event handler. Instead, communicate to the user interface through the ProgressChanged and RunWorkerCompleted events. NOTE: For more information you can take a look at the MSDN Documentation of the BackgroundWorker class. Now lets start with the real implementation of the animated splash screen. You can follow the steps: 1.) Create a Windows Phone 7 application project 2.) Add new UserControl to the project called AnimatedSplashScreen.xaml 3.) Include the following namespaces in MainPage.xaml.cs: using System.Threading; using System.Windows.Controls.Primitives; 3.). Add the following code into MainPage.xaml.cs: Note that in order to show the splash screen we will add a Popup that will appear as soon as the MainPage is loaded. We will run the BackgroundWorker in the constructor as well. When RunWorkerCompleted the popup will be closed, so the MainPage will become visible. So here is how our Splash Screen functionality works - At first a Popup appears with the Splash Screen into it - Then We launched a background thread that does some work (Thread.Sleep(5000); ). - When the RunWorkerCompleted , our popup that's covering the home screen is closed (myPopup.IsOpen = false). - You will then see your MainPage.xaml screen. public partial class MainPage : PhoneApplicationPage { BackgroundWorker backroungWorker; Popup myPopup; // Constructor public MainPage() { InitializeComponent(); myPopup = new Popup() { IsOpen = true, Child = new AnimatedSplashScreen() }; backroungWorker = new BackgroundWorker(); RunBackgroundWorker(); } private void RunBackgroundWorker() { backroungWorker.DoWork += ((s, args) => { Thread.Sleep(5000); }); backroungWorker.RunWorkerCompleted += ((s, args) => { this.Dispatcher.BeginInvoke(() => { this.myPopup.IsOpen = false; } ); }); backroungWorker.RunWorkerAsync(); } } 4.) Go back to AnimatedSplashScreen.xaml. Here we will add some more element to our splash screen : TextBlock, Image and a PerformanceProgressBar. <StackPanel x: <TextBlock Text="WindowsPhoneGeek Sample Splash Screen" x: <Image Source="logo.png" x: <Image.Projection> <PlaneProjection/> </Image.Projection> </Image> <toolkit:PerformanceProgressBar </StackPanel> NOTE: Form more information about the PerformanceProgressBar from the Silverlight for WP7 Toolkit take a look at our in depth article:WP7 PerformanceProgressBar in depth. NOTE: Do not forget to set any Background, Height and Width to the LayoutRoot or the UserControl otherwise your control will be transparent and the main page will be visible! The next step is add a more complex animation with Perspective. We will animate the Image element using some kind of flipping animation that is why we will need to add PlaneProjection to the Image element. In order to have a flipping effect around the X axis all we need t o do is just to change the rotation angel to 360 and make sure that CernerofRotationX and CenterofRotationY are set to 0.5 (this is the default value you do not need to change anything). We will also animate the Foreground color of the TextBlock. Here is how the animation should looks like: <UserControl.Resources> <Storyboard x: <DoubleAnimationUsingKeyFrames Storyboard. <EasingDoubleKeyFrame KeyTime="0" Value="0"/> <EasingDoubleKeyFrame KeyTime="0:0:1" Value="1"/> <EasingDoubleKeyFrame KeyTime="0:0:2" Value="360"/> </DoubleAnimationUsingKeyFrames> <ObjectAnimationUsingKeyFrames Storyboard. <DiscreteObjectKeyFrame KeyTime="0"> <DiscreteObjectKeyFrame.Value> <SolidColorBrush Color="White"/> </DiscreteObjectKeyFrame.Value> </DiscreteObjectKeyFrame> <DiscreteObjectKeyFrame KeyTime="0:0:2"> <DiscreteObjectKeyFrame.Value> <SolidColorBrush Color="Green"/> </DiscreteObjectKeyFrame.Value> </DiscreteObjectKeyFrame> </ObjectAnimationUsingKeyFrames> </Storyboard> </UserControl.Resources> An finally we have to start the animation in the constructor of our UserControl: public AnimatedSplashScreen() { InitializeComponent(); Storyboard flippingAnimation = this.Resources["flippingAnimation"] as Storyboard; flippingAnimation.Begin(); } 5.) Build and run the project. The final result should be: Use no Splash Screen If you do not want your app to have any splash screen just remove the file SplashScreenImage.jpg from your project. What not to do when implementing Splash Screen You are probably asking yourself why do not we simply create a new Page and set it as a startup page in the WMAppManifest.xml? Well, splash screen is never on the backstack so the user can never return to it. That is why using s start up page for splash screen is definitely not the best approach (you will need to handle the Navigation on your own). For things like the splash screen or other transient UI, the Silverlight Popup control is a great solution for showing content that (partially) covers the screen without doing a full navigation. NOTE: Peter Torr has a really cool post where he explains the concept of "Places" in WP7: Introducing the concept of "Places". According to this Places concept, Transient UI like Splash Screen, Login Window, etc. are not Windows Phone Pages! NOTE: You can also use the Codinf4Fun ProgressOverlay control : WP7 ProgressOverlay control in depth: features and customization That was all about the Splash Screens in Windows Phone 7 in depth. You can find the full source code here: I hope that the article was helpful. You can also follow us on Twitter: @winphonegeek for Windows Phone; @winrtgeek for Windows 8 / WinRT Nice work posted by: Benjamin on 3/11/2011 2:56:05 PM Thank you for writing this article. Now I am completely sure how to implement my splash screen. Ditto posted by: dSharp on 3/17/2011 7:56:31 PM Excellent resource here Excellent idea, more suggestions posted by: theIndent on 4/4/2011 1:16:51 AM The best part about your approach using the main page with a popup is that you get the chance to use some animation . So excellent idea. Couple of additional thoughts though: If you don't need a background worker thread to do a lot of work at startup - e.g. an app that has a mainpage menu so it doesn't need to start off a major download etc, then: no need to create an extra usercontrol (AnimatedSplashScreen) when simply adding a popup plus image to the main page xaml would do? The bonus is that you can skip the background worker and just use the storyboard to control the splash screen? also, there is still a time gap between starting the app and the mainpage loading which really needs to be covered with some form of splash screen if you use a 'mainpage popup' approach then, you can: - still have a SplashScreenImage.jpg which auto loads immediately (480x800px) and is visible while your app start up code in app.xaml.cs is being executed and mainpage is being constructed - use the same image (but now with 480x768px, no stretch, top margin set to -32 to allow for status bar) inside your mainpage popup image - standard splash screen will auto load and cover the time gap while the app initialises and if you get the dimensions correct then it gets invisibly replaced by the mainpage (IsOpen=true) popup image when mainpage is loaded and displayed - add a storyboard (similar to your example) whose main job is to animate the image away and then close the popup (or maybe animates some additional textblock etc first) - call the storyboard begin() from mainpage loaded event handler and then unhook the event handler so it doesn't get repeated when you navigate back (or set a flag if you need to keep the handler in place for other reasons). - if you have transitions e.g. turnstiling on your pages (and you do this from style templates rather than code) then you need to 'save and null' the inbound transition after initialise() in you mainpage constructor and then restore it later - easiest way is to hook up to the storyboard completed event which you can also use to clear down the popup/image if memory usage is critical in your app. See for the couple of lines of code to manipulate the page transitions in code. Thanks again for an excellent post thats opened the door for some serious tinkering. Use 2 different static Spash Screens posted by: Skaldic on 5/26/2011 2:30:21 PM Hello, Is there a way, using the resourcedictionary class , that I can load two different static screen based on compilation flags? I want 2 diff versions of my app, and each one should have a different static splashscreen. The Pop-up approach does not work, because I have a message box which needs to be displayed once the mainpage was loaded and the pop-up automatically cancels the messagebox! Thanks! Where to put loading code posted by: Devboy on 7/17/2012 5:55:59 PM Hi, Thank you for this tutorial, where can I place/ call my loading code eg. for settings and the localization-appbar code? Excellent posted by: Nickm2 on 7/21/2012 6:07:23 AM Great tutorial. Helped me a lot. Thanks a ton! Simply Amazing posted by: Nitin Panchal on 1/24/2013 7:26:11 AM Thanks for the tutorial.You are simply amazing ! Thank a
http://www.geekchamp.com/articles/all-about-splash-screens-in-wp7-ndash-creating-animated-splash-screen
CC-MAIN-2014-35
refinedweb
1,741
54.02
Hot questions for Using Neural networks in hdf5 Question: I want to use caffe with a vector label, not integer. I have checked some answers, and it seems HDF5 is a better way. But then I'm stucked with error like: accuracy_layer.cpp:34] Check failed: outer_num_ * inner_num_ == bottom[1]->count()(50 vs. 200) Number of labels must match number of predictions; e.g., if label axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}. with HDF5 created as: f = h5py.File('train.h5', 'w') f.create_dataset('data', (1200, 128), dtype='f8') f.create_dataset('label', (1200, 4), dtype='f4') My network is generated1, num_output=4, weight_filler=dict(type='xavier')) n.accuracy = L.Accuracy(n.ip3, n.label) n.loss = L.SoftmaxWithLoss(n.ip3, n.label) return n.to_proto() with open(PROJECT_HOME + 'auto_train.prototxt', 'w') as f: f.write(str(net('/home/romulus/code/project/train.h5list', 50))) with open(PROJECT_HOME + 'auto_test.prototxt', 'w') as f: f.write(str(net('/home/romulus/code/project/test.h5list', 20))) It seems I should increase label number and put things in integer rather than array, but if I do this, caffe complains number of data and label is not equal, then exists. So, what is the correct format to feed multi label data? Also, I'm so wondering why no one just simply write the data format how HDF5 maps to caffe blobs? Answer: Answer to this question's title: The HDF5 file should have two dataset in root, named "data" and "label", respectively. The shape is ( data amount, dimension). I'm using only one-dimension data, so I'm not sure what's the order of channel, width, and height. Maybe it does not matter. dtype should be float or double. A sample code creating train set with h5py is: import h5py, os import numpy as np f = h5py.File('train.h5', 'w') # 1200 data, each is a 128-dim vector f.create_dataset('data', (1200, 128), dtype='f8') # Data's labels, each is a 4-dim vector f.create_dataset('label', (1200, 4), dtype='f4') # Fill in something with fixed pattern # Regularize values to between 0 and 1, or SigmoidCrossEntropyLoss will not work for i in range(1200): a = np.empty(128) if i % 4 == 0: for j in range(128): a[j] = j / 128.0; l = [1,0,0,0] elif i % 4 == 1: for j in range(128): a[j] = (128 - j) / 128.0; l = [1,0,1,0] elif i % 4 == 2: for j in range(128): a[j] = (j % 6) / 128.0; l = [0,1,1,0] elif i % 4 == 3: for j in range(128): a[j] = (j % 4) * 4 / 128.0; l = [1,0,1,1] f['data'][i] = a f['label'][i] = l f.close() Also, the accuracy layer is not needed, simply removing it is fine. Next problem is the loss layer. Since SoftmaxWithLoss has only one output (index of the dimension with max value), it can't be used for multi-label problem. Thank to Adian and Shai, I find SigmoidCrossEntropyLoss is good in this case. Below is the full code, from data creation, training network, and getting test result: main.py (modified from caffe lanet example) import os, sys PROJECT_HOME = '.../project/' CAFFE_HOME = '.../caffe/' os.chdir(PROJECT_HOME) sys.path.insert(0, CAFFE_HOME + 'caffe/python') import caffe, h5py from pylab import * from caffe import layers as2, num_output=4, weight_filler=dict(type='xavier')) n.loss = L.SigmoidCrossEntropyLoss(n.ip3, n.label) return n.to_proto() with open(PROJECT_HOME + 'auto_train.prototxt', 'w') as f: f.write(str(net(PROJECT_HOME + 'train.h5list', 50))) with open(PROJECT_HOME + 'auto_test.prototxt', 'w') as f: f.write(str(net(PROJECT_HOME + 'test.h5list', 20))) caffe.set_device(0) caffe.set_mode_gpu() solver = caffe.SGDSolver(PROJECT_HOME + 'auto_solver.prototxt') solver.net.forward() solver.test_nets[0].forward() solver.step(1) niter = 200 test_interval = 10 train_loss = zeros(niter) test_acc = zeros(int(np.ceil(niter * 1.0 / test_interval))) print len(test_acc) output = zeros((niter, 8, 4)) # The main solver loop for it in range(niter): solver.step(1) # SGD by Caffe train_loss[it] = solver.net.blobs['loss'].data solver.test_nets[0].forward(start='data') output[it] = solver.test_nets[0].blobs['ip3'].data[:8] if it % test_interval == 0: print 'Iteration', it, 'testing...' correct = 0 data = solver.test_nets[0].blobs['ip3'].data label = solver.test_nets[0].blobs['label'].data for test_it in range(100): solver.test_nets[0].forward() # Positive values map to label 1, while negative values map to label 0 for i in range(len(data)): for j in range(len(data[i])): if data[i][j] > 0 and label[i][j] == 1: correct += 1 elif data[i][j] %lt;= 0 and label[i][j] == 0: correct += 1 test_acc[int(it / test_interval)] = correct * 1.0 / (len(data) * len(data[0]) * 100) # Train and test done, outputing convege graph _,') _.savefig('converge.png') # Check the result of last batch print solver.test_nets[0].blobs['ip3'].data print solver.test_nets[0].blobs['label'].data h5list files simply contain paths of h5 files in each line: train.h5list /home/foo/bar/project/train.h5 test.h5list /home/foo/bar/project/test.h5 and the solver: auto_solver.prototxt train_net: "auto_train.prototxt" test_net: "auto_test.prototxt" test_iter: 10 test_interval: 20 base_lr: 0.01 momentum: 0.9 weight_decay: 0.0005 lr_policy: "inv" gamma: 0.0001 power: 0.75 display: 100 max_iter: 10000 snapshot: 5000 snapshot_prefix: "sed" solver_mode: GPU Converge graph: Last batch result: [[]] [[ 1. 0. 0. 0.] [ 1. 0. 1. 0.] [ 0. 1. 1. 0.] [ 1. 0. 1. 1.] ... [ 1. 0. 0. 0.] [ 1. 0. 1. 0.] [ 0. 1. 1. 0.] [ 1. 0. 1. 1.]] I think this code still has many things to improve. Any suggestion is appreciated. Question: I am finetuning a network. In a specific case I want to use it for regression, which works. In another case, I want to use it for classification. For both cases I have an HDF5 file, with a label. With regression, this is just a 1-by-1 numpy array that contains a float. I thought I could use the same label for classification, after changing my EuclideanLoss layer to SoftmaxLoss. However, then I get a negative loss as so: Iteration 19200, loss = -118232 Train net output #0: loss = 39.3188 (* 1 = 39.3188 loss) Can you explain if, and so what, goes wrong? I do see that the training loss is about 40 (which is still terrible), but does the network still train? The negative loss just keeps on getting more negative. UPDATE After reading Shai's comment and answer, I have made the following changes: - I made the num_output of my last fully connected layer 6, as I have 6 labels (used to be 1). - I now create a one-hot vector and pass that as a label into my HDF5 dataset as follows f['label'] = numpy.array([1, 0, 0, 0, 0, 0]) Trying to run my network now returns Check failed: hdf_blobs_[i]->shape(0) == num (6 vs. 1) After some research online, I reshaped the vector to a 1x6 vector. This lead to the following error: Check failed: outer_num_ * inner_num_ == bottom[1]->count() (40 vs. 240) Number of labels must match number of predictions; e.g., if softmax axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}. My idea is to add 1 label per data set (image) and in my train.prototxt I create batches. Shouldn't this create the correct batch size? Answer: Since you moved from regression to classification, you need to output not a scalar to compare with "label" but rather a probability vector of length num-labels to compare with the discrete class "label". You need to change num_output parameter of the layer before "SoftmaxWithLoss" from 1 to num-labels. I believe currently you are accessing un-initialized memory and I would expect caffe to crash sooner or later in this case. Update: You made two changes: num_output 1-->6, and you also changed your input label from a scalar to vector. The first change was the only one you needed for using "SoftmaxWithLossLayer". Do not change label from a scalar to a "hot-vector". Why? Because "SoftmaxWithLoss" basically looks at the 6-vector prediction you output, interpret the ground-truth label as index and looks at -log(p[label]): the closer p[label] is to 1 (i.e., you predicted high probability for the expected class) the lower the loss. Making a prediction p[label] close to zero (i.e., you incorrectly predicted low probability for the expected class) then the loss grows fast. Using a "hot-vector" as ground-truth input label, may give rise to multi-category classification (does not seems like the task you are trying to solve here). You may find this SO thread relevant to that particular case. Question: I have a big dataset (300.000 examples x 33.000 features), which of course does not fit the memory. The data are saved in HDF5 format. The values are mostly zeros (sparse data). They look like this: Attr1 52 52 52 52 52 52 52 52 ... Attr2 umb umb umb umb umb umb umb umb ... CellID TGC-1 TGG-1 CAG-1 TTC-1 GTG-1 GTA-1 CAA-1 CAC-1 ... Acc Gene ... 243485 RP11-.3 0 0 0 0 0 0 0 0 ... 237613 FAM138A 0 0 0 0 0 0 0 0 ... 186092 OR4F5 0 0 0 0 0 0 0 0 ... 238009 RP11-.7 0 0 0 0 0 0 0 0 ... 239945 RP11-.8 0 0 0 0 0 0 0 0 ... 279457 FO538.2 0 0 0 0 0 0 0 0 ... 228463 AP006.2 0 0 0 0 0 0 0 0 ... ... ... ... ... ... ... ... ... ... ... I have done the following that works, to load the whole dataset in TensorFlow ( loompy is just a package using hdf5 on the background): import tensorflow as tf import numpy as np import loompy as lp batch_size = 1000 with loompy.connect(filename, 'r') as ds: ds_shape = (batch_size, ds.shape[0]) ds_dtype = ds[0:1, 0:1].dtype labels = np.asarray([ds.ca.CellID, ds.ca.Attr1]).T labels_shape = (batch_size, 1) data_placeholder = tf.placeholder(ds_dtype, ds_shape) labels_placeholder = tf.placeholder(labels[:,1].dtype, labels_shape) dataset = tf.data.Dataset.from_tensor_slices((data_placeholder, labels_placeholder)) dataset = dataset.prefetch(batch_size) iterator = dataset.make_initializable_iterator() next_element = iterator.get_next() with tf.Session() as sess: with loompy.connect(filename, 'r') as ds: for i in range(0, ds.shape[1], batch_size): batch = ds[0 : ds_shape[1], i : i + batch_size].T batch_labels = np.asarray([ds.ca.CellID[i : i + batch_size], ds.ca.Attr1[i : i + batch_size]]).T[:,1] sess.run(iterator.initializer, feed_dict = {data_placeholder: batch, labels_placeholder: batch_labels.reshape(batch_size, 1)}) for _ in range(batch_size): print(sess.run(next_element)) Output: (array([0, 0, 0, ..., 0, 0, 0], dtype=int32), array([b'52'], dtype=object)) (array([0, 0, 0, ..., 0, 0, 0], dtype=int32), array([b'52'], dtype=object)) ... This way however, I am not able to split my data in train, test and evaluation sets. Also, I can only shuffle them inside each batch, which is not effective since most times the data on a batch belong to the same class. How do I manipulate this kind of data to be able to load them as train, test, evaluation sets, and perform shuffling etc. (preferably by utilizing my TitanX GPU as much as possible)? Answer: You should definitely try Dask, it allows you to work with data not fitting in memory and it paralyzes computation so that you can use all cores of your cpu. Also I recommend moving your data from hdf to parquet, it allows concurrent reads and writes which speeds things up. Please see the link where Wes McKinney (pandas creator) goes in depth and compares it with other formats. You could prepare snippets in Dask that prepare train, test and validation sets and read them without exceeding available memory. Question: I have a dataset where the images have VARYING number of labels. The number of labels is between 1 and 5. There are 100 classes. After googling, it seems like HDF5 db with slice layer can deal with multiple labels, as in the following URL. The only problem is that it supposes a fixed number of labels. Following this, I would have to create a 1x100 matrix, where entry value is 1 for the labeled classes, and 0 for non-label classes, as in the following definition: layers { name: "slice0" type: SLICE bottom: "label" top: "label_matrix" slice_param { slice_dim: 1 slice_point: 100 } } where each image contains a a label looking like (1,0,0,...1,...0,....,0,1) where the vector size is 100 dimension. Now, I apologize that my question becomes somehow vague, but is this a feasible idea? I.e., is there a better approach to this problem? Answer: I get that you have 5 types of labels that are not always present for each data point. 1 of the 5 labels is for 100-way classification. Correct so far? I would suggest always writing all 5 labels into your HDF5 and use a special value for when the label is missing. You can then use the missing_value option to skip computing the loss for that layer for that iteration. Using it requires add loss_param{ ignore_label = Y } to the loss layer in your network prototxt definition where Y is a scalar. The backpropagated error will only be a function of labels that are present. If input X does not have a valid value for a label, the network will still produce an estimate for that label. But it will not be penalized for it. The output is produced without any effect on how the weights are updated in that iteration. Only outputs for non-missing labels contribute to the error signal and the weight gradients. It seems that only the Accuracy and SoftmaxWithLossLayer layers support missing_values. Each label is a 1x5 matrix. The first entry can be for the 100-way classification (e.g. [0-99]) and entries 2:5 have scalars that reflect the values that the other labels can take. The order of the columns is the same for all entries in your dataset. A missing label is marked by a special value of your choosing. This special value has to lie outside the set of valid label values. This will depend on what those labels represent. If a label value of -1 never occurs you can use this to flag a missing label. Question: I have the following h5 files listed in train.txt which I am giving to the hdf5 data layer. /home/foo/data/h5_files/train_data1.h5 /home/foo/data/h5_files/train_data2.h5 /home/foo/data/h5_files/train_data3.h5 /home/foo/data/h5_files/train_data4.h5 /home/foo/data/h5_files/train_data5.h5 I have 3 datasets - X, Meta and Labels in these files. Initially, I kept all these in 1 h5 file, but since caffe can't handle h5 files bigger than 2 GB, I had to divide X (say X consists of 5000 samples), in 5 parts. In the first h5, I have Meta and Labels stored along with the first part, i.e; 1000 samples of X, and in the remaining 4 h5 files, I have 1000 samples each. When I start finetuning, caffe crashes with the following error message 0111 07:46:54.094041 23981 layer_factory.hpp:74] Creating layer data net.cpp:76] Creating Layer data net.cpp:334] data -> X net.cpp:334] data -> Labels net.cpp:334] data -> Meta net.cpp:105] Setting up data hdf5_data_layer.cpp:66] Loading list of HDF5 filenames from: /home/foo/hdf5_train.txt hdf5_data_layer.cpp:80] Number of HDF5 files: 5 hdf5_data_layer.cpp:53] Check failed: hdf_blobs_[i]->num() == num (5000 vs. 1000) *** Check failure stack trace: *** @ 0x7f1eebcab0d0 google::LogMessage::Fail() @ 0x7f1eebcab029 google::LogMessage::SendToLog() @ 0x7f1eebcaaa07 google::LogMessage::Flush() @ 0x7f1eebcad98f google::LogMessageFatal::~LogMessageFatal() @ 0x7f1ef18ff045 caffe::HDF5DataLayer<>::LoadHDF5FileData() @ 0x7f1ef18fdca4 caffe::HDF5DataLayer<>::LayerSetUp() @ 0x7f1ef196bffc caffe::Net<>::Init() @ 0x7f1ef196e0b2 caffe::Net<>::Net() @ 0x7f1ef18cf3cd caffe::Solver<>::InitTrainNet() @ 0x7f1ef18cfa3f caffe::Solver<>::Init() @ 0x7f1ef18cfe75 caffe::Solver<>::Solver() @ 0x40a3c8 caffe::GetSolver<>() @ 0x404fb1 train() @ 0x405936 main @ 0x3a8141ed1d (unknown) @ 0x4048a9 (unknown) The main thing, according to me is 'Check failed: hdf_blobs_[i]->num() == num (5000 vs. 1000)' From which I assume that caffe is reading only the first h5 file, how can I make it read all 5 h5 files? Please help! Answer: How do you expect caffe synchronize all your input data across all the files? Do you expect it to read X from the second file and Meta from the first? If you were to implement "HDF5Data" layer, how would you expect the data to be laid out for you? The way things are implemented in caffe at the moment, ALL variables should be divided between the HDF5 files in the same manner. That is if you decided that X will be divided into 5 files, with e.g., 1000 samples in the first file, 1234 samples in the second etc. Then you must divide Meta and Labels in the same manner: train_data1.h5 will have 1000 samples of X, Meta and Label; train_data2.h5 will have 1234 samples of X, Meta and Label and so forth. Caffe does not load all the data into memory, it only fetches the batch it needs for the SGD iteration. Therefore, it makes no sense to split the variables across different files. Moreover, It might help if you make the number of samples stored at each file an integer multiplicity of your batch_size. Question: I have a hdf5 layer that read the information from the list.txt as layer { name: "data" type: "HDF5Data" top: "data" top: "label" include { phase: TEST } hdf5_data_param { source: "./list.txt" batch_size: 4 shuffle: true } } where list.txt contains two path files /home/user/file1.h5 /home/user/file2.h5 while the batch size is 4. What is happen with above code? Can the data choose 4 files to feed the network? Answer: You have two hdf5 files, but each file may contain more than a single training example. Thus, effectively, you may have far more than batch_size: 4 examples. Caffe does not really cares about the actual number of training examples: when it finishes to process all the examples (aka "epoch") it simply starts over reading the samples again. Caffe cycles through all the samples until number of training/testing iteration is reached. Question: I am trying to test my caffe model by feeding a blob with all ones in it. So I form a hdf5 file by: import h5py, os import numpy as np SIZE = 227 # fixed size to all images X = np.ones((1, 3, SIZE, SIZE), dtype='f8') with h5py.File('test_idty.h5','w') as H: H.create_dataset('img', data=X ) with open('test_h5_idty_list.txt','w') as L: L.write( '/home/wei/deep_metric/test_idty.h5' ) Then, I change my caffe prototxt to be: layer{ name:"data" type:"HDF5Data" top:"img" include:{ phase:TEST } hdf5_data_param{ source:"/home/wei/deep_metric/test_h5_idty_list.txt" batch_size:1 } } Then, I try to make sure my data is fed correctly by: net = caffe.Net(Model,Pretrained,caffe.TEST) data = net.blobs['img'].data.copy() However, this gives me all zeros in the matrix. Any idea on how to solve it? Appreciated! Answer: In order for "HDF5Data" layer to read it's first batch you need to call net.forward() first. Once a forward pass is done, the tops of the layer has the data read from files. import numpy as np import os text = 'train' text_dir = text + '.txt' data = np.genfromtxt(text_dir, delimiter=" ", dtype=None) h = h5py.File(text + '.hdf5', 'w') h.create_dataset('data', data=data[:1]) h.create_dataset('label', data=data[1:]) with open(text + "_hdf5.txt", "w") as textfile: textfile.write(os.getcwd() + '/' +text + '.hdf5') But this does not work! Any ideas what could be wrong? Answer: It does not work because your 'data' is /path/to/image instead of the image itself. See this answer for more information., os import caffe import numpy as np SIZE = 256 with open( 'train.txt', 'r' ) as T : lines = T.readlines() count_files = 0 split_after = 1000 count = -1 # If you do not have enough memory split data into # multiple batches and generate multiple separate h5 files X = np.zeros( (split_after, 3, SIZE, SIZE), dtype='f4' ) y1 = np.zeros( (split_after, 1), dtype='f4' ) y2 = np.zeros( (split_after, 1), dtype='f4' ) for i,l in enumerate(lines): count += 1 sp = l.split(' ') img = caffe.io.load_image( sp[0] ) img = caffe.io.resize( img, (3, SIZE, SIZE) ) X[count] = img y1[count] = float(sp[1]) y2[count] = float(sp[2]) if (count+1) == split_after: with h5py.File('train_' + str(count_files) + '.h5','w') as H: H.create_dataset( 'X', data=X ) # note the name X given to the dataset! H.create_dataset( 'y1', data=y1 ) H.create_dataset( 'y2', data=y2 ) X = np.zeros( (split_after, 3, SIZE, SIZE), dtype='f4' ) y1 = np.zeros( (split_after, 1), dtype='f4' ) y2 = np.zeros( (split_after, 1), dtype='f4' ) with open('train_h5_list.txt','a') as L: L.write( 'train_' + str(count_files) + '.h5') # list all h5 files you are going to use count_files += 1 count = 0 In fact I want to estimate angles. That means I have two classes one for vertical angles one for horizontal angles. The first class ranges from 0-10 degrees the second from 10-20 and so on (for both horizontal and vertical angles). How would the .prototxt look like? Here are my last layers layer { name: "fc8" type: "InnerProduct" bottom: "fc7" top: "fc8" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 36 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "loss" type: "SoftmaxWithLoss" bottom: "fc8" bottom: "y" top: "loss" } Answer: You also need to modify the input layer: now you have three tops: layer { type: "HDF5Data" name: "data" top: "X" top: "y1" top: "y2" # ... params and phase } Now, the top of your fc7 serves as a "high level descriptor" of your data, from which you wish to predict y1 and y2. Thus, after layer fc7 you should have: layer { type: "InnerProduct" name: "class_y1" bottom: "fc7" top: "class_y1" #... params num_output: 36 } layer { type: "SoftmaxWithLoss" # to be replaced with "Softmax" in deploy name: "loss_y1" bottom: "class_y1" bottom: "y1" top: "loss_y1" # optionally, loss_weight } And: layer { type: "InnerProduct" name: "class_y2" bottom: "fc7" top: "class_y2" #... params num_output: 36 } layer { type: "SoftmaxWithLoss" # to be replaced with "Softmax" in deploy name: "loss_y2" bottom: "class_y2" bottom: "y2" top: "loss_y2" # optionally, loss_weight } Question: I want to train a net to recognize some RGB in image. (input: 256X256 images and some RGB value). I wrote a script that creates HDF5 file for float multi-labels: import h5py, os import caffe import numpy as np SIZE = 256 # images size with open( '/home/path/images ): lines = T.readlines() X = np.zeros( (len(lines), SIZE, SIZE, 3), dtype='f4' ) r = np.zeros( (len(lines),1), dtype='f4' ) g = np.zeros( (len(lines),1), dtype='f4' ) b = np.zeros( (len(lines),1), dtype='f4' ) for i,l in enumerate(lines): sp = l.split(' ') img = caffe.io.load_image( sp[0] ) # img = caffe.io.resize( img, (3, SIZE, SIZE) ) # resize to fixed $ print img X[i] = img # print X[i] r[i] = float(sp[1]) g[i] = float(sp[2]) b[i] = float(sp[3]) print "R" + str(r[i]) + "G" + str(g[i]) + "B" + str(b[i]) with h5py.File('/home/path/train.h5','$ H.create_dataset('X', data=X) H.create_dataset('r', data=r) H.create_dataset('g', data=g) H.create_dataset('b', data=b) with open('/home/path/train_h5_list.tx$ L.write( 'train.h5' ) # list all h5 files Im using a multi-label regression net. when i run TRAIN on this net with my dataset (HDF5) i got this error: name: "FKPReg" state { phase: TRAIN } layer { name: "fkp" type: "HDF5Data" top: "data" top: "label" include { phase: TRAIN } hdf5_data_param { source: "/home/path/train_h5_list.txt" batch_size: 64 } } layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" convolution_param { num_output: 32 kernel_size: 11 stride: 2 bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu2" type: "ReLU" bottom: "conv1" top: "conv1" } layer { name: "pool1" type: "Pooling" bottom: "conv1" top: "pool1" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { name: "conv2" type: "Convolution" bottom: "pool1" top: "conv2" convolution_param { num_output: 64 pad: 2 kernel_size: 7 group: 2 bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" } layer { name: "pool2" type: "Pooling" bottom: "conv2" top: "pool2" pooling_param { pool: MAX kernel_size: 2 stride: 2 } } layer { name: "norm2" type: "LRN" bottom: "pool2" top: "norm2" lrn_param { local_size: 3 alpha: 5e-05 beta: 0.75 norm_region: WITHIN_CHANNEL } } layer { name: "conv3" type: "Convolution" bottom: "norm2" top: "conv3" convolution_param { num_output: 32 pad: 1 kernel_size: 5 bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" } layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" convolution_param { num_output: 64 pad: 1 kernel_size: 5 bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" } layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" convolution_param { num_output: 32 pad: 1 kernel_size: 5 bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" } layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5" pooling_param { pool: MAX kernel_size: 4 stride: 2 } } layer { name: "drop0" type: "Dropout" bottom: "pool5" top: "pool5" dropout_param { dropout_ratio: 0.5 } } layer { name: "ip1" type: "InnerProduct" bottom: "pool5" top: "ip1" inner_product_param { num_output: 100 bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu4" type: "ReLU" bottom: "ip1" top: "ip1" } layer { name: "drop1" type: "Dropout" bottom: "ip1" top: "ip1" dropout_param { dropout_ratio: 0.5 } } layer { name: "ip2" type: "InnerProduct" bottom: "ip1" top: "ip2" inner_product_param { num_output: 3 bias_filler { type: "constant" value: 0.1 } } } layer { name: "relu22" type: "ReLU" bottom: "ip2" top: "ip2" } layer { name: "loss" type: "EuclideanLoss" bottom: "ip2" bottom: "label" top: "loss" } I1106 11:47:52.235343 28083 layer_factory.hpp:74] Creating layer fkp I1106 11:47:52.235384 28083 net.cpp:90] Creating Layer fkp I1106 11:47:52.235410 28083 net.cpp:368] fkp -> data I1106 11:47:52.235443 28083 net.cpp:368] fkp -> label I1106 11:47:52.235481 28083 net.cpp:120] Setting up fkp I1106 11:47:52.235496 28083 hdf5_data_layer.cpp:80] Loading list of HDF5 filenames from: /home/path/train_h5_list.txt I1106 11:47:52.235568 28083 hdf5_data_layer.cpp:94] Number of HDF5 files: 1 HDF5-DIAG: Error detected in HDF5 (1.8.11) thread 140703305845312: #000: ../../../src/H5F.c line 1586 in H5Fopen(): unable to open file major: File accessibilty minor: Unable to open file #001: ../../../src/H5F.c line 1275 in H5F_open(): unable to open file: time = Sun Nov 6 11:47:52 2016 , name = 'train.h5', tent_flags = 0 major: File accessibilty minor: Unable to open file #002: ../../../src/H5FD.c line 987 in H5FD_open(): open failed major: Virtual File Layer minor: Unable to initialize object #003: ../../../src/H5FDsec2.c line 343 in H5FD_sec2_open(): unable to open file: name = 'train.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0 major: File accessibilty minor: Unable to open file F1106 11:47:52.236398 28083 hdf5_data_layer.cpp:32] Failed opening HDF5 file: train.h5 *** Check failure stack trace: *** @ 0x7ff809dfcdaa (unknown) @ 0x7ff809dfcce4 (unknown) @ 0x7ff809dfc6e6 (unknown) @ 0x7ff809dff687 (unknown) @ 0x7ff80a194406 caffe::HDF5DataLayer<>::LoadHDF5FileData() @ 0x7ff80a192c98 caffe::HDF5DataLayer<>::LayerSetUp() @ 0x7ff80a173be3 caffe::Net<>::Init() @ 0x7ff80a175952 caffe::Net<>::Net() @ 0x7ff80a15bbf0 caffe::Solver<>::InitTrainNet() @ 0x7ff80a15cbc3 caffe::Solver<>::Init() @ 0x7ff80a15cd96 caffe::Solver<>::Solver() @ 0x40c5d0 caffe::GetSolver<>() @ 0x406611 train() @ 0x404bb1 main @ 0x7ff80930ef45 (unknown) @ 0x40515d (unknown) @ (nil) (unknown) Aborted (core dumped) What am I doing wrong? thanks Answer: A few comments caffe.io.resize( img, (3, SIZE, SIZE) )- this is WRONG. you need to resize to (SIZE, SIZE)and the transposeto (3, SIZE, SIZE). resizeshould affect only the spatial dimensions of the image, and transposeshould take care of arranging the channeldimension before height and width. Consequently, the shapeof Xshould be (len(lines), 3, SIZE, SIZE). If your HDF5file has datasets X, r, gand b, then your "HDF5Data"layer can have "top"s "X", "r", "g"and/or "b". You cannot have "data"or "label"as "top"s since there are no such datasets in the input hdf5 file. The error message you got states (quite clearly) that error message = 'No such file or directory' This usually means that train.h5is not in the search path. Try writing full path to /home/path/train_h5_list.txt. Question: I'm preparing to train in Caffe using data in a hdf5 file. This file also contains the per-pixel mean data/image of the training set. In the file 'train_val.prototxt' for the input data layer in the section 'transform_params' it is possible to use a mean_file to normalize the data, usually in binaryproto format, for example for the ImageNet Caffe tutorial example: transform_param { mirror: true crop_size: 227 mean_file: "data/ilsvrc12/imagenet_mean.binaryproto" } For per-channel normalization one can instead use mean_value instead of mean_file. But is there any way to use mean image data directly from my database (here hdf5) file? I have extracted the mean from the hdf5 to a numpy file but not sure if that can be used in the prototxt either or converted. I can't find info about this in the Caffe documentation. Answer: AFAIK, "HDF5Data" layer does not support transformations. You should subtract the mean values yourself when you store the data to HDF5 files. If you want to save a numpy array in a binaryproto format, you can see this answer for more details.
https://thetopsites.net/projects/neural-network/hdf5.shtml
CC-MAIN-2021-31
refinedweb
4,916
59.19
tsleep, msleep, rwsleep, wakeup, wakeup_n, wakeup_one— #include <sys/param.h> #include <sys/systm.h>int tsleep(void *ident, int priority, const char *wmesg, int timo); int msleep(void *ident, struct mutex *mtx, int priority, const char *wmesg, int timo); int rwsleep(void *ident, struct rwlock *rwl, int priority, const char *wmesg, int timo); void wakeup(void *ident); void wakeup_n(void *ident, int count); void wakeup_one(void *ident); tsleep(), msleep() and rwsleep() are used throughout the kernel whenever processing in the current context cannot continue for any of the following reasons: wakeup(), wakeup_n(), and wakeup_one() functions are used to notify sleeping processes of possible changes to the condition that caused them to go to sleep. Typically, an awakened process will -- after it has acquired a context again -- retry the action that blocked its operation to see if the “blocking” condition has cleared. The tsleep() function takes the following arguments: wakeup() to get the process going again. ident should not be NULL. PCATCHis OR'ed into priority the process checks for posted signals before and after sleeping. p_wmesg) for user level utilities such as ps(1). timo/hzseconds. If this amount of time elapses and no wakeup(ident) has occurred, and no signal (if PCATCHwas set) was posted, tsleep() will return EWOULDBLOCK. msleep() function behaves just like tsleep(), but takes an additional argument: PNORELOCKflag is set in the priority argument. rwsleep() function behaves just like tsleep(), but takes an additional argument: PNORELOCKflag is set in the priority argument. wakeup() function will mark all processes which are currently sleeping on the identifier ident as runnable. Eventually, each of the processes will resume execution in the kernel context, causing a return from tsleep().. The wakeup_n() and wakeup_one() functions behave similarly to wakeup() except that only count or one process, respectively, is marked runnable. tsleep(), msleep() and rwsleep() return 0 if they return as a result of a wakeup(). If they return as a result of a signal, the return value is ERESTARTif the signal has the SA_RESTARTproperty (see sigaction(2)), and EINTRotherwise. If they return as a result of a timeout, the return value is EWOULDBLOCK.
https://man.openbsd.org/wakeup.9
CC-MAIN-2018-34
refinedweb
352
51.38
Images,.xml), an internationalization string table (Strings.xml) and some icons (drawable/Icon.png) would keep its resources in the "Resources" directory of the application: Resources/ Drawable/ Icon.png Layout/ Main.axml Values/ Strings.xml In order to get the build system to recognize Android resources, the build action should be set to "AndroidResource". Resource class would expose: public class Resource { public class Drawable { public const int Icon = 0x123; } public class Layout { public const int Main = 0x456; } public class String { public const int FirstString = 0xabc; public const int SecondString = 0xbcd; } } You would then use Resource.Drawable.Icon to reference the Drawable/Icon.png file, or Resource.Layout.Main to reference the Layout/Main.axml file, or Resource.String.FirstString to reference the first string in the dictionary file Values/Strings.xml.)
https://www.codeproject.com/script/Articles/ViewDownloads.aspx?aid=504011&zep=FoodTypeSectionList%2FFoodTypeSectionList%2FResources%2FAboutResources.txt&rzp=%2FKB%2Fandroid%2F504011%2F%2FFoodTypeSectionList_4.0.3.zip
CC-MAIN-2017-22
refinedweb
131
53.58
Joe is a programmer at a major hardware vendor. He is a graduate of Georgetown University and currently lives in the Washington, D.C. area. Joes can be contacted at [email protected]. In my article, "Exception Handlers and Windows Applications" (Dr. Dobb's Sourcebook of Windows Programming, Fall 1994), I discussed issues relative to Windows exception handlers, including the System VM, DPMI, and numerous protected-mode concepts such as selectors. In doing so, I presented TrapMan, a Windows debugging tool for analyzing exceptions in Windows applications. In this article, I'll enhance TrapMan by adding features to the trap handlers for displaying exception registers, dumping the exception stack, identifying the faulting application, and more. A (usually) nonfatal exception is Interrupt 11 (Trap B) or Segment Not Present. The Windows kernel processes this message when demand loading segments of Windows applications. TrapMan watches these segments because it can be useful when you need to wait until a segment is loaded to set a breakpoint for debugging. It also gives you some idea of how often Windows environments are processing exceptions under the covers, even in small applications. Note that Interrupt 11 is very different from Interrupt 14 (Page Fault). Page Faults are handled by WIN386, while Segment Not Present faults are handled by KRNL386.EXE and friends. I will not discuss Page Faults here. TrapMan is a Windows application that should run in any protected-mode version of the 16-bit Windows environment, including Win-OS/2 2.1 and 2.11. I've even run it under NT's WOW, the multithreaded DOS-box subsystem for 16-bit apps. Additionally, if you wish to debug a currently faulting application caught in one of TrapMan's handlers, you will need to be running with a debugger capable of processing unowned Int 3hs in code, as Trapman uses the Intel INT3 instruction to return control to a waiting debugger. I prefer Nu-Mega's (Nashua, NH) Soft-Ice for Windows for debugging DOS-based versions of the Windows environment and the OS/2 kernel debugger for debugging OS/2-based versions (I use both on almost a daily basis). Unlike some debuggers, both of these can handle an Int 3h instruction that they themselves did not place in the code. Intel documentation often refers to exceptions as "interrupts," Windows refers to them as "faults," and OS/2, as "traps." Interrupt is followed by a decimal number, trap, by a hex number, and fault is usually preceded by the name of the exception. In other words, Interrupt 13, General Protection Fault, and Trap D are all the same thing when discussing exceptions. For this article, I'll generally use the Windows versions of these names to avoid questions about the base of numbers given. TrapMan Background I developed TrapMan with Microsoft C 6.x, the Windows 3.1 SDK, and a MASM 5.1-compatible assembler. It will run under Windows 3.0 but requires COMMDLG for its SAVEAS and OPEN dialogs. TrapMan will show you how to use DPMI calls to replace the default Windows and Win-OS/2 handlers in order to provide an application-specific level of depth in debugging information while running under retail Windows or Win-OS/2. All of the source code, including related files and executables, is available electronically; see "Availability," page 3. Fatal exceptions such as Stack Faults generally cause the operating system to terminate the task producing the exception. A nonfatal exception permits the task to continue at the instruction causing the fault after the operating system has processed the fault so that the current instruction will no longer cause an exception. An example would be a Segment Not Present fault. It is possible to write an exception handler in C, as in Listing One. This handler simply calls the Windows API DebugBreak() to interrupt to a waiting debugger (if available), and then calls FatalExit() to exit the faulting task. Notice that the handler does not attempt any access to program data. You can see why by looking at the mixed listing file created by the compiler for the handler in Listing Two. Notice that no segment registers are set in this routine. While CS is set through the action of calling this handler, no other segment registers are valid. These segment registers are set during C initialization and are not normally changed during the "life" of a Windows program. If the handler attempted to access C data, the handler would very likely GPFault (as DS is a random, possibly completely invalid value), which would cause the handler to be called repeatedly. (For more details, see Windows Internals, by Matt Pietrek, Addison-Wesley, 1993.) The HANDLER example in the Windows SDK shows one way to make sure DS is valid. HANDLER sets an interrupt handler and guarantees accessibility to DS by exporting the interrupt handler. You need to be sure that your C compiler is generating correct code for a Windows prolog so that the Windows loader will set DS correctly (or you may need to call MakeProcInstance() yourself to force DS to be correct). The easier way to make sure DS is valid is simply not to use DS! This is the technique used in TrapMan's handlers, which store information that they need to access at exception time inside the handler code segments. In other words, TrapMan is self-modifying code (even though the changes are mostly data). As code segments are shared across multiple instances, any modifications made by the second or greater instances would modify the data for all instances, including the first; therefore, only one instance of TrapMan is permitted. In order to make TrapMan's exception handlers as flexible as possible, I wrote the handlers in assembly language. The other modules (window functions and the like, the scaffolding of TrapMan) are written in C to make that code as uncomplicated as possible. Lastly, TrapMan makes extensive use of 16-bit DPMI services to monitor exceptions, and will only work in those systems that supply a 16-bit DPMI host of at least the 0.90 level (as do all 16-bit versions of Windows and Win-OS/2). Of course, such techniques should also work in protected-mode DOS applications, provided a DPMI host meeting these requirements is available, but such applications will not be discussed in this article. Why Exception Handlers? While Windows (and Win-OS/2) already supply their own exception handlers, these handlers do not help you gather information on the fault. In many cases, only the address of the faulting instruction is available. While helpful, a raw address is not very useful when taken out of context. Wouldn't it be nice to have registers, flags, or a stack dump--even without a debugger? TrapMan will help you do this. Is it dangerous to set exception handlers directly from an executable? The Windows 3.1 Guide to Programming states: Because interrupts can occur at any time, not just during the execution of the application that is using the device, device interrupt-handling code must be in a fixed segment. This also applies to exception handlers. Likewise, the Windows 3.1 Multimedia Programmer's Reference notes that interrupt (and so, exception) handlers "must reside in a DLL," and the handler data and code segments "must be specified as FIXED." The danger is that the code or data needed for the exception handler might have been discarded. While Windows 386 Enhanced mode and OS/2 both support paging, their underlying Windows systems do not. Windows and Win-OS/2 use segment-level linear memory; code and data are loaded on a per segment basis (via Interrupt 11, Segment Not Present fault). The Intel documentation specifically states that any two Contributory Exceptions (Divide By Zero, Segment Not Present, Stack Fault, or GP Fault) will generate a Double Fault, and the processor will enter shutdown mode. (See the i486 Processor Programmer's Manual, Table 9-4.) In other words, if an application GPFaults and the processor attempts to demand load the handler segment, a double fault would result. If it really worries you, put your exception handlers in a DLL with FIXED segments as Microsoft requires. Another possibility is to page lock the memory via GlobalPageLock(), but this function is only available in Windows Enhanced mode and is not available in all versions of the Windows operating environment. I have left TrapMan's handlers in an application instead of a library for simplicity's sake, with the code PRELOAD and NONDISCARDABLE. (For a very readable discussion on memory-segment attributes like PRELOAD, refer to Pietrek.) You'll see sections of TrapMan's handlers where the handlers check for code movement (because application code is always MOVEABLE), but this isn't too time-consuming. Should you use exception handlers all the time? No. It is probably best to leave the current Windows handlers alone, especially if you are shipping a retail version of your application. The Windows handlers, while not useful for debugging, are generic and work well with all Windows applications. If you are writing a debugging tool such as TrapMan, you will probably want to replace the Windows handlers. TrapMan will (at the user's option) either replace the Windows handler or hook it. Replacing a Windows exception handler means that only TrapMan's handler will be active. Hooking the Windows handler means that TrapMan will call the original Windows handler after first preprocessing the Windows exception. Of course, replacing a Windows or Win-OS/2 handler does not remove the handler from memory; the DPMI host simply calls us instead of them. If you write a regular (nondebug) application, you must be careful to only install your handlers by user choice. It should then be perfectly okay to ship exception handlers in your application (in testing TrapMan, I've run into at least one mainstream Windows application that did so). However, I would certainly not leave them in by default. I envision the following scenario: A user calls your support line to report a problem; the support team has the customer turn on exception handling and reproduce the problem. The customer ships you a file with exception information. You fix the bug! Easy, huh? Exception handling is not something you'd want to leave on all the time. If everybody replaced the Windows handlers unnecessarily, who knows what would happen? Setup for the Handlers One of the first things TrapMan does at startup is set the default values for this debugging session (see Listing Three). This routine sets up handlers for the exceptions our user wishes to watch. These values are currently hard-coded, but the code could easily be changed to read defaults from an .INI file. The options are as follows: - Save trap settings. Exiting TrapMan will cause this session's settings to become the default. - Nuke app. Trapping applications will be terminated. Don't turn this option off unless you are calling the Windows handlers, or a faulting application will fault endlessly as the trap handlers aren't recovering from the trap. - Call PrevHandler. Calls the handler of a fault that was active when TrapMan added its own handlers. This usually means the handlers in the Windows kernel will be called. Note that this is post-processing. TrapMan's handlers will have already processed the exception by the point at which we call the previous handler. - Break on fault. TrapMan will attempt to break to a debugger via Int 3h at fault time. Application fault registers are preserved except for CS:IP and SS:SP (which are available on the DPMI exception frame). - Beep on fault. Beeps to let you know that a fault has occurred. This reminds you to look at your debugger. If you're using the Break on Fault option, TrapMan will be unable to paint the edit control with the debug information until you've released your debugger with a GO command. - Intercept OutputDebugString(). Allows TrapMan to avoid the need for a serial connection to write debug information with OutputDebugString()--no more CANNOT WRITE TO DEVICE AUX messages! Note that this is a replacement and not a hook; the original Windows kernel routine is not called (although the original call will be restored if you uncheck this option from the menu). - Add CRLF to ODS() strings. Adding a carriage return/line feed to OutputDebugString() arguments improves readability in the edit control. If the arguments already have CRLFs, then this option is unnecessary and should not be used. - DebugBreak() on <PrntScrn>. This would be a nice hook into a debugger, but I haven't had time to implement it. For debugging purposes, TrapMan also can launch a single application from the command line. For convenience, TrapMan uses standard C argument processing to extract these values (see Listing Four). This code may be specific to your C library startup source code. Please check your compiler for more information. The current implementation works with Microsoft C 6.x. Using SendMessage() guarantees that our exception handlers are set before any application the user gave on the command line is launched (thus, any faults that occur on launch of the application are caught by TrapMan). You should do something similar in your application to ensure that your handlers are available as soon as needed. Remember that SendMessage() is processed through an immediate call to your window procedure, while PostMessage() messages are handled later. As mentioned previously, handlers store information in their code segments that is needed during exception processing. You'll find TrapMan's SetVars() routine in Listing Five. Currently, TrapMan creates and stores a DS alias to our HANDLER code segment and also stores the DS value for TrapMan. Both of these values will be accessed via CS from within exception handlers. Handlers for Fatal Exceptions Fatal exceptions include GPFault, Stack Fault, Invalid Op Code fault, and Divide by Zero. Note that the code in TrapMan's fatal exception handlers could be rewritten to take up less space, if necessary. You could assign specific entry points to each fatal exception and have them display exception-specific information (a text string, for example). The specific entry points could then jump to a generic handler for the rest of the exception. TrapMan's handlers are small enough that I felt that optimizing them would make them harder to understand, and only slightly more efficient. Fatal exception handlers begin with a call to the TELLDEBUGGER macro (see Listing Eight). This macro is responsible for the majority of information output to the user. For the moment, I'll discuss the Invalid OpCode exception handler found in Listing Six. The TELLDEBUGGER macro first saves the processor flags and calls the SAVEREGS macro, which saves all general 16-bit registers except the SP, IP, and CS registers. The SP register will be preserved through the normal maintenance of the stack in the handler; the CS and IP registers are not saved due to their nature--CS can only be changed through RET/ JMP/CALL instructions and the like. One reason for using the SAVEREGS macro is to save the registers in a more understandable format than the PUSHA instruction, which saves the general-purpose registers in the order of AX, CX, DX, BX according to the Intel i486 programmer's guide. This macro saves ten (decimal) words on the stack. The registers are restored by a call to the UNSAVEREGS macro. Next, the TELLDEBUGGER macro checks to see if TrapMan is to call the Windows MessageBeep() function to notify the user of an error. If the wBeepOnTrap flag is set, then MessageBeep() is called. After this, the trap message is displayed in the edit control. (This is done by calling our replacement procedure for the Windows OutputDebugString() API, which can be called regardless of whether or not we are currently replacing the OutputDebugString() API.) In this case, the message is "Trap 6!". At this point, the TELLDEBUGGER macro isolates the exception to a task. This is done through a call to GetCurrentTask(). The GetCurrentTask() API returns a Task Data Base (TDB). We'll extract the Module Data Base (MDB) from the TDB and then extract the fully qualified path to the executable, which is then displayed in the edit control. (See Undocumented Windows, by Andrew Schulman et al.) Next, the TELLDEBUGGER macro displays the DPMI exception frame in the edit control (through a call to _PrintOutFaultFrame()). Note that the argument to this procedure is a near pointer to the beginning of the fault frame (which is assumed to be on the stack and thus relative to SS). At this point, the TELLDEBUGGER macro restores the application registers through a call to UNSAVEREGS in preparation for dumping the registers to the edit control. Of course, as the act of dumping the registers might destroy some of them, we do an immediate SAVEREGS before calling _PrintOutPointerData() to display the application stack. This procedure simply takes a far pointer (which must be valid!) that will be displayed in the edit control. We then call the UNSAVEREGS macro, restore the processor flags, and the TELLDEBUGGER macro is done. This is the most complicated portion of the handler; at this point only a few things remain to be done. For starters, the macro BREAKIFUSERWANTS is called. This macro will result in an Int 3h instruction (to break to a waiting debugger) if the Break On Fault option is checked. Then the value of Call Prev Handler is tested; if checked, the previous handler is called and our processing of this exception ends. Finally, if we did not need to call the previous handler, then TrapMan is responsible for bringing down the faulting task. It does this by a call to the NUKEAPPCHECK macro, which checks the state of the wNukeApp variable. The NUKEAPPCHECK macro will call FORCEAPPEXIT to bring down the current task if the wNukeApp variable is set. It does this by resetting the Faulting CS and Faulting IP fields of the DPMI exception frame to point to the ThisAppIsHistory() procedure in TrapMan. The task will be terminated when control is returned to DPMI. Be careful! TrapMan will permit you to disable faulting-application termination without calling a previous handler (in other words, both the Call Prev Handler and Nuke App settings are unchecked). If an application faults while these settings are in effect, the faulting application will fault continuously. Someone must always process the exception! GPFaults (and most other exceptions) are restartable. Upon your handler's return to DPMI, Windows will begin execution at the current CS:IP, which will be whatever instruction is faulting, unless a handler resets it. Handlers for Nonfatal Exceptions Nonfatal exceptions, which are normal in the course of program execution, can be thought of as "requests for work" by the operating system. Two normal requests for work would be Interrupt 14 (Page Fault) and Interrupt 11 (Segment Not Present). While 16-bit Windows doesn't really concern itself with page faults (which are the responsibility of a ring 0 VxD under DOS Windows or the OS/2 kernel under OS/2), Segment Not Present faults are frequent. Problems in demand loading segments lead to the infamous "SEGMENT LOAD FAILURE" message from the Windows kernel. Nonfatal exceptions require different processing than fatal exceptions. TrapMan does not process these nonfatal exceptions itself. The hooks are only for informational purposes. The original (Windows or Win-OS/2) handler is always called, and all registers (including flags) must be preserved in our handlers for these exceptions. As an example of how a nonfatal handler might be written, let's take a look at TrapMan's Segment Not Present fault (Trap B) handler (see Listing Seven). The first important section of code resets the base of the data alias to the HANDLER code segment. You ensure that the alias points to the same address (that is, has the same base address) as the HANDLER code segment by simply setting the base of the alias to that of the code segment. (This is only necessary because our code is in an executable and cannot be FIXED. If Windows decides to move the location of the HANDLER segment after SetVars() allocates the alias, then the alias will be out of sync with the original selector, and nothing good will result.) Unlike other (fatal) fault handlers, this nonfatal fault handler does not begin with a call to the TELLDEBUGGER macro (Listing Eight). The TELLDEBUGGER macro dumps out information to the user such as fault location, stack, and DPMI-exception frame. In the case of a nonfatal exception, most of this information is not necessary, and only part of the TELLDEBUGGER function is used (in-line) in this procedure. The most important portion of the handler is that it is responsible for copying the value of _Prev11 (a variable in TrapMan's auto DS) to the variable MyFarProc in HANDLER's CS. This is necessary because only CS will be valid when you call the previous Windows handler to process the Segment Not Present fault. All other registers will be those of the application causing the fault. None of this would be necessary at fault time if _Prev11 and other variables were CS variables and directly accessible to the handlers; see Listing Nine. After all of this, TrapMan passes the nonfatal exception on to Windows by jumping to the contents of MyFarProc (which contains the address of the appropriate Windows or Win-OS/2 handler) with the jmp dword ptr cs:MyFarProc instruction. There are two important points here: - All registers at this point (except CS:IP, which will be set by the JMP instruction) must be set at the values they contained when DPMI called TrapMan. - The stack pointer (SP) must point to the beginning of the exception frame from DPMI on entrance to the native handler (this is why you must JMP to the previous handler; a CALL FAR PTR would have pushed the return address of the handler onto the stack below the DPMI exception frame and the native handler would have failed). You'll probably notice that Windows parameter-validation faults are caught by TrapMan as straight GPFaults. By default, debug information is taken, and the application causing the parameter-validation fault is terminated. For now, if you need to bypass parameter-validation errors (by passing them on to the Windows kernels), simply make sure that Call Prev Handler is checked. Windows or Win-OS/2 will then process the parameter validation normally. References The DPMI Committee. DOS Protected Mode Interface (DPMI) Specification, ver. 0.9. Intel Corp., 1990. Duncan, Ray. Power Programming with Microsoft Macro Assembler. Redmond, WA: Microsoft Press, 1992. Guide to Programming. Microsoft Corp., 1992. i486 Processor Programmer's Reference Manual. Intel Corp., 1990. Lafore, Robert. Assembly Language Primer for the IBM PC & XT. New York, NY:New American Library, 1984. Multimedia Programmer's Reference. Microsoft Corp., 1992. Pietrek, Matt. Windows Internals. Reading, MA: Addison-Wesley, 1993. Schulman, A., D. Maxey, and M. Pietrek. Undocumented Windows. Reading, MA: Addison-Wesley, 1992. Socha, John, and Peter Norton. Assembly Language for the PC, Third Edition. Carmel, IN: Brady, 1992. Thielen, David, and Bryan Woodruff. Writing Windows Virtual Device Drivers. Reading, MA: Addison-Wesley, 1994. Virtual Device Adaptation Guide. Microsoft Corp., 1992. Listing One #include <windows.h> void _far MyGPProc() // handler for Trap D (13 decimal) { DebugBreak() ; // break to a waiting debugger // Equivalent to _asm int 3h FatalExit( 13 ) ; // exit the faulting task } Listing Two ;|*** #include <windows.h> ; Line 1 ;|*** ;|*** void _far MyGPProc() // handler for Trap D (13 decimal) ;|*** { ; Line 4 PUBLIC _MyGPProc _MyGPProc PROC FAR ;|*** DebugBreak() ; // break to a waiting debugger ; Line 5 *** 000000 9a 00 00 00 00 call FAR PTR DEBUGBREAK ;|*** // Equivalent to _asm int 3h ;|*** FatalExit( 13 ) ; // exit the faulting task ; Line 7 *** 000005 b8 0d 00 mov ax,13 *** 000008 50 push ax *** 000009 9a 00 00 00 00 call FAR PTR FATALEXIT ;|*** } ; Line 8 *** 00000e cb ret *** 00000f 90 nop _MyGPProc ENDP Listing Three #define OPTIONMENU 2 // the THIRD pull down #define TRAPMENU 1 // the SECOND pull down (0 is first) hwndOptionMenu = GetSubMenu(GetMenu(hwnd), OPTIONMENU) ; hwndTrapMenu = GetSubMenu (GetMenu(hwnd), TRAPMENU); SetJumpToPrevHandler( 0 ) ; // DON'T jump to the previous fault handler SetVars ( hInstance ) ; SendMessage( hwnd, WM_COMMAND, IDM_NUKEAPP, 0L) ; // DO nuke faulting app SendMessage( hwnd, WM_COMMAND, IDM_BREAKONTRAP, 0L); // DO break to debugger SendMessage( hwnd, WM_COMMAND, IDM_BEEPONTRAP, 0L) ; // DO beep on faults SendMessage( hwnd, WM_COMMAND, IDM_DEFAULT, 0L) ; // Watch default exceptions SendMessage( hwnd, WM_COMMAND, IDM_ODSCRLF, 0L) ; Listing Four // The following is for standard C main() args using MSC 6. Please review // your compiler startup source code to see what is the proper name for // the argc/argv globals for your compiler #define argc __argc #define argv __argv extern int argc ; extern char **argv ; if (argc > 1) { int rc ; rc = WinExec(argv[1], SW_SHOW) ; if (rc < 0x20) { /* WARNING: As wsprintf is a vararg call, it cannot be ** fully prototyped. If you give an argument that should ** be a far pointer, you must cast it (as we use LPSTR below) ** to force the compiler to pass the argument as a far ** pointer. Otherwise you're likely to get garbage or trap ** instead of the text in szBuffer that you wanted */ wsprintf( szBuffer, "Cannot load '%s', error code=%d", (LPSTR) argv[1], rc) ; MessageBox(NULL, szBuffer, "TrapMan", MB_ICONEXCLAMATION | MB_OK) ; } } Listing Five ; RW == read/write, RE = read/execute _SetVars proc far push es cmp cs:wHandlerDS, 0 ; if (0 == wHandlerDS) jnz @F CREATEALIASDESCRIPTOR cs ; then jc SV_done ; allocate a data (RW) selector to our ; code segment (RE) via DPMI call to mov es, ax ; CreateAliasDescriptor assume es:handler mov ES:wHandlerDS, ax ; save in CS variable wHandlerDS jmp SV_DSset @@: ; else mov ax, cs:wHandlerDS mov es, ax assume es:handler SV_DSset: ; endif mov ax, ds mov es:wTrapManDS, ax ; save TrapMan DS in CS variable wTrapManDS SV_done: assume es:nothing pop es ret _SetVars endp Listing Six ;int _far MyInvalidOpProc() _MyInvalidOpProc proc far TELLDEBUGGER <cs>, <msgTrap6> BREAKIFUSERWANTS JmpPrevHandler <_GetJumpToPrevHandler>, <_Prev6> NUKEAPPCHECK ret _MyInvalidOpProc endp Listing Seven ;void _far MySegNotPresentProc() _MySegNotPresentProc proc far pushf ; save flags SAVEREGS ; save registers of faulting process ; the following is a modified version of the TELLDEBUGGER macro ; As SegNotPresent is a non-fatal exception, register and stack ; dumps are not necessary and will not be done push ax mov ax, offset msgTrapB push cs push ax call far ptr MyODS ; inform user of SegNotPresent fault pop ax UNSAVEREGS ; we've now reset ALL REGS (except CS:IP/SS:SP) ; to their values at the time of the fault ; Flags are still on the stack push ds ; save DS register push bx mov bx, wTrapManDS ; get access to our data segment mov ds, bx assume ds: data pop bx push ax push bx push cx push dx mov bx, wHandlerDS ; is our CS alias set? cmp bx, 0 ; 0 = NO, so skip this! jz @F ; wHandlerDS is non-zero GETSEGMENTBASEADDRESS <cs> ; Get CS base address SETSEGMENTBASEADDRESS <bx>, <cx>, <dx> ; make sure our alias points ; to CS base in case CS has ; moved. Otherwise our data ; will be unaddressable. @@: mov bx, offset _Prev11 ; dword (Windows Interrupt B handler) mov ax, word ptr DS:[bx][2] ; Sel of Windows handler mov bx, word ptr DS:[bx] ; Offset of Windows handler ; AX:BX = windows handler mov cx, wHandlerDS mov ds, cx ; DS now our CS alias assume ds: handler push bx pop cx ; now AX:CX = Windows handler mov bx, offset MyFarProc ; DS:BX now points to MyFarProc mov word ptr DS:[bx][2], ax ; save Sel of Windows handler mov word ptr DS:[bx], cx ; save Offset of Windows handler pop dx pop cx pop bx pop ax pop ds popf jmp dword ptr cs:MyFarProc ; SP now points to the DPMI exception frame ; that we had on entry retf ; this retf will never get executed as ; the Windows kernel RETF at the end of ; the handler will return to DPMI for us _MySegNotPresentProc endp Listing Eight ; the major work of the handlers -- includes the following steps: a) save ; registers of faulting process, b) call messagebeep(), c) display type of ; fault message, d) get faulting module, e) dump DPMI exception frame, ; f) dump registers, g) dump stack TELLDEBUGGER MACRO sel, var local around, arounddata, datalabel pushf SAVEREGS ; save registers of faulting process push ax call far ptr _GetMessageBeep ; see if user wants us to beep cmp ax, 0 pop ax jz around ; if 0, then don't beep push ax xor ax, ax push ax call far ptr MessageBeep pop ax around: ;; Second, display message in edit control... remember to set DS push ax mov ax, offset var ; sel:var = far pointer of message to display push sel push ax call far ptr MyODS ; put in edit control... pop ax ;; Thirdly, find a module to blame ;;--------------------- ;;--------------------- ;; Fourth, now dump the faulting frame... ;;--------------------- ;;--------------------- ;; Fifth, print out the faulting regs... ;; the above call has destroyed app registers UNSAVEREGS ;; so unsave regs of faulting process SAVEREGS ;; note that we must save them again in case ;; the user wishes to break to a debugger call far ptr _PrintOutFaultRegs ;; CS:IP, SS:SP are bad all other registers ;; are valid ;;--------------------- ;; Sixth, dump stack of faulting app... nop mov bx, sp ; BX = stack pointer add bx, 14h ; skip saved regs (pushed by the SAVEREGS macro) add bx, 2h ; skip flags on stack ; SS:BX now points to DPMI fault frame mov ax, ss:[bx][0Eh] ; [bx][0eh] = segment of faulting app's stack push ax mov ax, ss:[bx][0Ch] ; [bx][0ch] = offset of faulting app's stack push ax call far ptr _PrintOutPointerData ;; end default (retail and debug) processing ;; Seventh, (DEBUG only), write to AUX device ifdef DEBUG push ax mov ax, offset var push sel push ax call OutputDebugString pop ax endif UNSAVEREGS ;; we've now reset ALL REGS (except CS:IP/SS:SP) ;; to their values at the time of the fault ;; in case our user wants to break popf ;; and reset the flags! ENDM Listing Nine mov bx, offset _Prev11 ; dword (Windows Interrupt B ; handler) mov ax, word ptr DS:[bx][2] ; Sel of Windows handler mov bx, word ptr DS:[bx] ; Offset of Windows handler mov cx, wHandlerDS mov ds, cx ; DS now points to our DS alias ; assumes ds, handler of ; our HANDLER segment push bx pop cx mov bx, offset MyFarProc ; Update MyFarProc -- a CS DWORD mov word ptr DS:[bx][2], ax ; Sel of Windows handler mov word ptr DS:[bx], cx ; Offset of Windows handler
http://www.drdobbs.com/windows/windows-apps-and-exception-handlers/184409706
CC-MAIN-2014-52
refinedweb
5,077
59.13
Let's imagine you've populated an object in memory and then written the object to the ASP.NET runtime cache. Do you know that when you read your object from the cache... MyClass myObject = ( MyClass )HttpRuntime.Cache[ "mykey" ]; ... you will have a reference to the same object instance that the cache has access to? This means that any changes you make in your object (Fig. 2) will be seen by all future requests for the same cached object. // no so... I'm afraid every future request will also see this description myObject.Description = "My very own description"; Now this isn't necessarily a good thing. In fact, this could be very bad for you because maybe, you would like to cache an object, and then preserve the object until it expires from the cache. At the same time, new requests for this cached object might want to make slight modifications to it for their own use, so in essence, if we need to preserve the cache yet allow request modifications - we have to give each request its very own copy of the object. Answer: By cloning the object both ways (when inserting into the cache and reading from the cache), using a deep copy, and let me stress "DEEP". Let me show you some code examples I created for this article. I have created a generic CacheList class that will allow a user to add a generic type to a list and provide caching and loading from the cache. I started by creating a helper class and added a method that will allow me to clone a list. CacheList using System; using System.Collections; using System.Collections.Generic; using System.Web; using System.Web.Caching; public static class GenericsHelper { public static IList<T> CloneList<T>( IList<T> source ) where T : ICloneable { IList<T> clone = new List<T>( source.Count ); foreach ( T t in source ) { clone.Add( ( T )t.Clone( ) ); } return clone; } } Then I implemented a simple generic list that will allow me to: // call to add an item to our list Add(T item) : void // call to insert list into the runtime cache CacheIt(string cacheKey, DateTime absoluteExpiration) : void // call to load list from the data in the cache // and allow mods without affecting cache LoadCached(string cacheKey) : bool The generic type must implement ICloneable I have placed a constraint on my list which specifies that we can only stored types that implement the ICloneable interface. Let me point out that I'm assuming the implementations of this interface are performing deep copies and not shallow ones. ICloneable Use a search engine to search for ICloneable and you will no-doubt find all sorts of discussions about how good or bad it is, and to be honest I can see the arguments against using this interface. In a nutshell, developers are suggesting this method isn't clear enough; it doesn't indicate whether a object that implements it is performing a deep or a shallow copy. However, if you agree with the reasons against using it, then you don't really have a problem; just your own cloneable interface to write... and maybe you could make it generic. /// <summary> /// A simple class to demonstrate how we can protect/preserve our data /// in the cache. /// </summary> /// <typeparam name="T">Our generic type</typeparam> public sealed class CacheList<T> where T : ICloneable { private IList<T> _dataList = new List<T>( ); public void Add( T item ) { _dataList.Add( item ); } public void CacheIt( string cacheKey, DateTime absoluteExpiration ) { // // clone the data list IList<T> clone = GenericsHelper.CloneList<T>( _dataList ); // // insert into the runtime cache HttpRuntime.Cache.Insert( cacheKey, clone, null, absoluteExpiration, Cache.NoSlidingExpiration ); } public bool LoadCached( string cacheKey ) { // // perform thread-safe read from cache object obj = HttpRuntime.Cache[ cacheKey ]; // // if this obj is not null then clone it, // returning true to indicate clone success if ( obj != null && obj is IList<T> ) { _dataList = GenericsHelper.CloneList<T>( ( IList<T> )obj ); return true; } return false; } public IList<T> DataList { get { return _dataList; } } } Well, that's it for the code that preserves our cached data using cloning. This is a pretty simple example and a very simplistic implementation of a list. However, I didn't want to detract from purpose of the article. I'm going to extend this article in future and show you how this sort of cache access can be wrapped in a business/domain object. I hope you enjoyed the article, there will be many more to come. I will be writing many more articles about C#, ASP.NET and design on my blog dotnet notepad..
http://www.codeproject.com/Articles/33364/ASP-NET-Runtime-Cache-Clone-Objects-to-Preserve-Ca?fid=1535031&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None
CC-MAIN-2016-26
refinedweb
761
63.29
RationalWiki:Bureaucrat guide Congratulations, you have been further demoted to the rank of Bureaucrat. Having been here for a while, you have gained new powers, mostly for maintenance. As with your demotion to sysop, becoming a ‘crat does not make you a "better user", it simply means that you have been here for a long time. There are at present 0 Bureaucrats on RationalWiki. What looks different As a Bureaucrat, nothing looks too different at first glance, you still have the red !’s, and you still can patrol recent changes. The only real difference is in the Special Pages, where you now have two new abilities, and at the intercom, where you have one more. Editing the interface As a bureaucrat, you can change the wiki interface by editing the pages in the MediaWiki namespace. You can see a complete list at Special:AllMessages. Some of these pages are simply labels for buttons, while others are more complex and may contain wikicode or even HTML. Please be careful when editing them, and make sure you have consensus before radically altering things. Deleting pages Only bureaucrats can delete pages that have more than 2000 revisions. You'll probably never need to do that anyway, and should certainly discuss the reasons for doing so beforehand and seek a consensus. The reason for restricting this ability to bureaucrats is that, once deleted, a page with a vast number of edits will be very difficult to restore, and attempting to do so could cause the whole wiki to break down. This has happened before. Suppressing revisions Bureaucrats can hide revisions from sysops as well as normal users. You will see a new option labeled "Suppress data from administrators as well as others" in the revision delete interface. Since these deletions are logged in the suppression log, which doesn't show up on recent changes, you should first delete the revision without using the option, then check the option and delete again. Renaming user Looking around, you now have "User Rights Management" and "Rename user". The second one is innocuous enough, if someone asks to be renamed, simply follow this procedure: - Reply wherever the request was entered. - Click on the "rename user" link. - Enter the current username, and the user's desired username. - Click on "submit" - Confirm that the user has been renamed, and follow through. User rights management Demotion This is likely to be the feature you will be using the most. At the user rights management page (the link above), you can enter a user's name, then tick or untick boxes to change their membership of user rights groups such as sysops and bureaucrats. As an established user, you probably already know our loose policy towards sysopship ("mostly harmless") as well as the meme that users given sysop powers are said to be "demoted". While you can demote a user to sysop at any time, it is generally recommended that you let three days go by, observe their edits, and demote accordingly. Eventually, though, you might notice another user who is particularly established in the community, and you feel is ready to make the same calls you are making. Unlike sysopship ("mostly harmless"), there are no set criteria for demotion to a Bureaucrat, but in general, new bureaucrats should be an established user who has reacted gracefully to pressure, has been around for a while, and has made many contributions. Your individual criteria can, and probably will, differ from this very generic set of recommendations. It is generally recommended that you nominate candidates for bureaucrat at RationalWiki:Bureaucrat nominations, and let the mob decide, rather than demoting them without prior discussion. It is also a good idea to announce the nomination on the Saloon bar and/or the Intercom. Promotion As a ‘crat, you will likely demote several users, but there is the possibility that you may have to "promote" a user, from sysop to editor, or bureaucrat to sysop, or, in extreme cases, from bureaucrat to regular editor. Promotions happen rarely, and are usually because a user has been abusing their administrative abilities (such as excessive blocking or repeated deletion without discussion). The user should only be promoted following the results of an Administrative Abuse hearing. At the Abuse page, outline why you think the user should be promoted, citing examples (with links) of how they have abused their abilities. Allow time for others to discuss the abuse and the proposed promotion - at least 24 hours is recommended. If the majority favour promoting that user, then a bureaucrat can make the necessary changes in the user rights log. Bear in mind that no matter how annoyed you are at someone, you are not allowed to just promote them out of rage or spite. This is something that you are trusted, as a ‘crat, not to do. However, if someone is blatantly abusing their powers in a destructive way (such as a deletion spree or blocking rampage), you can strip them of their powers immediately to prevent further damage, but must still file an Administrative Abuse case to discuss whether they should stay promoted. Intercom As a bureaucrat, you can send intercom messages to the Site wide (urgent) group. These messages appear to anonymous users, registered users receive them by default but can opt out. You can also hide messages sent to this group from anonymous users by clicking "mark as read for anonymous users". Template If you want to, you can add {{crat pledge}} to your userpage, which produces
http://rationalwiki.org/wiki/RationalWiki:Bureaucrat_guide
CC-MAIN-2013-20
refinedweb
918
59.43
When you define a field with the readonly keyword, you have the ability to set that field's value in one place: the constructor. After that point, the field cannot be changed by either the class itself or the class's clients. Let's say you want to keep track of the screen's resolution for a graphics application. You can't address this problem with a const because the application cannot determine the end user's screen resolution until run time, so you use code like that at the top of the following page. using System; class GraphicsPackage { public readonly int ScreenWidth; public readonly int ScreenHeight; public GraphicsPackage() { this.ScreenWidth = 1024; this.ScreenHeight = 768; } } class ReadOnlyApp { public static void Main() { GraphicsPackage graphics = new GraphicsPackage(); Console.WriteLine("Width = {0}, Height = {1}", graphics.ScreenWidth, graphics.ScreenHeight); } } At first glance, this code seems to be just what you would need. However, there's one small issue: the read-onlyfields that we defined are instancefields, meaning that the user would have to instantiate the class to use the fields. This might not be a problem and could even be what you want in cases in which the way the class is instantiated will determine the read-onlyfield's value. But what if you want a constant, which is static by definition, that can be initialized at run time? In that case, you would define the field with both the static and the readonly modifiers. Then you'd create a special type of constructor called a static constructor. Static constructors are constructors that are used to initialize static fields, read-only or otherwise. Here I've modified the previous example to make the screen resolution fields static and read-only, and I've added a static constructor. Note the addition of the static keyword to the constructor's definition: - using System; class GraphicsPackage { public static readonly int ScreenWidth; public static readonly int ScreenHeight; static GraphicsPackage() { // Code would be here to // calculate resolution. ScreenWidth = 1024; ScreenHeight = 768; } } class ReadOnlyApp { public static void Main() { Console.WriteLine("Width = {0}, Height = {1}", GraphicsPackage.ScreenWidth, GraphicsPackage.ScreenHeight); } }
http://www.brainbell.com/tutors/C_Sharp/Read_Only_Fields.htm
CC-MAIN-2018-43
refinedweb
348
54.32
Adding React.js to old Django site (day 4)— React finally :) Hi guys. Previous 3 days were sacrificed to the gods of preparation. Today I gonna get bless from them and finally plug some React code into my old project. I’m planning to add React.js frontend for private part of my old Django site pokatuha.com. The path full of pain, struggle and monsters. I’m inviting you to join my trip :) The good news are: I won’t bother Django code for next few days (except one template today) before adding DRF for API. I’ll focus mainly on client side code. Bad news are: there are preparation gods in React land too. And they need some sacrifices too. I’m talking about npm, Webpack and project structure. Project structure In order to keep serverside stuff and clientside separated I need to make some changes for assets folder which is used as source for my webpack. First of all I’ll move all my old code inside /assets/serverside. I’ll also need to change webpack configuration later to keep it working with old files. Secondly I need to create folder for my new shiny client app: /assets/clientside. Now all my work will be done inside /assets/clientside. There are different approaches on how to structure React.js project. I have no experience with React at all but understand that my app is quite complicated and feature rich so I need to use strict structure. I really like what this guy is proposing. So here is my structure of /assets category: serverside/ some old stuff here clientside/ components/ data/ scenes/ services/ App.js index.js /assets/clientside/index.js will be used as entry point for webpack. Webpack and npm I already have webpack in my project. It being used for bundling all useful jquery stuff in one single file main.js. Also there is sass pre-processor, text-extraction plugin for making .css files, fonts and images extraction, etc. Before making any changes I decided to update Webpack to 2.2.0 from my 1.x.x version. It was quite a pain in ass but worth it. Just remember you need to use full names of loaders (‘sass-loader’ instead of ‘sass’, ‘file-loader’ instead of file, etc.) Also some packages for node.js should be updated during this process (sass didn’t work). In order to keep old javascript code and to add new code for bundling I’m using multiple entry points in Webpack config inside module.exports part: entry: { main: './assets/serverside/js/index', client: './assets/clientside/index' }, output: { path: path.resolve('./bundles/'), filename: 'js/[name].js', }, After build process I’ll get two js files inside my bundles folder (which is used for collectstatic inside Django): main.js and client.js. Main.js is old sh*t used for classic serverside rendering of Django. I won’t bother it anymore. Client.js is new one used for mounting React app. I also installed dependencies needed for successful react build process in npm: npm install babel babel-core babel-loader babel-preset-es2015 babel-react-preset react react-dom Here is how modules looks inside module.exports of webpack configuration file: module: { loaders: [ {test: /\.js$/, exclude: /node_modules/, loader: 'babel-loader', query: { presets: ['es2015', 'react'] } }, {test: /\.scss$/, loader: ExtractTextPlugin.extract('css-loader!sass-loader') }, { test: /\.(ttf|eot|svg|woff(2)?)(\?[a-z0-9=&.]+)?$/, loader: 'file-loader?name=fonts/[name].[ext]&publicPath=../' }, { test: /\.(jpe?g|png|gif|svg)$/i, loaders: [ 'file-loader?name=img/[name].[ext]&publicPath=../' ] }, ] }, React finally It’s time to turn some React.js on! :) I won’t add any functionality here because the only goal for today is to make all boilerplate dirty job and make React show at least something. Let’s add some code to /assets/clientside/index.js: import React, { Component } from 'react'; import ReactDom from 'react-dom'; import App from './App'; ReactDom.render( <App />, document.getElementById('app') ); As you can see I’re importing App.js here. So let’s add App.js into /assets/clientside/: import React, { Component } from 'react'; class App extends Component { render() { return ( <div> <h1>It's alive</h1> <p>Welcome to pokatuha. Let's ride together!</p> </div> ); } }; export default App; And finally I need to make one small change to Django template which is rendered by my custom middleware: <!DOCTYPE html> {% load static compress %} <html> <head> <meta charset="utf-8"> <link rel="shortcut icon" href="{% static "images/favicon.ico" %}" type="image/ico"> <meta name="viewport" content="width=device-width, initial-scale=1"/> <title>Pokatuha.com - let's ride together</title> </head> <body> <div id="app"></div> <script type='text/javascript' src="{% static 'js/client.js' %}"></script> </body> </html> That’t all. After running webpack building process all the code is bundled to /bundles/js/client.js which is used by collectstatic of Django. Running Django server and seeing this if the user is authenticated: Doesn’t look impressive? Yep. But it means a lot! Finally I can start working with React. Hope you enjoyed this trip. Next day I’m planning to add React Router and plan some Scenes (views) which will be used by my app.
https://medium.com/@artemivasyuk/adding-react-js-to-old-django-site-day-4-react-finally-a5bc3a564606
CC-MAIN-2018-51
refinedweb
868
60.21
@genyded Thanks, I just saw the documentation about the plugins, apparently the store is accessible directly from here. Gonzalo2683 @Gonzalo2683 Posts made by Gonzalo2683 - RE: Access the store in routes.js to use navigation guards @genyded - RE: Access the store in routes.js to use navigation guards Thanks for the example, a question ?, did not you need to import the store to be able to use it? Because I try to use it directly and I do not have access to it. This is the store in which I am testing, it is a function that is being exported, I am maintaining as function it because the idea is to be able to use it with SSR. When I import this store and access to state from here vuex does not work properly. import Vue from 'vue'; import Vuex from 'vuex'; import example from './module-example'; import config from './config/config'; Vue.use(Vuex); /* * If not building with SSR mode, you can * directly export the Store instantiation */ export default function (/* { ssrContext } */) { const Store = new Vuex.Store({ state: { value: 0, }, getters: { value: state => state.value, }, mutations: { updateValueMut: (state, payload) => { state.value = payload; }, }, actions: { updateValueAct({ commit }, payload) { commit('updateValueMut', payload); }, }, }); return Store; } - RE: How to apply access control to different routes in quasar? 530/5000 The answer given by admin is not very explanatory. I am having the same problem as accessing the status in the routes.js file, I am using the SSR version. What I did was import the store in the routes.js file but this apparently creates a new store instance and the state settings through actions and mutations do not seem to work. How would be the correct way to reference the store in the routes.js file in order to use navigation guards? -
https://forum.quasar-framework.org/user/gonzalo2683
CC-MAIN-2019-22
refinedweb
297
66.23
I have two versions of the same DLL, i.e. LibV1.dll and LibV2.dll. Both libraries have the same namespaces and types, but are not compatible. I need to be able to reference both at the same time in a VB.Net project in order to upgrade data from the old version to the new version. This seems to be quite easy to solve in C#, but everything I've read indicates that there is no solution to this in VB.Net. In fact, I see this post from 2011, which confirms this. I'm wondering however if there has been any changes in the past 4 years that might make this possible now? Sorry, I had hoped to paste this a comment, but SO prevents me from doing so. As far as I know, VB has not added the C# Aliasing feature, but your assertion that there is no solution in VB.Net is incorrect. Your referenced post from 2011 points you to using Reflection as a workaround. I think the simplest route would be to choose which DLL you want to have Intellisense support for and add a reference to that DLL. Then you can use Reflection.Assembly.LoadFile to get a reference to the second DLL and use the CreateInstance method on that instance to create an Object reference to a needed class. You would the use late-binding to work with that class instance. Alternatively, you could use Reflection to get the needed MethodInfo's/PropertyInfo's/etc. and work through them to work on the class instance, but I think that would be a lot more work than using late-binding. Edited to add example. Sub Test() ' assume you chose Version 2 as to reference in your project ' you can create an instance of its classes directly in your code ' with full Intellisense support Dim myClass1V2 As New CommonRootNS.Class1 ' call function Foo on this instance Dim resV2 As Int32 = myClass1V2.foo ' to get access to Version 1, we will use Reflection to load the Dll ' Assume that the Version 1 Dll is stored in the same directory as the exceuting assembly Dim path As String = IO.Path.GetDirectoryName(Reflection.Assembly.GetExecutingAssembly.Location) Dim dllVersion1Assembly As Reflection.Assembly dllVersion1Assembly = Reflection.Assembly.LoadFile(IO.Path.Combine(path, "Test DLL Version 1.dll")) ' now create an instance of the Class1 from the Version 1 Dll and store it as an Object Dim myClass1V1 As Object = dllVersion1Assembly.CreateInstance("CommonRootNS.Class1") ' use late binding to call the 'foo' function. Requires Option Strict Off Dim retV1 As Int32 = myClass1V1.foo End Sub
http://databasefaq.com/index.php/answer/16816/net-vbnet-dll-reference-two-dlls-with-the-same-namespaces-and-types
CC-MAIN-2018-47
refinedweb
433
66.03
setattr — set the parameters associated with the terminal #include <termios.h> int tcsetattr(int fildes, int optional_actions, const struct termios *termios_p); The tcsetattr() function shall set the parameters associated with the terminal referred to by the open file descriptor fildes (an open file descriptor associated with a terminal) from the termios structure referenced by termios_p as follows: * If optional_actions is TCSANOW, the change shall occur immediately. * If optional_actions is TCSADRAIN, the change shall occur after all output written to fildes is transmitted. This function should be used when changing parameters that affect output. * If optional_actions is TCSAFLUSH, the change shall occur after all output written to fildes is transmitted, and all input so far received but not read shall be discarded before the change is made. If the output baud rate stored in the termios structure pointed to by termios_p is the zero baud rate, B0, the modem control lines shall no longer be asserted. Normally, this shall disconnect the line. If the input baud rate stored in the termios structure pointed to by termios_p is 0, the input baud rate given to the hardware is the same as the output baud rate stored in the termios structure. The tcsetattr() function shall return successfully if it was able to perform any of the requested actions, even if some of the requested actions could not be performed. It shall set all the attributes that the implementation supports as requested and leave all the attributes not supported by the implementation unchanged. If no part of the request can be honored, it shall return −1 and set errno to [EINVAL]. If the input and output baud rates differ and are a combination that is not supported, neither baud rate shall be changed. A subsequent call to tcgetattr() shall return the actual state of the terminal device (reflecting both the changes made and not made in the previous tcsetattr() call). The tcsetattr() function shall not change the values found in the termios structure under any circumstances. The effect of tcsetattr() is undefined if the value of the termios structure pointed to by termios_p was not derived from the result of a call to tcgetattr() on fildes; an application should modify only fields and flags defined by this volume of POSIX.1‐2008 between the call to tcgetattr() and tcsetattr(), leaving all other fields and flags unmodified. No actions defined by this volume of POSIX.1‐2008,‐2008 to change. If tcsetattr() is called from a process which is a member of a background process group on a fildes associated with its controlling terminal: * If the calling thread is blocking SIGTTOU signals or the process is ignoring SIGTTOU signals, the operation completes normally and no signal is sent. * Otherwise, a SIGTTOU signal shall be sent to the process group. Upon successful completion, 0 shall be returned. Otherwise, −1 shall be returned and errno set to indicate the error. The tcsetattr() function shall fail if: EBADF The fildes argument is not a valid file descriptor. EINTR A signal interrupted tcsetattr(). EINVAL The optional_actions argument is not a supported value, or an attempt was made to change an attribute represented in the termios structure to an unsupported. None. If trying to change baud rates, applications should call tcsetattr() then call tcgetattr() in order to determine what baud rates were actually selected. In general, there are two reasons for an application to change the parameters associated with a terminal device: 1.. 2.. The tcsetattr() function can be interrupted in the following situations: * It is interrupted while waiting for output to drain. * It is called from a process in a background process group and SIGTTOU is caught. See also the RATIONALE section in tcgetattr(3p). Using an input baud rate of 0 to set the input rate equal to the output rate may not necessarily be supported in a future version of this volume of POSIX.1‐2008. cfget TCSETATTR(3P) Pages that refer to this page: termios.h(0p), cfsetispeed(3p), cfsetospeed(3p), tcgetattr(3p)
http://man7.org/linux/man-pages/man3/tcsetattr.3p.html
CC-MAIN-2017-30
refinedweb
667
51.48
Lowercase/Uppercase for namespace-uri Discussion in 'XML' started by R.Georges, Sep 28, 2003. Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum. - Similar Threads Function that standardize a string with lowercase and uppercase ?The_Kingpin, Oct 26, 2004, in forum: C Programming - Replies: - 1 - Views: - 465 - Jim - Oct 26, 2004 C++ - how to convert string to uppercase/lowercaseMichal, Dec 17, 2008, in forum: C++ - Replies: - 57 - Views: - 21,465 - Ian Collins - Dec 24, 2008 Uppercase/Lowercase on unicodeKless, Jun 5, 2009, in forum: Python - Replies: - 3 - Views: - 625 - Дамјан ГеоргиевÑки - Jun 5, 2009 Help a beginner - simple lowercase to uppercase and so on function, Jul 26, 2009, in forum: C Programming - Replies: - 63 - Views: - 1,574 - Dan Henry - Aug 2, 2009 /[HELP!]/ regxp replace uppercase/lowercasebrendan, Aug 28, 2003, in forum: Javascript - Replies: - 2 - Views: - 129 - brendan - Aug 28, 2003
http://www.thecodingforums.com/threads/lowercase-uppercase-for-namespace-uri.165735/
CC-MAIN-2014-52
refinedweb
183
67.28
Difference between revisions of "Talk:CryptoPayment" Revision as of 22:59, 19 November 2013 Amount reduction) - This was made back when the price of BTC was in the low two digits. :) Yes payment amount could be reduced, but then you'd have to pay 0.0005 BTC fee for sending < 0.01. - Also, 6 conf is required since this uses the MtGox payment processing API, which requires 6 conf. Not really worth it to special-case it. Nanotube (talk) 13:47, 30 April 2013 (GMT) - To this day, the editing fee is worth 5€ or 5.5$. I already contacted MagicalTux, and posted on Bitcointalk.org (got flamed btw) while BTC was around $100, and I see that the problem is not yet answered. Thanks you MagicalTux for preventing this wiki to gain content and active users ! --Twix (talk) 22:59, 19 November) - Payment is only required to activate edit privileges, hence the flood of spam users. but it's not a bad idea to have it be required for user creation too, to avoid that spam. The problem is that it'd require more development, and mtux doesn't really have time to do this for the wiki right now. - In the meantime as a quick hack, you can just filter the recent changes list to exclude everything from the User namespace. try this. It's not perfect, since it also excludes user page edits... but it's good. :) Nanotube (talk) 13:44, 30 April 2013 (GMT) Informing new users It's not very clear right now that people need to visit this page to learn how to contribute. I didn't find out until I was reading through some questions on StackExchange, saw somebody someone ask why he couldn't edit. I know there's a line on the front page in font-size 10 saying "Anti-spam protection from BitcoinPayment", but people unfamliar with BitcoinPayment will not notice that. When they don't see an edit button they're likely to assume that you either need to wait for manual approval or that you need to know the right people within the community. It's been ages since I tried to write code for MediaWiki, so I don't know if on of the following suggestions is possible: - don't hide the edit link, but instead redirect the user to this page for instructions - next to the "Show source" link, show "Want to contribute?" Or this solution, which doesn't require code: - put a text like "If you want to contribute, please read these instructions" in a very prominent place on the landing page. - You're completely right, I was thinking the same thing for a long time. I don't know how to implement it - I just asked it on webapps Stackexchange. If you know of a good way to implement that without having access to the backened/php, let us know. Ripper234 (talk) 15:44, 17 May 2013 (GMT) - We have edited the "no permission" message to point to BitcoinPayment. Also note that the login screen has had the prominent orange banner mentioning the requirement for BitcoinPayment all this time, also. I suppose "ideally" any user who is not trusted should get a persistent banner in the header or something, but that requires editing on the backend. Nanotube (talk) 03:47, 24 May 2013 (GMT)
https://en.bitcoin.it/w/index.php?title=Talk:CryptoPayment&diff=next&oldid=38020
CC-MAIN-2020-05
refinedweb
560
69.31
Container to store blobs of arbitrary data for attributes. More... #include <GA_BlobContainer.h> Container to store blobs of arbitrary data for attributes. The geometry library can store blobs of binary data for each object element. It does this by storing a shared reference count. Blobs are added to the blob container and accessed by integer handle (index). As a developer, the semantics of: are that the container will create a copy of the blob. It will associate an integer index with that blob and return the index. To test whether a blob is in the container, you can call: This will not add a blob to the container. You get the blob back out of the container by calling: It's important to realize that the blob you get back from the container is immutable. If you need to change the blob, you need to make a copy and store the modifed version in the container. Blobs are deleted from the container by calling: You should call freeBlob() for each blob you stored calling () An example of how you might store a blob: Definition at line 71 of file GA_BlobContainer.h. Allocate a new blob and return the index to the blob. Add a reference to the blob given by the index. This is equivalent to: Definition at line 195 of file GA_BlobContainer.h. Add a reference to the blob given by the index. This is equivalent to: Definition at line 197 of file GA_BlobContainer.h. Allocated size of the container Definition at line 93 of file GA_BlobContainer.h. This will forcibly clear out all blobs, regardless of whether they have references. Use this with caution. Definition at line 226 of file GA_BlobContainer.h. Compact blobs can be called to "shrink" the index list (i.e. remove all vacancies in the index list). Since this will change the index values, a mapping array is returned which can be used to map the existing handle value to the new handle value. For example: Count memory usage using a UT_MemoryCounter in order to count shared memory correctly. If inclusive is true, the size of this object is counted, else only memory owned by this object is counted. If this is pointed to by the calling object, inclusive should be true. If this is contained in the calling object, inclusive should be false. (Its memory was already counted in the size of the calling object.) Occupied size of the container Definition at line 97 of file GA_BlobContainer.h. Extract the blobs into two arrays. Free the blob (de-reference) given by the handle Definition at line 210 of file GA_BlobContainer.h. Free the given blob (de-reference) by blob pointer. This is equivalent to: Definition at line 219 of file GA_BlobContainer.h. Return the blob for a given handle. If the handle isn't valid, a NULL ptr will be returned. Definition at line 112 of file GA_BlobContainer.h. For debugging purposes, this returns the number of references to this blob index in this map. Definition at line 203 of file GA_BlobContainer.h. Return the maximum index number used. If the maximum index is less than zero, there are no blobs in the container. Definition at line 248 of file GA_BlobContainer.h. Report approximate memory usage (including storage for blobs) Get occupancy of the map which can be used to determine whether compaction is required. Definition at line 103 of file GA_BlobContainer.h. Return the blob for a given handle. If the handle isn't valid, a NULL ptr will be returned. Definition at line 122 of file GA_BlobContainer.h. Replaces the content of this with the content of src. Return the n'th blob given in an ordered list of blobs. This method may be significantly more expensive than looking up by GA_BlobIndex or extracting all items at one time. For example: Definition at line 167 of file GA_BlobContainer.h.
https://www.sidefx.com/docs/hdk/class_g_a___blob_container.html
CC-MAIN-2021-17
refinedweb
648
67.25
C Programming Tutorial-Chapter 2 ============================ Things Covered In Chapter 2 Types of Data in C How the Computer Stores Data Formatting text using printf() Placeholders Escape Sequences ============================ I assume that you've read the first chapter of this tutorial. Ohterwise you can get it at: Types of Data in C This information may seem a bit superfluous at this stage, but read it anyway because you won't undestand the next section (or the ramaining chapters in the tutorial) without it. In C, there are two basic types of data: numerical and string. Numerical data is any number like say, 20 or 54.76 while string data is alphanumeric data like say, "cgkanchi" or "hello world!". String data, in C is always enclosed in double quotes so "20" is a string while 20 isn't (don't worry, it'll get clearer as you go along). Numerical data in C is of three basic types: 1. char: As you might guess, char data consists of single characters (which are just numbers to the computer). But char data can also store numbers in the range of -128 to +127 (I'll get to that in the next section). char data is always enclosed in single quotes, like so 'c'. 2. int: Integer data stores whole numbers from -32,768 to +32,767. 3. float: Float or floating point data is used to store decimal numbers between 3.4 x 10 ^ -38 to 3.4 x 10 ^ 38 (10 ^ 38 is read as 10 raised to 38. So 2 ^ 2 would be 2 raised to 2 or two squared which is 4). What about bigger numbers? Well there's a long data type to store integer data between -2,147,483,648 to 2,147,483,647 and the double data type to store data between 1.7 x 10 ^ -308 to 1.7 x 10 ^ 308. What about char? There is no such thing as a long char but if you want to store a number greater than 127 then just use an int instead. You can also qualify any of these types as unsigned which means all data they store is positive or zero. So the range of char changes from -128 to 127 to 0 to 255, that of int changes from -32,768 to 32,767 to 0 to 65,535 and so on. For some background information on how exactly the computer stores data read the next section. Otherwise, just skip to the section on screen output. How the Computer Stores Data The computer is a digital creature. So it stores everything and I mean EVERYTHING in the form of numbers. Actually it stores all the data that you put into it as binary code. So, to the computer, any data is stored in the form of 1s and 0s. The computer has small compartments of data called bits. Every bit can store either a 1 or a 0. In other words, a bit can have only one value at any given time. Now, eight bits make a byte. A data which is qualified as char occupies one byte of memory. Similarly, int data occupies 2 bytes, float and long data 4 bytes and double data 8 bytes. Given seven places, each of which has two possible values (in this case 0 and 1), there are 128 possible combinations. That's why a char has a range of -128 to 127. What about the eighth bit you ask? Well, the eighth bit is called the sign bit and is used to store the sign of the number. When you qualify data as unsigned, the sign bit isn't used. Given eight places, each with two possible values, there are 256 possible combinations. That makes the range of an unsigned char into 0 to 255. Formatting Text Using printf() We ended our last chapter with an analysis of the hello.c program. Remember the statement that does most of the work in the program? Yes I'm talking about the printf("Hello World!"); thing. "printf" stands for "print formatted" and is one of the most popular ways to output text to the screen. Type in this program to see the different types of data: /*data.c A program to illustrate the different types of data*/ #include <stdio.h> void main() { printf("This is char data: %c\n",'A'); printf("This is char data in numerical form: %c\n",65); printf("This is int data %d:\n",65); printf("This is float data: %f\n",65.00); printf("This is float data with only two decimals data: %.2f\n",65.00); printf("This is double data: %2f\n",65.00); printf("This is a string: %s","Hello"); } Save this program as data.c . Compile. Fix errors. Run. This program is very similar to the hello.c program in chapter 1 except that it has many printf statements. The major difference is the %something things. The % sign followed by a letter is known as a placeholder. What a placeholder does is that it tells the printf statement to look for a particular type of data after the final double quote("). You can have multiple placeholders in a single statement. In this case, the printf statement looks for data in the order in which the placeholders appear. Here is a list of placeholders and what they do: Placeholder What it does %c Displays char data %i, %d Display int data %f Displays float data %2f Displays double data %.2f Displays float with the requisite number of decimals as specified after the period(.) %s Displays string data %o Displays octal data (more on octal and hexadecimal in another chapter) %x Displays hexadecimal data As the % sign has a special meaning for printf, to display it, you must double it up. For example, printf("Something%%"); will display Something% Remember the "\n" thing that makes the printf statement go to the next line? That is called an escape sequence. Escape sequences consist of a backslash(\) and a letter following them. Escape sequences can be used to do a lot of interesting things in C. They are used simply by tagging them on at any point in a string. For example printf("Hel\nlo"); makes the program display "Hel" on one line and "lo" on the next. Here is a list of some of the escape sequences: Escape Sequence What it does \n Goes down to the beginning of the next line \r Goes back to beginning of line \t Prints a tab \ Goes down one line without going back to the beginning of the line \x Recognizes the next characters(upto the next space) as hexadecimal numbers \o Recognizes the next characters(upto the next space) as octal numbers \a The computer speaker lets out a beep \b Goes back one character In addition to these certain characters have a special meaning for the printf function. So to display them you must precede them by a backslash. The two most notable ones are the backslash itself and the double quotes("). So, to display a backslash you'd type "\\" and to display double quotes you'd type "\""(the first and the last are the double quotes for the string and the middle one is the oune you want to display). So, that's all for chapter 2 of my C tutorial. Until next time, I'll give you a few assignments to do. 1. Write a program to display the sentence "I Love C", said the programmer. (With the quotes). 2. Write a program to display Hello! Hi! Howdy! Using only a single printf statement. (Hint: remember \n). 3. Write a program to display "C:\ABC\ABC\" (without the quotes). Don't hesitate to ask any doubts or questions related to the tutorial. And experienced programmers who happen to read this, please fill in any placeholders or escape sequences I might have missed. C is for cookie, thats good enough for me... \"Computer games don\'t affect kids; I mean if Pac-Man affected us as kids, we\'d all be running around in darkened rooms munching magic pills and listening to repetitive electronic music.\" Kristian Wilson, Nintendo, Inc. 1989 that was very informative...............just one question is there gonna be a chapter III?????????? Genius of the mind is not necessarily from the mind of a genius I'm very busy right now, but look for chapter 3 in about two or three weeks' time. Cheers, cgkanchi Buy the Snakes of India book, support research and education (sorry the website has been discontinued) My blog: hey kanchi that was great....good post... intruder.. A laptop, internet connection and beer. Nice post cgkanchi!! Can't wait for #3. Can I get out of this prison? Can I stay this prison forever? #3 is out! wow this site is like addicting and hard to leave hope someother new people share or/bump up stuff they find too (hint)(hint) Forum Rules
http://www.antionline.com/showthread.php?139568-C-Programming-Tutorial-Chapter-2&p=486839
CC-MAIN-2017-04
refinedweb
1,494
73.37
go to bug id or search bugs for Description: ------------ While OOP is conquering the world, "a function" is still sometimes enough. Introduce functionality for autoloading namespaces (of grouped functions, classes etc), in the same manner that exists for autoloading instantiated classes. (there is a similar feature request here? id=52385&edit=2, the key difference is that while autoloading for functions could get complicated, - for namespaces it is as straightforward as for classes) Test script: --------------- function __autoload($namespace) { require $namespace. '.php'; } //require_once 'myframework/mvc/dispatching.php'; // - want to get rid of these use myframework\mvc\dispatching; // - here it gets autoloaded even though it's not a class. dispatching\dispatch(...); Add a Patch Add a Pull Request I cannot imagine any "valid" use case for this. Autoloading is designated for classes only (that may happen to be located in a namespace). If you need autoloading functionality for group of functions, put them as static methods inside an abstract class. Example: -------- namespace MyFramework\MVC; abstract class Dispatching { static public function myFunc() { ... } // etc } -------- Then you can take advantage of the autoloading. shiranai7 at hotmail dot com, why would I use a class if I don't need a class? Anyway, ways to hack around are well know. The idea here is that we wouldn't need to "hack around". Since we have namespaces - it can be done. lcfsoft at gmail dot com, You cannot expect anything to be loaded just by an "use" statement. It just defines a local alias for a class or namespace. The classes get loaded only when they are actually used. Based on your example I think that you are looking for a way to "autoload functions". While it could possibly be handy in your case, I dont think it will ever be an actual feature. It is not even possible to import a function or constant through the "use" statement, so why this? Using an abstract class is not "hacking around" in this case. Why do you need to do this: dispatching\dispatch(...); if you can do this? dispatching::dispatch(...); The functionality is IDENTICAL, with added bonus of possible autoloading. Do you really need that \ instead of :: ? That is the only actual difference. shiranai7 at hotmail dot com, If your mindset towards this problem was valid we wouldn't need to have namespaces introduced and implemented in the first place. We would all be fine with those Zend_Crypt_Math_BigInteger_Bcmath, because, you see, the functionality is IDENTICAL to what we have with Zend\Crypt\Math\BigInteger\Bcmath now. If I need to make an abstract static class's method instead of a function to achieve something - it's nothing but a hack. It may work, it may work the same way - but it's a hack. So yes, I really do need that "\" instead of "::". >> You cannot expect anything to be loaded just by an "use" statement. It just defines a local alias for a class or namespace. The classes get loaded only when they are actually used. What you said here, however, is correct. So, yes, it may not be so trivial as adding another line of code. But it's important I believe. Why limit PHP developers to classes? If you want to know why we shouldn't - check out any other decent programming language e.g. Python. Yes, autoloading for classes exclusively might work before as we didn't have any sort of namespaces/packages AT ALL - but now we do and this should be revisited. lcfsoft at gmail dot com, >>> Zend_Crypt_Math_BigInteger_Bcmath, because, you see, the functionality is IDENTICAL to what we have with Zend\Crypt\Math\BigInteger\Bcmath now. Yes, namespaces were introduced as a better alternative to ugly identifier prefixes. But this has nothing to do with "autoloading functions". -- My point is that this approach is rather unusual. Bunch of class-less functions defined in a file is like pre-php 5 procedural code. Of course I am in no position to tell anyone what is the correct way to organise their code or even decide whether this will get eventually implemented or not. My proposal for this would be something like: spl_function_autoload_register( callback(function) ) and spl_constant_autoload_register( callback(constant) ) or spl_ns_autoload_register( callback(namespace, property ,type) ) But still - weird, mostly useless and overkill to implement just because someone does not like :: in his identifier. I would also like to see this feature, especially in the context of function autoloading and procedural programming. 1) It would make PP in PHP way more pleasant & effective (no need to spam guards like require_once, no need for indirect dispatchers, obviously, not loading code you don't need for particular request). For a language that grew on procedural programming paradigm it's a shame development in this area has been stalled for so many years (afair nothing have changed since 4 in this regard). 2) There is a lot of code that could profit, e.g. Drupal. 3) Hacking with classes has its cons, eg. performance (what would be relative difference between direct function call vs static method call and indirect (callbacks)?). 4) How hard can it be? Checking for registered callback and running the procedure before firing error. Proposed interface by shiranai7 at hotmail dot com looks ok. > But still - weird, mostly useless and overkill to implement just because someone does not like :: in his identifier. I disagree that this is mostly useless - far from it. If we had autoloading based on namespace access then namespace-based package/bundle management becomes trivial. Frameworks are having to implement this themselves at present, to take an example from Laravel we have to call: Laravel\Bundle::start($bundle_name); before we can access code in that bundle. Would be great if instead we could use the existing language features and just use a namespace to automatically load that bundles code. Agreed with lcfsoft. What's the point of namespacing if we are told to use static class methods instead? To change underscores to backslashes because backslashes are "prettier" than underscores? That's called class encapsulation. We already had that feature. Namespacing as a concept has been around for decades. For example, they were available in Perl in 1998. Namespaces as a concept is not inherently an OO feature. They are simply a container for a set of identifiers. Functions being valid identifiers in PHP. Please reconsider. Regards, Hi. I am in no position to actually decide anything. Using abstract classes was suggested by me as an alternative since abstract classes can contain both functions and constants and can be autoloaded in current version of PHP without any changes. Also there is virtually no performance penalty when calling a static method instead of plain function. Nevertheless, I agree with your point - there is currently no native way to autoload namespaced functions and constants without use of a class wrapper or the require_once() statement. While the "all class way" is popular way to organise applications right now, this would be a nice thing to have in PHP. In earlier comment I mentioned possible solution involving one or more spl_*_register() functions. While the first two (autoloading based on single function or constant) are not very pretty, the last one could be the solution we are looking for. I have thought this through and I might create a RFC page with detailed description of how it would actually operate. For now, consider this: void spl_package_autoload_register ( callable $autoload_function(namespace, type, name), [$throw = true], [$prepend = false] ) bool spl_package_autoload_unregister ( mixed $autoload_function ) void spl_package_autoload_call( string $namespace, int $type, // SPL_PACKAGE_FUNCTION, SPL_PACKAGE_CONSTANT string $name ) Note that the "package" prefix is used just for demonstration. It could be "namespace" or something else, but I think that "package" is better name for group of namespaced functions and constants. Autoloading namespaces would be a useful feature.
https://bugs.php.net/bug.php?id=62162&edit=3
CC-MAIN-2019-39
refinedweb
1,290
64.81
Typed Python interface stubs for editing Pythonista code on the desktop I sometimes prefer working on larger Pythonista projects in my desktop IDE. However, imports of Pythonista iOS APIs look broken, autocomplete can't work, and static checking is useless. To solve these problems, I've started working a set of typed Python interface stubs. Please check it out and let me know problems and suggestions! Very welcome initiative! I may contribute a module or two that I'm comfortable with. In the style guide, you state "no Union[]return annotations"; does that also imply "no Optional[]return annotations"? The Pythonista modules have some. @Olaf Union[]generally indicates an anti-pattern because a method should be relied on to return a single type. That said, given that we don't control the underlying API it may be necessary at some point. To answer your specific question, I definitely encourage using Optional[]where the method has documentation stating it might. For example, in the remindersinterface there is def get_calendar(calendar_id: str) -> Optional[Calendar]: ...since we're not guaranteed to find a given ID. It would be great to get some help and I'm happy to collaborate on typing syntax. I just hesitate to write interfaces for APIs that I haven't personally used, seems like an easy way to get things wrong :) (For background reading on Union issues see here: ) After some discussion with the author, pygenstub now supports generic module interface generation. (Unlike the more well known stubgen, no C compilation required.) This means that in StaSh we can run something like python3 pygenstub.py -p appex -o out --genericto get a stub for appex. It needs cleanup (like replacing all the Anys and removing private things) but it's a great running start.
https://forum.omz-software.com/topic/5516/typed-python-interface-stubs-for-editing-pythonista-code-on-the-desktop/6
CC-MAIN-2020-45
refinedweb
295
56.55
You need to declare the files in the package_data section of your setup.py file. For example, if your Scrapy project has the following structure: myproject/ __init__.py settings.py resources/ cities.txt scrapy.cfg setup.py You would use the following in your setup.py to include the cities.txt file: from setuptools import setup, find_packages setup( name='myproject', version='1.0', packages=find_packages(), package_data={ 'myproject': ['resources/*.txt'] }, entry_points={ 'scrapy': ['settings = myproject.settings'] }, zip_safe=False, ) NOTE 1: The zip_safe flag is set to False , as this may be needed in some cases. NOTE 2: Please ensure that the name of your project doesn't overlap with existing package names, such as scrapy, as a crawler is a regular python package and it may cause conflicts/unwanted side-effects when deploying. Now you can access the cities.txt file content in the spider code like this: import pkgutil data = pkgutil.get_data("myproject", "resources/cities.txt") Note that this code works for the example Scrapy project structure defined at the beginning of the article. If your project has different structure - you will need to adjust package_data section and your code accordingly. For advanced resource access take a look at setuptools pkg_resources module.
https://support.scrapinghub.com/support/solutions/articles/22000200416-deploying-non-code-files
CC-MAIN-2018-51
refinedweb
201
59.4
Thanks but no... just a reoccuring urge I get to expose more of tomcats inherent functionality... but then it could be as Christopher Schultz mentioned because I'm still on 5.5.23... and Tomcat 6 now exposes the inner libs. Its just stuff like, you need to hash something and MD5 is in tomcat already, but you got to find another lib... MD5 is probably never going to change. I'm thinking there could be utilities like this.... conceptual.. import Tomcat.XML2Bean.* XML2Bean xb = new XML2Bean(xmlFile); String param = xb.getParameter("MyParameterInMyXMLFile"); etc etc..... what I call the missing "inner gold mine". I'm not suggesting that one can get at the same instance of a library that tomcat is actually running on.... just exposing the functionality that tomcat works on. If Tomcat parses and can check XML files... the user gets used of that architecture and wants to use it on any xml file. Thing is.... I tend to agree with Christopher Schultz... one of the refreshing things about Tomcat as a container is its simplicity... one doesnt go... Spring is nice but I only got one life time... the learning curve is reasonable... but maybe exposing a high leverage technology that tomcat uses anyway... as a utility... would make it more attractive. Things like URLEncoding, Hashing, Base64 XML2Beans etc etc. That stuff consumes an enormous amount of a new users time, learning about, searching for and finally implementing something thats actually there already. Anyway I'm ambivalent on this.... no strong feeling... just an urge I get everytime I'm doing something that I know tomcat must has in its gut.... like tomcat, I know you got an 8 cylinder engine... but i can only see 6 cylinders running... feeling. Maybe I'm starting cross over from a user to having much more of an interest in tomcats internals, maybe coz I use tomcat for far more than it was ever designed for... and maybe its that I can percieve a threat... someone is going to take tomcat and make "tomcat advanced"... but then thats how I think half these other frameworks where born anyway.... anyway dont worry about it... just a feeling I have... probably getting paternal about my favorite technology... ha ha. ----- Original Message ----- From: "Martin Gainty" <[email protected]> To: "Tomcat Users List" <[email protected]> Sent: Sunday, May 13, 2007 5:48 PM Subject: Re: Tomcat Inner Gold Mine > Good Morning- > > I *think* what you're speaking of is a comprehensive Adaptive Security > Algorithm solution. > A HW/DW solution from Cisco (Pix) is illustrated here > > More specifically RIPv2 implementations with coverage of text MD5 auth and > keyed MD5 auth > Is this the scenario you envision implementing? --------------------------------------------------------------------- To start a new topic, e-mail: [email protected] To unsubscribe, e-mail: [email protected] For additional commands, e-mail: [email protected]
http://mail-archives.apache.org/mod_mbox/tomcat-users/200705.mbox/%3C00fe01c79583$c7351e30$0300000a@animal%3E
CC-MAIN-2015-11
refinedweb
484
69.28
The sprintf() Function Aside from being one of the most popular text-output functions in the C language, printf() is the most powerful. It can format all sorts of variables in unique ways, with padding and alignment options that even veteran C programmers can't recite from memory. The printf() function isn't alone. It has many siblings and cousins that also harness its formatting power. One of the most useful variations is sprint(). What the sprint() function does is save formatted output, just like you get from a printf() function, into a buffer. Here's the format: sprintf(buffer,format,variables); In the preceding line, buffer is a char array into which the formatted output is stored. The rest of the arguments are identical to printf(). format is a formatting string: It uses text, escape sequences, and conversion characters. The list of variables and immediate values stuffed into the formatting string are represented by variables, as shown in the following code. #include <stdio.h> int main() { char birthday[12]; int day,month,year; printf("Enter your birth month (1-12): "); scanf("%d",&month); printf("Enter your birth day: "); scanf("%d",&day); printf("Enter your birth year: "); scanf("%d",&year); sprintf(birthday,"%d/%d/%d",month,day,year); printf("I shall wish you a happy birthday on %s\n", birthday); return(0); } The above code illustrates a sample program that uses sprint() to save a formatted date as a string. Values from the month, day, and year variables are placed into the standard date format and are then saved by the sprint() function into the birthday buffer. The printf() statement then displays the result. The best way to put sprint() to work is to store complex numbers and formatted output for later display or manipulation. For example, you could use sprint() instead of printf() and then place code elsewhere in the program to confirm that the output is proper. Regardless of how the sprint() function is used, you'll find it a valuable tool to keep handy in your programming tool chest.
http://www.dummies.com/how-to/content/the-sprintf-function.html
CC-MAIN-2014-15
refinedweb
342
60.24
Question: Given a binary tree, find the leftmost value in the last row of the tree. Example 1: Input: Output: 1 Example 2: Input: Output: 7 Note: You may assume the tree (i.e., the given root node) is not NULL. General idea: Give a binary tree and find the leftmost node value in the bottom row of the tree. Example 1: Input: Output: 1 Example 2: Input: Output: 7 Note: you can assume that the tree (given the root node, for example) is not empty. Idea: This question can actually be broken down into two questions: - Find the bottom row of the binary tree; - Find the leftmost node value in the bottom row. It should be noted that the leftmost node value is not necessarily a left node, but may also be a right child node value on the leftmost. Remember when we were Portal: LeetCode note: 102. Binary Tree Level Order Traversal In, it is required to output the binary tree layer by layer. In the same way, we use BFS breadth first traversal method and queue to ensure that all node values of the lowest layer are found. Then, we only need to use a tag to record the value of the leftmost node every time we comb the nodes of one layer. In this way, when it is determined that it is the last layer and there is no next layer, What we record is the leftmost node value of the lowest layer. Code (Java): /** * Definition for a binary tree node. * public class TreeNode { * int val; * TreeNode left; * TreeNode right; * TreeNode(int x) { val = x; } * } */ public class Solution { public int findBottomLeftValue(TreeNode root) { Queue<TreeNode> queue = new LinkedList<TreeNode>(); queue.offer(root); int result = root.val; boolean has = false; while (!queue.isEmpty()) { int levelNum = queue.size(); for (int i = 0; i < levelNum; i++) { if (queue.peek().left != null) { queue.offer(queue.peek().left); if (!has) { result = queue.peek().left.val; has = true; } } if (queue.peek().right != null) { queue.offer(queue.peek().right); if (!has) { result = queue.peek().right.val; has = true; } } queue.poll(); } has = false; } return result; } } outside the box: My method is actually slow. Let's take a look at the following method: public class Solution { public int findBottomLeftValue(TreeNode root) { return findBottomLeftValue(root, 1, new int[]{0,0}); } public int findBottomLeftValue(TreeNode root, int depth, int[] res) { if (res[1]<depth) {res[0]=root.val;res[1]=depth;} if (root.left!=null) findBottomLeftValue(root.left, depth+1, res); if (root.right!=null) findBottomLeftValue(root.right, depth+1, res); return res[0]; } } The first advantage of this method is that the code is much simpler than me... In fact, his method is similar to my first idea. He uses DFS to pass it back and look down, and records the depth of the currently found node. He uses an int array res. the first element of the array records the node value, and the second element records the depth of the node. Only when you enter a deeper layer and the node value has not been recorded in this layer, record the first node value found, which is actually the leftmost node value. After finding it, mark the depth as the current depth, and then all the node values found later will not be recorded unless a deeper node is found. In this way, keep updating the found node values according to the depth, and finally find the leftmost node value of the deepest layer. Collection:
https://programmer.group/leetcode-notes-513-find-bottom-left-tree-value.html
CC-MAIN-2022-40
refinedweb
581
64.2
mysqld fails to start on upgrade 7.10->8.04 Bug Description [deleted AppArmor removal example] please instead of removing apparmor, just disable a profile: sudo touch /etc/apparmod. sudo /etc/init. the mysql server components then upgraded OK and the server started. I don't know about Apparmor, but from the looks of syslog, it looks as though there was a problem with the apparmor acls; I noticed that during the mysql server install, a script appears to load up the apparmor rules and these rules appear to be preventing a clean mysqld upgrade. Previous upgrades of mysqld under Gutsy on this machine have been problem free. I didn't really want apparmor to be installed, but thanks for the tip. Nope, I didn't install apparmor-profiles; I don't believe apparmor is necessary for running mysql on Ubuntu. I was upgrading the whole system, to the best of my recollection, the version numbers were from 5.0.45 under Gutsy to 5.0.51a under Hardy. The error arose after I clicked on the 'upgrade system' icon on the desktop. It seemed at the time that either the apparmor or the mysql package (or both) did not have the correct dependency settings and/or upgrade checks. If you are making a mysql server upgrade (or install) depend in any way on apparmor (I think that is a bad idea) you need to have a mysql-server package dependency set, to ensure that the correct apparmor components are present on the system for mysql. The mysql-server package's pre-install script should also check that the correct apparmour configuration settings are made to allow the mysql server to start. A more informative post-install script message wouldn't do any harm either! Likewise, the apparmor package needs to check if there are any services for which it needs to install special configuration files, so it doesn't prevent them from starting. IMHO, the package management system ought to make it unnecessary for a sysadmin to have to figure out: (1) That there is something new on the system, namely apparmor, preventing a mysqld startup. (2) That an apparmor 'mysql profile' must be installed, sought out and put into 'complain mode' before the mysql server will start. This bug frightened me. For a minute, I thought I had been magically spirited away to my Solaris box! Many thanks for looking into this, by the way. Your efforts are very much appreciated! --Robin If your mysql is installed out of /opt, it is not an Ubuntu version. :). On Fri, Jun 06, 2008 at 10:59:13PM -0000, Robin Joinson wrote: >. That's exactly the problem. AppArmor follows symlinks when applying the rules in a profile. You'd either have to update the mysqld profile to allow access to /opt/share or remove the symlink. -- Mathias Gug Ubuntu Developer http:// Ok, so apparmor doesn't like some local condition on my machine. The substance of my bug report is still the same. The package maintainer should configure the apparmor package so that it does these things: - at the pre-install phase, check for affected services and warn the admin when a service doesn't meet the conditions required by the package. Consider bailing out of the install at this stage, if possible/ - attempt to ensure, at the install phase, that all conditions are met, or if this is not possible, at least ensure that a running service isn't crippled, for example by automating the step to put apparmor into 'complain mode' for that service. - check at the post-install phase that all conditions were, in fact, met and if not, warn the admin. If the package maintainer doesn't do this, the admin has to spend time figuring out that: - a new kind of kernel message in the syslog is caused by a new package (apparmor). - apparmor is writing these messages because some of its defaults for a service profile do not match what is on the system. - apparmor is denying access to file/s essential for a service to start. - a default 'apparmor profile' must be updated somehow, to accommodate the state of the system (I didn't get this far before I purged apparmor). Since the preconditions for file access are known by the package maintainer, why can't the package check these things? Anyhow, thanks for the information about apparmor. It looks as though it might be a very useful security tool, but I would want to read about it in depth and consider the impact very carefully before running it on a production box. All the best, --Robin The package does quite a bit to make sure things don't break. Please see: https:/ Unfortunately, there is no way to reasonably detect every situation a user might have changed his or her system, however, the above migration strategies catch most cases (by far). As such, the decision was made to protect users where possible by default. Documentation is found in the package, wiki, and apparmor protections are mentioned in the release notes. Ok Jamie, From my point of view, I've had the opportunity to raise and discuss all my concerns and I believe the position of the Ubunu team is that things will stay the same in the Apparmor package. I guess my 'bug' is closed. Nevertheless, many thanks to the team for taking the time and effort to correspond with me about this. Best Regards, --Robin On Wed, Jun 04, 2008 at 07:40:37PM -0000, Launchpad Bug Tracker wrote: 9.490:28) : operation= opt/share/ mysql/english/ errmsg. sys" pid=3861 profile= "/usr/sbin/ mysqld" namespace="default" mysql/english/ errmsg. sys' > You have been subscribed to a public bug: > > > Which version of mysql were you upgrading from ? Did you install apparmor-profiles ? > Did > > apt-get remove --purge apparmor > apt-get -f install > > the mysql server components then upgraded OK and the server started. > You don't have to purge apparmor - you can put mysqld profile in /wiki.ubuntu. com/DebuggingAp parmor for more complain mode. See https:/ information. status incomplete --. com Mathias Gug Ubuntu Developer http://
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/204042
CC-MAIN-2017-51
refinedweb
1,024
62.38
Django cookie law ================= This is a Django application that makes it easy to implement cookies compliant with Dutch law, as far as I am able to tell. I'm not a lawyer, so use at your own risk. Updating ======== From 0.1.6 to 0.2 or higher ------------------->=1.5.1 Usage ===== - Add the cookie_law app to your INSTALLED_APPS. - Add 'url(r'^cookies/', include('cookie == '1' %} <cookie> {% endif %} - To change the domain used for the cookies to check a visitor's preferences, create a Django setting.COOKIE_LAW_DOMAIN Cookie Bar Details ------------------ - Optional title: This title can be displayed as a title for your text - Optional link cookie policy page: If you want a page explaining what kind of cookies you use and why place the full url here. - Optional display name of the link: This will be displayed after the display text. For instance: "This site uses cookies, read about it <link_display_name>" - Title of the allow cookie button: This will be the text shown in the cookie bar button. - Text: The text displayed in the cookie bar. - "This cookie bar is displayed": Check this if you want this particular cookie bar to be shown. Cookie bars without this boolean will not be shown. - Optional "Show a close (dismiss) button for this cookie bar" boolean: The close field is used when you want to give visitors the option to hide the cookie bar while not accepting cookies. This is turned off by default. - Optional Language: You can create a cookie bar for multiple languages, read more about this in the Multilingual support section. Each of these cookie bars will need the "This cookie bar is displayed" boolean set to True. Multilingual Support -------------------- If you want multilingual support add the languages you want to your settings file. LANGUAGES = (('nl','Nederlands'), ('en',.
https://bitbucket.org/getlogic/lib_django_cookie_law/src
CC-MAIN-2014-15
refinedweb
298
73.07
Receiving Error 2048 - Resource will not loadspeaker10 Mar 30, 2012 3:35 PM I recently upgarded my flash player to the most recent version. I am using Windows XP Home addition 32-bit system. Now when when the flash content tries to run I receve an Error 2048 and "failed to load the resource" and "can not access the resource file". It apparently is not an installation proble. Is there something on my system's security settings that is keeping the flash content from working? How do I clear that up? 1. Re: Receiving Error 2048 - Resource will not loadchris.campbell Mar 30, 2012 4:45 PM (in response to speaker10) Do you have a URL where I can see this occur? Thanks, Chris 2. Re: Receiving Error 2048 - Resource will not loadspeaker10 Mar 31, 2012 2:14 PM (in response to chris.campbell) Hi Chris, Thank you for your response. A little delay in getting in a reply. My 1st time in the forums. I would appreciate any help that you can provide. In answer to your question. There are 4 URLs that I am concerned with currently. All are at the same root URL. These are flash video's that I wish to send and use with people. I was able to access them with no problem until I upgraded my flash player. I don't think it is a flash player problem judging from the number of users getting this same error code, but I don't know. Most indications are that somehow the content on the following URLs is being blocked. Can you help identify what is blocking the resource download? The sites that do not work are: 1. 2. 3. 4. Any ideas are appreciated. Thank you for your help 3. Re: Receiving Error 2048 - Resource will not loadchris.campbell Apr 2, 2012 12:17 PM (in response to speaker10) I wasn't able to reproduce this using either Win 7 or XP and IE with Flash Player 11.2. Could you try running through a clean install to see if that helps? How do I do a clean install of Flash Player? 4. Re: Receiving Error 2048 - Resource will not loadspeaker10 Apr 2, 2012 1:59 PM (in response to chris.campbell) Hello Chris, Thank you for your reply. I think I have identified the problem. I am not surprised that they worked for you. I did some further testing. The missing ingredient is that not only did I upgrade my flash, I ALSO started using a wireless hotspot operating at 4G speeds. The limiting factor for downloading the content was the speed of the wirelesss adapter card installed in my computer that conected to the hotspot. The adapter is maxed out at 54mbps which is not fast enough to support the resource download timeout limit producing error 2048. Solution - upgrade the wireless adapter in my PC to an Intel Centrino device with 450mbps capability. I'll confirm its success after the upgrade. 5. Re: Receiving Error 2048 - Resource will not loadchris.campbell Apr 3, 2012 2:48 PM (in response to speaker10) Great to hear, thank you for letting me know. 6. Re: Receiving Error 2048 - Resource will not loadUrgentHere Apr 16, 2012 4:58 AM (in response to chris.campbell) Hi Chris, This is regarding error #2048 and #2032.One of my user got error #2032 when he tried to access the application I have made. To understand the error i compiled a simple flex code and saved the swf file on his desktop and then tried opening it through internet explorer but i received error #2048 in IE. I am not able to understand the cause of the error.the application is working fine everywhere else.I need to fix it at the earliest. The sample code which gave the error# 2048 is: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="" xmlns:flexiframe="" layout="vertical" verticalAlign="top" xmlns:comp="components.*" xmlns: <mx:Script> <![CDATA[ import flash.events.MouseEvent; import flash.net.navigateToURL; import flash.sampler.NewObjectSample; import mx.collections.ArrayCollection; import mx.collections.Sort; import mx.collections.SortField; import mx.controls.*; import mx.controls.DataGrid; import mx.controls.List; import mx.events.*; import mx.events.CloseEvent; import mx.events.DataGridEvent; import mx.events.IndexChangedEvent; import mx.events.ItemClickEvent; import mx.managers.PopUpManager; import mx.rpc.events.FaultEvent; import mx.rpc.events.ResultEvent; import mx.utils.ArrayUtil; import mx.utils.StringUtil; public function loadData():void { Alert.show("it is in here"); } ]]> </mx:Script> <mx:VBox <mx:Label</mx:Label> </mx:VBox> </mx:Application> Regards, Sneha 7. Re: Receiving Error 2048 - Resource will not loadchris.campbell Apr 20, 2012 6:46 PM (in response to UrgentHere) Hi Sneha, Could you please open a new bug report on this over at bugbase.adobe.com? Please post back with the URL so that others affected can add their comments and votes. You might also want to post on the Flash Professional forums? This forum is primarily for end users, the Pro forums will get you in touch with a wider developer audience. Thanks, Chris
https://forums.adobe.com/message/4308929
CC-MAIN-2017-13
refinedweb
852
69.48
, LOW); delay(200); } Arduino requires a unique description -- setup() and loop() -- to be included in the sketch. The setup() function is called only once after reset or power-on. Then, the loop() function is executed repeatedly. For example, the sketch to the right turns the LED on and off using the pinMode(), digitalWrite() and delay() functions. The pinMode() in the setup() defines which pin to use for driving the LED. Then, the digitalWrite() turns the LED on and off. The delay() defines on and off time. When using the KURUMI board, always describe #include <RLduino78.h>, the library that includes all these Arduino compatible functions, in the first line. The best thing about Arduino is its plentiful choice of libraries. These libraries make it easy to not only turn on LEDs, but also emit noises, run motors and connect to networks. Check out the lineup in the Library tab from the menu above. About the Pin Layout The image below describes the pin layout for the GR!
https://www.renesas.com/br/en/products/gadget-renesas/boards/gr-kurumi/kurumi-sketch.html
CC-MAIN-2019-35
refinedweb
167
75.61
Thanks for stopping in to read my problem, I'm working on my first premium Wordpress theme for Themeforest I think it's turned out quite nice, well I guess I should^^. I ran into a problem that I haven't ever encountered, After I worked out the comments on a blog post. My sidebar all of the sudden kept getting bumped down about half the way down the page. I tried several fixes to the problem; however, anything I did that fixed it messed up the sidebar in other browsers; however, I got on my computer to try and fix the problem again, and now the issue has disappeared. I just want to make sure if I can that this 'ghost fix' isn't just on my computer and that it works on all. So if I could get you guys to critique this theme of mine, maybe give me some pointers on things I absolutely need to include before I wrap up this project and start selling it. Also let me know if that sidebar looks like it does on the main page, on the single pages( mainly the one titled 'comments test' ) Original post: Updated: - Only happens when logged in -I have determined that the problem is somewhere in the comments (comments.php) form but i can't find anything that would cause such an occurance, nothing is showing up in googles inspect element (AKA firebug for chrome) the only thing i might have is that the body is supposedly only 350ish pixels high, the tags are in the right spot top just under header and bottom just before html closer. Any ideas Thanks again, Jeremy Carlsten Last edited by Jerm993; 01-16-2011 at 03:39 PM. After Completely starting from a scratch comments.php file, and going through with each piece; copy, paste, save, refresh. I found the issue turns out i had a div stretching over a php endif statement and apparently that's a no no. the simple fix was to create a second div with the same class after the endif statement. There are currently 1 users browsing this thread. (0 members and 1 guests) View Tag Cloud Forum Rules
http://www.webdeveloper.com/forum/showthread.php?241030-Sidebar-Kick-down-Wordpress
CC-MAIN-2014-10
refinedweb
368
72.7
Rose diagram The pygmt.Figure.rose method can plot windrose diagrams or polar histograms. Out: <IPython.core.display.Image object> import pygmt # Load sample compilation of fracture lengths and azimuth as # hypothetically digitized from geological maps data = pygmt.datasets.load_fractures_compilation() fig = pygmt.Figure() fig.rose( # use columns of the sample dataset as input for the length and azimuth # parameters length=data.length, azimuth=data.azimuth, # specify the "region" of interest in the (r,azimuth) space # [r0, r1, az0, az1], here, r0 is 0 and r1 is 1, for azimuth, az0 is 0 and # az1 is 360 which means we plot a full circle between 0 and 360 degrees region=[0, 1, 0, 360], # set the diameter of the rose diagram to 7.5 cm diameter="7.5c", # define the sector width in degrees, we append +r here to draw a rose # diagram instead of a sector diagram sector="10+r", # normalize bin counts by the largest value so all bin counts range from # 0 to 1 norm=True, # use red3 as color fill for the sectors color="red3", # define the frame with ticks and gridlines every 0.2 # length unit in radial direction and every 30 degrees # in azimuthal direction, set background color to # lightgray frame=["x0.2g0.2", "y30g30", "+glightgray"], # use a pen size of 1p to draw the outlines pen="1p", ) fig.show() Total running time of the script: ( 0 minutes 0.896 seconds) Gallery generated by Sphinx-Gallery
https://www.pygmt.org/v0.5.0/gallery/histograms/rose.html
CC-MAIN-2022-27
refinedweb
241
62.07
In Java, the switch statement is used for executing one statement from multiple conditions. it is similar to an if-else-if ladder. Switch statement consists of conditional based cases and a default case. In a switch statement, the expression can be of byte, short, char and int type. From JDK-7, enum, String can also be used in switch cases. Following are some of the rules while using the switch statement: Syntax: Following is the syntax to declare the switch case in Java. switch(expression) { case value1: //code for execution; break; //optional case value2: // code for execution break; //optional ...... ...... ...... ...... Case value n: // code for execution break; //optional default: code for execution when none of the case is true; } In this example, we are using int type value to match cases. This example returns day based on the numeric value. public class SwitchDemo1{ public static void main(String[] args) { int day = 3; String dayName; switch (day) { case 1: dayName = "Today is Monday"; break; case 2: dayName = "Today is Tuesday"; break; case 3: dayName = "Today is Wednesday"; break; case 4: dayName = "Today is Thursday"; break; case 5: dayName = "Today is Friday"; break; case 6: dayName = "Today is Saturday"; break; case 7: dayName = "Today is Sunday"; break; default: dayName = "Invalid day"; break; } System.out.println(dayName); } } As we have said, Java allows to use enum in switch cases. So we are creating an enum of vowel alphabets and using its elements in switch case. public class SwitchDemo2{ public enumvowel{a, e, i, o, u} public static void main(String args[]) { vowel[] character= vowel.values(); for (vowel Now : character) { switch (Now) { case a: System.out.println("'a' is a Vowel"); break; case e: System.out.println("'e' is a Vowel"); break; case i: System.out.println("'i' is a Vowel"); break; case o: System.out.println("'o' is a Vowel"); break; case u: System.out.println("'u' is a Vowel"); break; default: System.out.println("It is a consonant"); } } } } Since Java has allowed to use string values in switch cases, so we are using string to create a string based switch case example. public static void main(String[] args) { String name = "Mango"; switch(name){ case "Mango": System.out.println("It is a fruit"); break; case "Tomato": System.out.println("It is a vegitable"); break; case "Coke": System.out.println("It is cold drink"); } } } It is a fruit Break statement is used to break the current execution of the program. In switch case, break is used to terminate the switch case execution and transfer control to the outside of switch case. Use of break is optional in the switch case. So lets see what happens if we don’t use the break. public class Demo{ public static void main(String[] args) { String name = "Mango"; switch(name){ case "Mango": System.out.println("It is a fruit"); case "Tomato": System.out.println("It is a vegitable"); case "Coke": System.out.println("It is cold drink"); } } } It is a fruit It is a vegitable It is cold drinkSee, if we don’t use break, it executes all the cases after matching case.
https://www.studytonight.com/java/switch-statement.php
CC-MAIN-2021-04
refinedweb
515
56.35
I wanted something closer to "Engineering notation".... I'd like to be able to make the width a parameter (instead of fixed at 9). Being able to go as low as 7 (or even 6) would be way cool. this code should meet those requirements. at least, the mantissa width is fixed, the exponent width varies. it should be easy enough to modify this code to suit your needs, if you wish. i used your test suite, too. #!/usr/bin/perl use strict; use warnings; $|++; Main(); exit; ## adapted from code found at: +ml sub eng { my( $num, $digits )= @_; ## default to smallest number of digits allowing fixed width manti +ssa (4) $digits= defined $digits && 3 < $digits ? $digits : 4; my $neg; if( 0 > $num ) { $neg= 'true'; $num= -$num; } 0 == $num and return sprintf '+%.*fe+%s' => $digits - 1, $num, 0; my $exp= 0 != $num ## perl's log() is natural log, convert to common log ? int( log($num) / log(10) ) ## short-circuit: can't do log(0) : 0; ## tricky integer casting ahead... $exp= 0 < $exp ? int( ( int( $exp / 3 ) ) * 3 ) : int( int( ( -$exp + 3 ) / 3 ) * -3 ); $num *= 10 ** -$exp; if( 1000 <= $num ) { $num /= 1000; $exp += 3; } elsif( 100 <= $num ) { $digits -= 2; } elsif( 10 <= $num ) { $digits -= 1; } 0 <= $exp and $exp= '+' . $exp; return ( $neg ? '-' : '+' ) . sprintf '%.*fe%s' => $digits - 1, $num, $exp; } sub Main { my $digits= 2; for my $exp ( -101..-98, -11, -10..11, 98..101 ) { for my $sign ( '', '-' ) { my $num= 0 + ( $sign . "5.555555555e" . $exp ); printf "%-20s (%s)\n", $num, eng( $num, $digits ); } } for my $exp ( -10..11 ) { for my $sign ( '', '-' ) { my $num= 0 + ( $sign . "1e" . $exp ); printf "%-20s (%s)\n", $num, eng( $num, $digits ); printf "%-20s (%s)\n", 0, eng( 0, $digits ) if 1 == $num; } } } [download] ~Particle *accelerates* In reply to Re: Display floating point numbers in compact, fixed-width format by particle in thread Display floating point numbers in compact, fixed-width format.
http://www.perlmonks.org/?parent=294424;node_id=3333
CC-MAIN-2017-26
refinedweb
312
82.75
Published 11 months ago by wardaddy I want to show all the rows of data from the database plus data from joins function Channels Table | id | user_id | name | |----|----------|--------| | 1 | 1 | channel #1 | | 2 | 2 | channel #2 | | 3 | NULL | channel #2 | | 4 | NULL | channel #2 | Users Table | id | name | |----|--------| | 1 | John | | 2 | Jane | here my code dashboardController.php public function index() { $channels = Channel::join('users', 'channels.user_id', '=', 'users.id') ->select('channels.*', 'users.name as dj_name') ->orderBy('updated_at', 'desc') ->get(); return view('dashboard.master', compact('channels')); } master.blade.php @foreach ($channels->take(4) as $channel) <div class="col s12 m4 l3"> <div class="card grey lighten-4 z-depth-0"> <div class="card-content"> <h5 class="truncate">{{ $channel->name }}</h5> <div class="d-flex justify-content-between align-items-end"> <div> @if ($channel->user_id == NULL) <p class="lead mt-1">No DJ here!</p> @else <p class="lead mt-1">DJ : {{ $channel->dj_name }}</p> @endif <small>ID Channel : {{ $channel->id }}</small> </div> </div> </div> </div> </div> @endforeach if I run this code, it only shows channels that have user_id and does not display channels with user_id = "NULL" what should I do to display a channel that has users_id along with a channel with user_id = "NULL"? try a leftJoin instead of a join. Also, it's much easier to get data from related models if you take advantage of Eloquent relationships. Following documentation: In your Channel model. class Channel extends Model { public function user() { return $this->belongsTo('App\User'); } } Then in your controller you can do $channels = Channel::all(); and in your blade, retrieve the DJ name as $channel->user->name @wardaddy If you're goint to do it the eloquent way, you should know about eager loading. Basically // If you retreive the channels normally like this $channels = Channel::all(); // Then you'll be making an extra query for each channel's user you want to access. (This is know as the N+1 query problem) // If you retreive the channels, eager loading the users at the same time... $channels = Channel::with('user')->all(); // like this // Then you're retreiving the users beforhand, you're only running 2 queries. Instead of N + 1 Please sign in or create an account to participate in this conversation.
https://laracasts.com/discuss/channels/general-discussion/how-to-get-all-the-rows-of-data-joins-data-from-the-database?page=1
CC-MAIN-2018-39
refinedweb
372
57.2
TemplateParser::TemplateParserJob #include <templateparserjob.h> Detailed Description The TemplateParser transforms a message with a given template. - Introduction - The TemplateParser transforms a message with a given template. A template contains text and commands, such as QUOTE or ODATE, which will be replaced with the real values in process(). - Basics - The message given in the templateparser constructor amsg, is the message that is being transformed. aorig_msg is the original message to which actions are performed. The message text in amsg will be replaced by the processed text of the template, but other properties, such as the attachments or the subject, are preserved. There are two different kind of commands: Those that work on the message that is to be transformed and those that work on an 'original message'. Those that work on the message that is to be transformed have no special prefix, e.g. 'DATE'. Those that work on the original message have an 'O' prefix, for example 'ODATE'. This means that the DATE command will take the date of the message passed in the constructor, the message which is to be transformed, whereas the ODATE command will take the date of the message that is being passed in process(), the original message. - The process() - The process() function takes aorig_msg as parameter. This aorig_msg is the original message from which various commands in templates with prefix 'O' extract the data and adds to the processed message. This function finds the template and passes them to processWithTemplate(), where templates are processed and its value is added to the processed message. - Reply To/Forward Plain Text Mails - Plain Text mails are the mails with only text part and no html part. While creating reply/forward to mails, processWithTemplate() processes all the commands and then appends its values to plainBody and htmlBody. This function then on the basis of whether the user wants to use plain mails or HTML mails, clears the htmlBody, or just passes both the plainBody and htmlBody unaltered. - Reply To/Forward HTML Mails - By HTML mails here, we mean multipart/alternative mails. As mentioned above, all commands in the TemplateParser appends text, i.e. plain text to plainBody and html text to htmlBody in the function processWithTemplate(). This function also takes a decision of clearing the htmlBody on the basis of fact whether the user wants to reply/forward using plain mails or multipart/alternative mails. - When "TO" and when "NOT TO" make multipart/alternative Mails - User is the master to decide when to and when not to make multipart/alternative mails. For user who <u>don't prefer</u> using HTML mails There is a TemplateParserSettings::self()->replyUsingHtml() (in GUI as Settings->Configure KMail-> Composer->General->"Reply using HTML if present"), which when not true (checkbox disabled in UI), will clear the htmlBody. An another option within the standard templates, FORCEDPLAIN command raises the flag, ReplyAsPlain. This flag when raised in processWithTemplate() takes care that the processed message will contain text/plain part by clearing the htmlBody. Once the htmlBody is cleared, plainBody and an empty htmlBody is passed to addProcessedBodyToMessage(). Here since the htmlBody is empty, text/plain messages are assembled and thus user is not dealing with any kind of HTML part. For user who <u>do prefer</u> using HTML mails The setting discussed above as "Reply using HTML if present" (when checked to true), passes the htmlBody to addProcessedBodyToMessage() without doing any changes. An another option FORCEDHTML within standard templates command raises the flag ReplyAsHtml. This flag when raised in processWithTemplate() takes care that the htmlBody is passed to addProcessedBodyToMessage() unaltered. Since htmlBody received by addProcessedBodyToMessage() is not empty, multipart/alternative messages are assembled. Resolving conflict between TemplateParserSettings "replyUsingHtml" and FORCEDXXXX command. The conflict is resolved by simply giving preference to the commands over TemplateParserSettings. - Make plain part - mMsg is the reply message in which the message text will be replaced by the processed value from templates. In case of no attachments, the message will be a single-part message. A KMime::Content containing the plainBody from processWithTemplate() is created. Then the encodedBody(), contentType (text/plain) of this KMime::Content is set in the body and the header of mMsg. The addContent() method can be used for adding sub-content to content object in case of attachments. The addContent() method is not used for adding content of the above mentioned single-part, as addContent() will convert single-part to multipart-mixed before adding it to mMsg. - Make multipart/alternative mails - First of all a KMime::Content (content) is created with a content-type of multipart/alternative. Then in the same way as plain-part is created in above paragraph, a KMime::Content (sub-content) containing the plainBody is created and added as child to the content. Then a new KMime::Content (sub-content) with htmlBody as the body is created. The content-type is set as text/html. This new sub-content is then added to the parent content. Now, since the parent content (multipart/alternative) has two sub-content (text/plain and text/html) to it, it is added to the reply message (mMsg). TODO: What is the usecase of the commands that work on the message to be transformed? In general you only use the commands that work on the original message... Definition at line 132 of file templateparserjob.h. Member Function Documentation Sets whether the template parser is allowed to decrypt the original message when needing its message text, for example for the QUOTE command. If true, it will tell the ObjectTreeParser it uses internally to decrypt the message, and that will possibly show a password request dialog to the user. The default is false. Definition at line 118 of file templateparserjob.cpp. Sets the list of charsets to try to use to encode the resulting text. They are tried in order until one matches, or utf-8 as a fallback. Definition at line 134 of file templateparserjob.cpp. Set the identity manager to be used when creating the template. Definition at line 129 of file templateparserjob.cpp. Sets the selection. If this is set, only the selection will be added to commands such as QUOTE. Otherwise, the whole message is quoted. If this is not called at all, the whole message is quoted as well. Call this before calling process(). Definition at line 113 of file templateparserjob.cpp. Tell template parser whether or not to wrap words, and at what column to wrap at. Default is true, wrapping at 80chars. Definition at line 1405 of file templateparserjob.cpp. The documentation for this class was generated from the following files: Documentation copyright © 1996-2022 The KDE developers. Generated on Sat Jan 15 2022 23:05:56 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
https://api.kde.org/kdepim/messagelib/html/classTemplateParser_1_1TemplateParserJob.html
CC-MAIN-2022-05
refinedweb
1,133
64.2
Consider this: public struct NiceStruct { public float Floating; } public class JustTesting { public void DoSomething(NiceStruct niceStruct) { niceStruct.Floating = 42; return; } } Why isn't ReSharper flagging the fact that setting the struct member Floating to 42 is futile? The value is discarded when the method returns. (I encountered something like this in a program posted on Code Project. No wonder it didn't work! But it took me a while to find the error, and it would have helped immensely if ReSharper had pointed it out to me.) In fact, this is such an obvious error I'm surprised that Visual Studio is not flagging it. Can someone from JetBrains please comment on this. Thanks. Well, the reason is pretty obvious because this will never produce any side-effects, structs are always passed and copied by value, so when you pass a struct to the method it actually creates a copy of the struct so why should R# warn you about something that will never impact anything else but its own scope? Not sure I agree with that. Consider the following code: This will be flagged with the "value assigned is not used" warning. Why shouldn't the same warning appear if the value is assigned to a field of a struct passed by value? One of the wonderful things about ReSharper is that it flags potential / probable programming errors. This is a very probable programming error. I found an example of it in a Code Project program, and the program didn't work the way the programmer intended - he thought the value was getting changed in the caller's struct. It took me a while to find it, and it would have been a great help if ReSharper had flagged it for me. It's also a bit subtle, as this does work if the passed object is a class and not a struct. Maybe that's what happened, maybe the programmer had originally defined NiceStruct as a class (named NiceClass?) and then later on while trying to optimize things he said to himself, "hey, structs are more efficient than classes, I'll just change NiceClass into a struct!" Or maybe a second programmer who was in love with structs changed the first programmer's classes to structs. And as Richard has pointed out, ReSharper does warn about a similar situation involving simple variables. Well, if the parameter isn't used then you should see that R# marks it as dead code, at least it does for me. Go to Inspection Severity and make sure that Unused Parameter is set to anything on the list except to 'Do Not Show'. But yeah, it might be a bug or something because it doesn't annotate the parameter like it does in Richard's example. However, the examples don't really describe your case, because you're saying the parameter was actually used inside the method so you expect R# to tell you that the value is overwritten, right? which makes more sense to me because it doesn't issue any warning for it and I think it should. Eyal, thanks for your interest. Here's an updated sample program that better shows how insidious this problem is: public class ObjectA { public float TheAnswer; } public struct ObjectB { public float TheAnswer; } public class JustTesting { public static void Main() { ObjectA objectA = new ObjectA(); ObjectB objectB = new ObjectB(); DoSomething(objectA, objectB); Console.WriteLine("Answer A = " + objectA.TheAnswer + ", answer B = " + objectB.TheAnswer); } public static void DoSomething(ObjectA objectA, ObjectB objectB) { // Many complicated and inscrutable calculations // Now return the ultimate answer to the caller objectA.TheAnswer = 42; objectB.TheAnswer = 42; } } The output of this program is: Answer A = 42, answer B = 0 In the Code Project program I'm talking about the programmer was just using a struct (like ObjectB), and he apparently thought he could pass data back to the caller via the struct's members. Which doesn't work, because that's the way structs are. What I'm saying is that the last statement in DoSomething() looks very much like a programming error, and that it would be nice if ReSharper flagged it as such. Do you want RS to give warning in the below scenario as well? I.e. should RS treat this code as "bad practice" and issue a warning? I think RS only safely can display a warning if foo.Bar is assigned a value, but foo.Bar is never read, but isn't it already doing that? bool Foo(SomeStruct foo) { while (foo.Bar>0) { foo.Bar = GetSomethingCalculated(); if (foo.Bar <= 0) return true; } return false; } > I think RS only safely can display a warning if foo.Bar is assigned a value, but foo.Bar is never read, Agreed. > ... but isn't it already doing that? I'm not getting any warning for the sample program I posted. Maybe I have a wrong option set? I'm also a bit disappointed that nobody from JetBrains has replied on this thread. Not sure why they aren't saying anything, I'm waiting for their answer to two topics myself and it seems like they either really busy or ignoring them, go figure. :/
https://resharper-support.jetbrains.com/hc/en-us/community/posts/206660685-Why-doesn-t-ReSharper-flag-erroneous-use-of-structs-
CC-MAIN-2020-16
refinedweb
861
63.49
) We're excited to bring you Brigade 0.7.0, with features and bugfixes. We made some big changes that may impact existing users. Join the discussion in the Kubernetes slack #brigade room! The big feature of this release is the much-improved handling of GitHub pull_request events. For security reasons, Brigade now blocks pull requests on forked repos from triggering pull_request events. To disable this behavior, you can set the BRIGADE_BUILD_FORKED_PULL_REQUESTS environment variable on the Gateway's deployment. Breaking Changes We try to be careful with our breaking changes. But prior to the 1.0 release of Brigade, we also occasionally make changes that are inconvenient but (we believe) in the overall best interest of the project. Summary of breaking changes: - The imagePullevent was renamed image_pullto match other events. - The Dockerhub/ACR webhook integration has been moved to a separate gateway service. The gateway has been split in two. The existing gateway (which will likely be renamed to "github-gateway" in the future) now only handles GitHub's push and pull_request hooks. A new gateway, the Container Registry Gateway ( cr-gateway) now handles container registry webhooks (event image_push) from Dockerhub and ACR. Because that webhook API does not have authentication, we have turned this gateway off by default. To enable it with Helm, use --set cr.enabled=true. In the future, we will be moving toward gateway microservices instead of configuring one gateway to listen to lots of different external services. Changes - docs(javascript): document the JS api for enabling docker socket 34f50ce (Matt Butcher) - fix(charts): add roles to services, deployments, pods c92dc82 (Matt Butcher) - fix(webhook): do not set ssh key in environment cd23d1d (Adam Reese) - fix(worker): add regexp to validate job name f2a1d19 (Matt Butcher) - fix(git-sidecar): add openssh back in b500bd3 (Matt Butcher) - fix(worker): print worker version in logs a8ac06c (Adam Reese) - ref(*): name vcs-sidecar consistently across components 2edd731 (Adam Reese) - fix(gateway): correctly set default flag 8a131e7 (Adam Reese) - ref(cr): set default namespace for cr-gateway fd97ad2 (Adam Reese) - feat(gateway): make forked pull-request builds optional 9b330c2 (Adam Reese) - feat(cr-gateway): break container registry gw into separate binary (#99) 6dd534d (Matt Butcher) - fix(brig): remove dead code a366e02 (Matt Butcher) - fix(api): improve performance of fetching builds fe6a95c (Matt Butcher) - ref(*): set flag defaults a8e0721 (Adam Reese) - fix: delete the dead link in the Github status payload ba4dfd7 (meyerbaptiste) Downloads This release includes an array of bug fixes and feature additions. - Docker socket can now be mounted if the project allows - RBAC has been fixed - GitHub hooks no longer mark the build successful by default - brig now correctly waits for a build to start - Clone depth is limited to 50 - and more Changelog - Limit the clone depth at 50 to speed up builds bb98a1f (Kévin Dunglas) - feat(worker): add ability to configure the image pull policy of a job 98737b2 (meyerbaptiste) - fix(brig): wait for pod to start 22bfd67 (Matt Butcher) - docs(design): explain why we use secrets instead of CRDs bee54fa (Matt Butcher) - feat(worker): allow disabling privileged mode at project level 43265eb (Matt Butcher) - fix(worker): fix PVC RBAC resource aeace3b (Sean Knox) - feat(api): list projects with latest builds 07a9c28 (Matt Butcher) - set github status to pending after creating build secret 148758e (Matthew Fisher) - Use alpine edge to get [email protected] 9ff41ed (abluchet) - allow host mounted docker sockets c46a3a4 (Matthew Fisher) - mount host docker socket when enabled 5c1502a (Matthew Fisher) Downloads This release of Brigade contains a number of improvements and fixes, and we recommend that all users upgrade. Breaking Changes From 0.5, we now require that a Build ID (in the form of a ULID) and a Build Name (arbitrary, but unique, string) be set on the Event secret by the gateway. If a build ID is not supplied, the controller will no process the request. For developers: The Makefile got a major upgrade. You may need to re-run make bootstrap. Some make targets changed, too. For example, to build the client you can now make brig. Changelog These reflect changes since 0.3.0, since we forgot to do 0.4.0 release notes. - fix(controller): set service account on worker pod 2d10908 (Matt Butcher) - fix(tests)): accept environment vars for selftest.sh 5eb8f5b (Adam Reese) - ref(*): remove unused field Worker.Commit 783fe48 (Adam Reese) - fix(brigade.js): bump go v1.9 dc9619f (Adam Reese) - fix(worker): stop after from firing when no hook was executed. 086bbb3 (Matt Butcher) - docs(*): switch recommendation to helm repo 83de2b7 (Matt Butcher) - docs(scripting): update brig command name a88a930 (Adam Reese) - fix(worker): wrong default image name. (#144) 3601c17 (Matt Butcher) - fix(chart): fix if-checks and add if wrappers to all optional values aae15b1 (lukepatrick) - ref(Makefile): add build tag to integration tests c765fc8 (Adam Reese) - fix(brigade-worker): fix type error when no sidecar is set 100e0b6 (Adam Reese) - fix reference to brigade-server-brigade-gw cc8b742 (Matthew Fisher) - git mv chart/ charts/ c94f82f (Matthew Fisher) - fix(git-sidecar): fetch remote SHA before checkout 10597f0 (Adam Reese) - fix(api): show secret names, but not values (#128) db6d0f4 (Matt Butcher) - fix(chart): add guard for vcs-sidecar 9622899 (Adam Reese) - Update demo paths in docs (#134) 71f0ace (Luke) - Update documents with Client-only build instructions 974d49c (lukepatrick) - fix(Makefile): fix ldflags version 563d1e1 (Adam Reese) - fix(api,controller,worker): use ULID for ID (#122) 9f69ad2 (Matt Butcher) - Update rbac instructions f61ad2b (Jesse Keating) - fix(*): remove hard-coded image versions 6ba0d6e (Adam Reese) - chore(*): Brigade release v0.4.0 3259978 (Adam Reese) - Update documentation for runEach 2708ac1 (Marius Nordrik) - Make runEach return JobResult of all Jobs 13a060d (Marius Nordrik) - fix(makefile): add CGO_ENABLED=0 for build-docker-bin 078eca0 (u2takey) - fix(api): prevent exposing secrets on api 194ea13 (Adam Reese) - fix brigade.js for dep 058a284 (u2takey) - docs(README): fix examples in README 5d565ba (Matt Butcher) - Update location of brigade-project helm chart 3efd737 (Jesse Keating) - ref(chart): move brigade-project to chart dir d49957b (Matt Butcher) - projects typo fix d6260d9 (Jesse Keating) - Add missing highlights to object names ea4201f (Jesse Keating) - Typo fix 019ab25 (Jesse Keating) - ref(*): delete dead code, source cleanup e1ef97c (Adam Reese) - ref(chart): make chart smart about versions 1afa3d1 (Matt Butcher) - fix(brig): remove default value for -f 44e7ec2 (Matt Butcher) - docs(brig): update Brig help text 498ce51 (Matt Butcher) - fix(brig): read the file c1a0c5e (Matt Butcher) - move worker related setting to values d034b0b (u2takey) - change node:8 -> node:8-alpine 2625b71 (u2takey) Downloads This release fixes several small bugs, and now works on Kubernetes 1.8 Kubernetes 1.8 dropped support for the initContainer annotation, which we were using to support older Kubernetes versions. This release switches to spec.initContainer for the init container definition. Breaking Changes - This build no longer works with Kube 1.6 and prior - We switched from glideto depfor package management, so building from source has a new requirement. Errata - You should continue to build Brig from master, not from this tag. Changelog - fix(worker): fix error in unit test 765d832 (Matt Butcher) - chore(charts): update charts to 0.3.0 a99afc1 (Matt Butcher) - feat(*): switch to dep for go dependency management e820d37 (Adam Reese) - fix(tests): bump empty-testbed sha 856730b (Adam Reese) - fix(worker): use new schema for defining init containers f4c7f29 (Adam Reese) - fix(controller): fix typo in image name 825274f (Adam Reese) - chore(*): bump client-go to latest stable b969db7 (Adam Reese) brigade v0.16.0: project create|deletecommands brig builds listcan now take a project name brigade-projectchart is deprecated, but not removed yet. It will be removed before 1.0.0-alpha.1 (in favor of brig project create). What's Next? Known Issues Our cross-compile of brigis failing for macOS and Windows currently. We are repairing it and will post binaries when we can. UPDATE: Binaries for brigare now available. Changelog Jobdocs (#537) 4e52c4a (Mike Bannister) brig dashboardc6e35fe (Matthew Fisher)
https://www.ctolib.com/article/releases/65297
CC-MAIN-2018-43
refinedweb
1,327
58.01
We are about to switch to a new forum software. Until then we have removed the registration on this forum. Hi everyone ! I'm really excited to start using the toxiclibs library following this Daniel Shifmann tutorial but i'm having hard times implementing the following snippet in Python: class Particle extends VerletParticle2D { Particle(float x, float y) { super(x, y); } void display() { fill(255); ellipse(x, y, 10, 10); } } This is supposed to get the x and y coordinates from the main .pde file and make them "move"/'"change" according to the toxiclibs VerletPhysics engine. I believe the corresponding Python code would be: import toxi.physics2d.VerletParticle2D as VerletParticle2D class Particle(VerletParticle2D): def __init__(self, x, y): super(Particle, self).__init__(x, y) self.x = x self.y = y def display(self): fill(255) ellipse(self.x, self.y, 10, 10) But I must have missed something because the coordinates don't change and all my points stay still on the screen. Do you have any idea what I'm doing wrong here ? Answers Thanks for your answer GoToLoop. I must admit I don't really understand how this code could help me. I need to get the coordinates from the main .pyde file so I think this part is mendatory (?) : Now, following that logic I should then write: def display(self):#'self' is important right ? Below the main .pyde file: AFAIK, the way I've refactored your Particle class should work alright. :ar! The console output from: print Particle(width>>1, height>>1).display() as {x:50.0, y:50.0}proves that the class had been properly initialized. :-B Why don't you test it the way it is now, then make your own changes to it later? :> I did ! I'm still fiddling with your code right now, trying to figure this out. Version 1.2 now: x, y = p.x(), p.y(); print p, x, yprints out: {x:50.0, y:50.0} 50.0 50.0B-) Sorry still don't get it, I must be dumb. How can I specify x and y IN the inheritance part ? ToxicVerletParticle2D.pyde: Particle.py: If I can get away w/ it, I very much prefer not to implement my own __init__()when extending a class. 8-| In particular, b/c parent class VerletParticle2D got both fields x & y, plus their corresponding getters x() & y(), that forces Jython to have to choose either the fields or the getter methods. :-SS It is clear to me that Jython picked the getters x() & y() over the fields x & y. :-\" Your code surely works but whenever I try to apply to my sketch it doesn't work at all. Thanks for your time anyway. Well, could you at least post your attempt to use my own Particle class version? L-) Also, I don't think the Toxic library can be imported via add_library()! :-& The thing is my code initialize the class but nothing moves. At first I thought it had to do with my initialization but now I come to think it has something to do with the Python mode not fully working with the toxiclibs library I'm still curious to know what would have been your version of the extension using __init()__ Ru doing more than passing fields x & y inside your __init__()? :-/ In case you're not, there's no advantage on implementing your own __init__(). :-@ Like I've already asked, w/o seeing how you would attempt to use my own Particle implementation, it's hard to have an exact idea what's going wrong! #-o Got it after an hour messing with your answer: Just that simple ! I feel really dumb tbh. Thank you GoToLoop ! I've merged the code you've provided w/ my own subclass Particle. :-bd Also, I've added an __init__()constructor w/ super() for my subclass Particle too: :bz def __init__(p, *args): super(Particle, p).__init__(*args) It's been posted commented out b/c it's actually redundant & unnecessary. :-\" But you can remove the #there if you wish to check that out for yourself. :> ToxicVerletParticle2D.pyde: Particle.py: Beautiful ! I'm going to test this right now. Many thanks ! Latest "Toxic VerletParticle2D" version 3.0; now based on the original Daniel Shifman's "Cloth2d" Java Mode sketch from Coding Train Challenge YouTube tutorial: \m/ ToxicVerletParticle2D.pyde: Particle.py: Spring.py: That's neat. I'd like to add a mouseEvent (to drag the particles with the mouse and see the distortion) in the Particle class of this sketch (my version) but can't figure how to make it work. Should I post a new question or keep asking here ? Yea, you may choose to start a new forum thread for it. But given it's related to the stuff here, also post a link to this thread there. Will do thanks. Given Toxiblibs is also available under JavaScript as well: ~O) I've converted "Toxic VerletParticle2D" to p5.js too: B-) And you all can check it online here: :bz index.html: Particle.js: Spring.js: sketch.js:
https://forum.processing.org/two/discussion/comment/117174/
CC-MAIN-2020-40
refinedweb
850
74.59
Getting Started To begin evolution, we need to create a seed genome and a population from it. Before everything though, we create an object which holds all parameters used by NEAT: import MultiNEAT as NEAT params = NEAT.Parameters() params.PopulationSize = 100 This is usually the point where all custom values for the parameters are set. Here we set the population size to 100 individuals (default value is 300). Now we create a genome with 3 inputs and 2 outputs: genome = NEAT.Genome(0, 3, 0, 2, False, NEAT.ActivationFunction.UNSIGNED_SIGMOID, NEAT.ActivationFunction.UNSIGNED_SIGMOID, 0, params, 0) Notice that we set more properties of the genome than just number of inputs/outputs. Also, if the number of inputs you're going to use in your project is 2, you need to write 3 in the constructor. Always add one extra input. The last input is always used as bias and also when you activate the network always set the last input to 1.0 (or any other constant non-zero value). The type of activation function of the outputs and hidden neurons is also set. Hidden neurons are optional. After the genome is created, we create the population like this: pop = NEAT.Population(genome, params, True, 1.0, 0) # the 0 is the RNG seed The last two parameters specify whether the population should be randomized and how much. Because we are starting from a new genome and not one that was previously saved, we randomize the initial population. Evolution can run now. For this we need an evaluation function. It takes a Genome as a parameter and returns a float that is the fitness of the genome's phenotype. def evaluate(genome): # this creates a neural network (phenotype) from the genome net = NEAT.NeuralNetwork() genome.BuildPhenotype(net) # let's input just one pattern to the net, activate it once and get the output net.Input( [ 1.0, 0.0, 1.0 ] ) net.Activate() output = net.Output() # the output can be used as any other Python iterable. For the purposes of the tutorial, # we will consider the fitness of the individual to be the neural network that outputs constantly # 0.0 from the first output (the second output is ignored) fitness = 1.0 - output[0] return fitness So we have our evaluation function now, we can enter the basic generational evolution loop. for generation in range(100): # run for 100 generations # retrieve a list of all genomes in the population genome_list = NEAT.GetGenomeList(pop) # apply the evaluation function to all genomes for genome in genome_list: fitness = evaluate(genome) genome.SetFitness(fitness) # at this point we may output some information regarding the progress of evolution, best fitness, etc. # it's also the place to put any code that tracks the progress and saves the best genome or the entire # population. We skip all of this in the tutorial. # advance to the next generation pop.Epoch() The rest of the algorithm is controlled by the parameters we used to initialize the population. One can modify the parameters during evolution, accessing the pop.Parameters object. When a population is saved, the parameters are saved along with it.
http://multineat.com/docs.html
CC-MAIN-2018-43
refinedweb
523
57.87
In this topic, we’ll cover the Python OpenCV library in complete detail. Computer Vision refers to the field of study which deals with how computers perceive images. It involves feeding images into a computer and then trying to gain high-level intelligence from it using different algorithms. It works in close coordination with fields like Machine Learning and Artificial Intelligence. Computer Vision is a broad field and is progressing rapidly. Computer Vision has a variety of real-world applications: - Object detection - Facial Recognition - Self-Driving Cars - Cancer-Detection One of the popular tasks under the broad field of Computer Vision is Image Processing. Image processing involves performing some operations on an image, in order to get an enhanced image or to extract some useful information from it. A major part of object detection is solved using Convolution Neural Networks. Table of Contents - 1 What is a Convolution Neural Network? - 2 What is OpenCV? - 3 1. Installing OpenCV - 4 2. How to read images using Python OpenCV? - 5 3. Convert an Image to Grayscale using OpenCV - 6 4. Detecting Edges using OpenCV - 7 Conclusion What is a Convolution Neural Network? A convolutional neural network is a class of deep neural networks that can analyze image data. It is able to draw useful high-level information from image data. These networks can be trained for recognizing objects, facial features, handwriting, and image classification. A Convolutional Neural Network usually contains a combination of the following layers. - Convolutional Layers - Pooling Layers - Flattening Layers. Let’s briefly discuss about these layers. 1. Convolution Layer The. When it comes to Python, OpenCV is the library that offers the best image processing tools. In this tutorial, we will learn how to read images into Python using OpenCV. We will also look at some basic image processing operations. What is OpenCV? OpenCV is a library of programming functions mainly aimed at real-time computer vision. Apart from importing and saving images, OpenCV also provides image processing operations such as edge detection, segmentation, Morphological operations and lots more. We will cover some of these operations in this tutorial. Before we move any further, let’s install OpenCV onto our system. 1. Installing OpenCV To install OpenCV use the pip command as shown below : pip install opencv-python Once you are done with the installation, you can get started with importing an image using OpenCV. 2. How to read images using Python OpenCV? Let’s select a sample picture that we can import using OpenCV. We are going to use this very popular image of ‘The Beatles‘. To read this image using OpenCV use : import cv2 img = cv2.imread('beatles.jpg') This will store the image in the variable ‘img‘. Let’s see what happens when we print this variable. import cv2 img = cv2.imread('beatles.jpg') print(img) Output: We get a matrix as the output because this is how your computer perceives an image. For a computer an image is just a collection of pixel values. A digital image is stored as a combination of pixels in a machine. Each pixel further contains a different number of channels. If it a grayscale image, it has only one channel, whereas a colored image contains three channels: red, green, and blue. Each channel of each pixel has a value between 0 and 255. These pixel values together make the image, which we then perceive as ‘The Beatles‘. Let’s learn some image processing operations now. 3. Convert an Image to Grayscale using OpenCV In this section we will convert our sample image to grayscale and the display it. import cv2 img = cv2.imread('beatles.jpg') gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #show print(gray_image) cv2.imshow('image',gray_image) cv2.waitKey(0) cv2.destroyAllWindows() This piece of code will first convert the image into grayscale. The line of code responsible for doing that is : gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) It will then print the image matrix and display the resulting image. The code for displaying any image is: cv2.imshow('image',gray_image) cv2.waitKey(0) cv2.destroyAllWindows() Output : Saving the resulting image You can also save the resulting image for later use. The code for doing that is: import cv2 img = cv2.imread('beatles.jpg') gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) cv2.imwrite('sample_grayscale.jpg',gray_image) 4. Detecting Edges using OpenCV Edge Detection is an important operation under object detection. OpenCV makes it very easy for us to detect edges in our images.. You can play around with these two values to increase or decrease the sensitivity of your edge detector. Here’s the code for detecting edges in your images. import cv2 img = cv2.imread('beatles.jpg') edges = cv2.Canny(img,50,300) cv2.imshow('image',edges) cv2.waitKey(0) cv2.destroyAllWindows() Output : Conclusion This tutorial was an introduction to Computer Vision and OpenCV in Python. We learned how to read and save images using OpenCV. We also covered some basic image processing operations that you can perform using OpenCV. To know more about OpenCV, refer to its official website.
https://www.journaldev.com/45253/python-opencv-image-processing
CC-MAIN-2021-21
refinedweb
842
51.04
Introducing Boemly, our component library for React 3/17/2022 | Lukas Bals I’m very happy to introduce our brand new component library for React - Boemly. Boemly is based on the widely known open-source component library ChakraUI, which is focused on accessibility and speed. Boemly focuses on the same values and enriches ChakraUI with some neat components conceptualised and built by the Tree.ly team. Boemly offers a very limited amount of customizations to keep it simple, only core tokens like fonts, colours, border radii, … can be customised. How to use Boemly? Boemly tries to be as easy to use as possible. It’s based on the widely known provider pattern in React. You have to wrap the whole app in the BoemlyThemeProvider and then use the components of the library. A little example: import { Button, BoemlyThemeProvider } from 'boemly'; function App() { return ( <BoemlyThemeProvider> <Button>Button</Button> </BoemlyThemeProvider> ); } export default App; Useful links: Full documentation → GitHub → (Don’t forget to star!) NPM → Technical details Boemly is 100% written in Typescript and follows the high coding standards at Tree.ly. Same as ChakraUI, Boemly uses Emotion for styling and Framer Motion for smooth animations. To ensure code quality an automated linting, testing, and publishing pipeline is set up. Deploy early, fail fast! Open source Boemly is an open-source project and has already found some contributors. We at Tree.ly believe in the valuability of an engaged developer community. That’s why we want to open-source as much as possible and encourage employees to contribute to open-source projects whenever they want. Boemly is the first open-source project of our young company - contributions are highly appreciated - don’t be afraid of being wrong.
https://tree.ly/en/blog/boemly-announcement
CC-MAIN-2022-21
refinedweb
284
56.45
The reason it’s so easy to "make a sandwich" in an object-oriented language is that some programmer, somewhere, already did the work to define what a sandwich is and what you can do with it. He or she did this using a class. A class defines how to create an object, the properties and methods available to that object, how the properties are set and used, and what each method does. A class may be thought of as a blueprint for creating objects. The blueprint determines what properties and methods an object of that class will have. A common analogy is that of a car factory. A car factory produces thousands of cars of the same model that are all built on the same basic blueprint. In the same way, a class produces objects that have the same predefined properties and methods. In Python, classes are grouped together into modules. You import modules into your code to tell your program what objects you’ll be working with. You can write modules yourself, but most likely you'll bring them in from other parties or software packages. For example, the first line of most scripts you write in this course will be: import arcpy Here you're using the import keyword to tell your script that you’ll be working with the arcpy module, which is provided as part of ArcGIS. After importing this module, you can create objects that leverage ArcGIS in your scripts. Other modules that you may import in this course are os (allows you to work with the operating system), random (allows for generation of random numbers), csv (allows for reading and writing of spreadsheet files in comma-separated value format), and math (allows you to work with advanced math operations). These modules are included with Python, but they aren't imported by default. A best practice for keeping your scripts fast is to import only the modules that you need for that particular script. For example, although it might not cause any errors in your script, you wouldn't include import arcpy in a script not requiring any ArcGIS functions. Read Zandbergen chapter 5.8 (Classes) for more information about classes.
https://www.e-education.psu.edu/geog485/print/book/export/html/111
CC-MAIN-2019-22
refinedweb
367
69.31
Formatting - Blank Lines By Petr on Feb 11, 2010 Another formatting category that you can find in PHP formatting setting is Blank Lines category. If you don't know how to obtain settings for PHP formatter, look at the first blog from the formatting series. This category define adding and deleting blank lines between different elements in the PHP code. As you can see on the picture, user can set how many blank lines the formatter should insert before or after an elements. Property defining blank lines around a namespace declaration and use statement are PHP 5.3 specific. On the picture are displayed all properties that influence the inserting blank lines during formatting and also the default values. The algorithm deletes blank lines in groups of use statements and class fields declarations. The picture on the right sight displays an example with fields, where on the left side is code before formatting and on the right sight is the same code after formatting. There are two groups of fields, the first one contains $field1, $field2, $field3 and $field4 properties. As you can see all the blank lines within the group of fields are removed, except the line before PHP doc comment for $field4 property, where is inserted the number of lines that is set to as Before Field property. The second group of fields contains $field5 and $field6 properties. Similar behavior is for Use. So the properties Before Use, After Use, Before Field and After Field set to number of lines inserted before / after a group of use statements or fields, not the individual occurrences. If there are two elements subsequently in the code that the first defines one blank line after the element and the second defines one blank line before the element, then there are not inserted two lines between these elements, but only one. Similarly only the bigger number of lines is placed between the elements, if the number of blank lines after the first element is not equal to the number of blank lines before the second element. See the picture below. The After Namespace property defines one blank lines after a namespace and Before Use property set to two lines. So two blank lines are created between the namespace declaration and group of use statements. From the picture above should be clear the meaning of properties Before Namespace, After Namespace, Before Use and After Use. The meaning of the rest properties is displayed on the picture below. For better understanding every property is set to one. The lines in the formatted code have color of the property that influence the number of blank lines. A few lines has two colors, because both property are counted for the line according the algorithm that is mentioned above. As you can see there are properties that define blank lines around a class (Before Class and After Class), blank lines after the open brace after class declaration (After Class Header) and blank lines before the close brace in class body (Before Class End). The properties Before Field and After Field were already mentioned. The last properties defines blank lines around a function / method (Before Function and After Function) and also blank lines before the close brace in function / method body (Before Function End). This looks really promising. More rules the team does not have to worry about because the IDE can do the job in place. Thats a big bonus point for netbeans in my book. Just love it, thanks a lot ! Posted by Volker Dusch on February 11, 2010 at 02:32 AM CET # Great! The more configurable NB is, the more we enjoy it Posted by Pet on February 11, 2010 at 02:33 AM CET # I really appreciate your efforts and thanks again for a job well done. However I'm not sure if this is the best (easiest) way to do it. I somehow got the impression you are step-by-step re-implementing functionality to enforce coding styles. Don't get me wrong: I think this is great! But: wouldn't on-board scripting capability do the same thing, plus it could serve far more purposes? Or simply a button to execute some external program and add the current file path to it? If in addition there was an easy way to distribute and share these scripts and settings, this would be heaven! If we just had a chance to use our own scripts in NetBeans, we would write and share those ourselves. BTW: Checking a rule via regular expressions is a 3-liner in PHP and a lot less complicated than in Java. We are PHP-programmers - not Java-programmers. Just don't tell us that we need to write Java to get it up and running. Let us do it in PHP and we will be fine. I'm also worried about Oracle. So if we just had a really easy way to write our own small extensions using our scripting language of choice, we didn't have to worry so much anymore. I also think this would mean a big plus for Java developers as well. Posted by Tom on February 11, 2010 at 02:52 AM CET # To Tom: There are few reasons why there has to be build in formatter. - formatting is not used only for formatting action. - running external script after every action which requires formatting / indentation (pressing enter, inserting an item from code completion window, templates) is too expensive. The editor in such case will be very slow. - you can not have full control over external process, that is started from java application - depending on a php script means that you can not use NetBeans on a machine where is not installed PHP runtime. - running out of the box - it's not possible to put result of parsing (AST) and lexical analysis of the code to a script. Such script has to do it again. - our parser is "hacked" that is able to somehow parse broken code (and in the editor in 90% you have broken code). But you are right, it's possible to do a plugin which will work as you describe. But still there has to be a formatter that will be used by NetBeans internally. Posted by Petr Pisl on February 11, 2010 at 03:28 AM CET # The screenshots described here has some extra features which are not available in the version that I am using. I am using netbeans v6.8 and I am unable to find out extra options for removing blank lines in the code. :-? please help. Posted by gaurav on February 11, 2010 at 04:24 AM CET # To gaurav: These features are not available in NB 6.8. They are available in the development builds, which can be downloaded from netbeans.org (nightly builds : ) or also you can download a continual build from our server: The formatting setting is pretty new for PHP and I'm sure that it is not bugless. If you try it, please report every bug. Posted by Petr Pisl on February 11, 2010 at 05:17 AM CET # This will be even nicer, if these properties can be shared with CodeSniffer: Posted by Bruce Ingalls on February 11, 2010 at 08:02 AM CET # To Bruce: I was thinking about integration the code sniffer directly into NB, but time probably will not permit it. You are right, there is space to improve. Posted by Petr Pisl on February 11, 2010 at 08:28 AM CET # @Petr you might want to talk to the Java guys about CodeSniffer integration. It outputs Java-CheckStyle compatible XML files. Maybe the Java folks already implemented support for CheckStyle that could work just as well for CodeSniffer. Or in case they haven't: maybe they would like to see CheckStyle support in NetBeans as a new feature. Posted by Tom on February 12, 2010 at 02:52 AM CET # @Bruce you might already know that, but your plugin won't work on Windows (probably also won't work on Mac either). Also you can't select your own coding standard in the options dialog and have to use one of the preinstalled ones. Posted by Tom on February 12, 2010 at 04:03 AM CET # If I copy-paste following code and select all and format, why does nb not reformat code so there are no blank lines as specified in format rules?; } } Posted by Pet on February 14, 2010 at 09:38 AM CET # To Pet: Which version of NB do you use? These formatting setting is new in NetBeans 6.9 development builds. Anyway the new lines are corrected only in places, which are defined by the properties. The blank lines in the code (like in if in your example), are not affected. Posted by Petr Pisl on February 15, 2010 at 08:48 AM CET # Nice. This will be even nicer, if you can synchronize your work with PHP CodeSniffer. Take a look at Posted by Bruce Ingalls on February 17, 2010 at 10:20 AM CET # An Option for blank lines after comment-blocks would be nice, so instead of removing all blank lines here: /\*\* \* some comment \* \*/ /\*\* \* another comment \* \*/ and create /\*\* \* some comment \* \*/ /\*\* \* another comment \* \*/ it should be possible to create formatting like: /\*\* \* some comment \* \*/ /\*\* \* another comment \* \*/ So four additional Options like "After Commentblock" "Before Commentblock" and for single-line-comments "After Single Line Comment" and "Before Single Line Comment" would be nice Posted by robo47 on February 17, 2010 at 10:34 AM CET # Sorry about the dual post (Browser cache issue). FWIW, comment preview does not show other comments. Note that CodeSniffer can read properties, such as ini-style files. This may be enough to contact the PEAR & Netbeans plugin author, to see if there is any interest in synergy. I'm not a spokesman for the Whitewashing.de plugin, but some fixes are arriving for issues that you raise. On one hand, RATS & YASCA work nicely for multiple languages. However, it may be possible to leverage an existing PMD plugin for PHPMD support; I think PHPMD currently only supports sniffing, anyway. HTH, Bruce Posted by Bruce Ingalls on February 17, 2010 at 10:55 AM CET # To robo47: I can create new options for blank lines before and after php comment and php doc comment. I'm not sure, whether it would be useful for single line comment, because sometimes you can want to keep more line comments together and other separate. To Bruce: Thanks for the links. I think that the mentioned tools can be integrated, but it's slightly different from formatting. Posted by Petr Pisl on February 18, 2010 at 06:44 AM CET # Code formatter is pretty good thing, and I very appreciated its presence in Netbeans. But due such improvements (which are damn good in general) it produces lots of false changes when your code is under source control, which was saved by NB 6.8. This leads to big trouble when you looks at diff after Reformat in 6.9 and trying to locate small piece of problematic code between huge changes. Just imagine, when you work with team and your team is working on few big projects which are already formated according 6.8 scheme. This new scheme 6.9 is practically denied for whole team, and has to be. Here are some suggestions, which might help with this issue: a) bring back the code formatter from 6.8 and let the user to decide, which version will use, eq. some checkbox, "Use old scheme?" which will actually "switch" the format engine. b) add "leave untouched" checkbox next to every option in formatter settings - especially in "Blank Lines" section, so it will be possible to setup new formatter to behave close to old one. Option A seems to be a quick and elegant solution, but B can bring way better possibilities on beautifying code. Also option for export formatter settings only will be great also. Thanks in advance, Radek Posted by Raazy on June 23, 2010 at 10:42 AM CEST # @Radek: "Old formatter" didn't exist. The formatter before was basically just indenter (functionality of the indenter was called). Yes, the new formatter have few problems. Please report them separately for the cases. The option A basically is nothing, because the Old scheme was not there. The option B is doable. Again the best solution would be if you can file a bug for it. Regarding changes in the source code. It's good practice don't format source code, when there is another change and formatting changes commit separately. So then it's clear that the change is "just" visual. Posted by Petr Pisl on June 24, 2010 at 03:59 AM CEST # @Petr Pisl: Good point with separate commits; it sounds rather like an compromise, but it's possible. Can you tell me, where would be best to file a bug for this so called option B? Thanks Radek Posted by Raazy on June 24, 2010 at 05:42 AM CEST # @Radek: You can enter an issue from this page: . There is also a link to a document, how to enter a bug etc. You have to create netbeans account, but it's easy:). Thanks. Posted by Petr Pisl on June 24, 2010 at 06:29 AM CEST # Sorry for spamming here but I am unable to file a bug. When I trying to open this to post a bug I always get 500 Internal Server Error Is your bugzilla OK? Posted by Raazy on June 25, 2010 at 08:02 AM CEST # Hello, I mostly like the PHP auto-formatting in Netbeans. Currently using (Build 201007040001). One issue is whitespace between end of code and a comment. I like tabs for readable comments that are lined up vertically and auto-format happily wrecks that and I don't see an option to stop it. $code<tab>= something;<tab><tab>// comment etc. $m< ><tab>= lilbit;< ><tab><tab>// more comments It seems to add spaces around operators and after semicolons which is okay, but most of my tabs are getting whacked also. I avoid using is on commented blocks but it is so great to do Ctrl-A then Shift-Alt-F I sometimes forget ;) Thank you for all your efforts -Mike Posted by Mike on July 13, 2010 at 04:13 PM CEST # Hi Mike, could you file a new issue for this. I think it can be considered as a bug. The issue you can enter here Thanks, Petr Posted by Petr Pisl on July 14, 2010 at 07:34 AM CEST # in reply to Bruce post... I'm currently working on a NetBeans plugin, which displays phpMd () and phpCodeSniffer () messages in the nb task area. At the moment it only works with nb 6.9.1, but it works ;) Posted by Jens on November 02, 2010 at 03:46 AM CET # Hi, I was wondering regarding robo47 post on blank lines after comments. I like using editor-folds. However I would have liked to have a blank line between them after i auto format. I'm guessing that a blank line after a comment will not be comfortable in other places so any chance for a blank line after editor-fold? Posted by buenon on August 20, 2012 at 06:29 PM CEST #
https://blogs.oracle.com/netbeansphp/entry/formatting_blank_lines
CC-MAIN-2015-48
refinedweb
2,566
70.13
This release has a focus on bug fixes and a few larger new developments. This release has a focus on bug fixes and other cleanups after the big changes that went into 3.3.0. This release focusses on a reorganization of Wt's theme infrastructure, with the objective of supporting Twitter's Bootstrap CSS framework as a new theme. At the same time we've added a number of widgets for which Twitter Bootstrap provides styling. It is our intention to support the Bootstrap theme (or more specifically, the Bootstrap class names) alongside the themes we already supported (which are based on our own class names). Ignoring what Bootstrap brings, you should be able to upgrade to this release without too much trouble, although you may need to adapt some CSS stylesheets as we did reorganize a number of things which were required for Bootstrap and were a good idea for our own CSS stlesheets too. This release contains mostly bug fixes and one new feature: a payment processing API. If you had massive trouble migrating to 3.2.2 because of the layout rewrite, then you'll appreciate the efforts we've made to make the layout algorithms in 3.2.3 much more robust and consistent. A common nuisance when working with the QueryModel (which retrieves data from the database as needed), is that concurrent database modifications such as insertions of new data, may interfere with the model's mapping of rows to objects (this is in fact a common problem with most ORM's indeed). This mapping may be important, especially when you want to process the user's selection of one or more rows selected by the user, in e.g. a table view. We've added a mechanism to assure that one can request the model for data at a given row, which is guaranteed to be the same row that has been previously retrieved, using the stableResultRow() method. It works by default for simple queries (returning data from one table), but can be easily customized for more complex queries. Support was added for %-based sizes for block widths and table cell widths. In addition, table rendering in Wt::Render has been improved to support repeating headers (<thead> sections) for multi-page tables, and explicit page breaks (using the css page-break-after/before properties). We've changed the implementation of the storage ISO8601AsText format for time stamps (datetime). In the new, corrected, implementation, we generate dates using 'T' as the separator between date and time (as mandated by ISO8601), while the old behaviour used a space (' ') instead as the separator. Sqlite3 supports either format equally. This may however break some applications which use queries for an exact date (or a date comparison), as the results may be affected. The old behaviour is still available as PseudoISO8601AsText, which can be configured using connection.setDateTimeStorage(Wt::Dbo::SqlDateTime, Wt::Dbo::backend::Sqlite3::PseudoISO8601AsText) This release contains mostly bug and feature improvements, but also a rewrite of the layout managers in Wt (WBoxLayout and WGridLayout), and this comes with some changes in (in most cases previously undefined) behaviour. The layout managers have been reimplement, to address various issues with the old implementation, including API (in particular the wonked side-effects of AlignTop | AlignJustify) inconsistencies and bugs. The new implementation no longer uses tables when JavaScript is available, but instead using JavaScript-based layout with absolute positioning. The table-based implementation is still kept for plain HTML sessions (and progressive bootstrap). The code now uses exactly the same layout logic for both horizontal and vertical layout (but giving precedence to horizontal layout) and should be much more consistent (and perhaps also more performant). However, because of the many complexities and problems with the old code (inconsistent behaviour), you may see issues while upgrading. Please see the "Non-backwards compatible changes" below for hints on how to deal with this. A drag & drop mime-type can now be specified on a per-item basis using a new ItemDataRole, and the mime-type for the entire selection is computed from these individual mime-types. Alignment flags Previously, specifying an alignment for a widget in a layout, or for the layout when set to a container, had a double meaning. Not only would it implement the given alignment but also revert to passively letting HTML layout decide the layout of the contents, and adjust the parent (layout respectively container) accordingly. This had all kinds of side effects such as not propagating the size of layout-size-aware widgets and quirks in the vertical alignment. WContainerWidget::setLayout(layout, alignment) has been deprecated and will be removed from a future release. This call was almost always used to let the parent container resize to fit the size of the contained children, instead of fitting children in the parent container. This behaviour is now automatically deduced based on an (empty) size of the parent container. In case this heuristic does not work, then setting a non-0 maximum size on the container using setMaximumSize() will act as a trigger, with the added benefit that the parent will only be allowed to resize up to a specified maximum size. An alignment specified in W(Box/Grid)Layout::addWidget(widget, stretch, alignment) now purely affects the alignment but has no other side effects. The perferred and minimum sizes of a child widget or layout is now always taken into account. Child item sizes The layout algorithm is now implemented entirely in JavaScript, and is more gentle when dealing with a combination of cells (or columns/rows) that have a stretch factor and others that don't. Previously, a minimum (or fixed) size would be used to layout items that do not have a stretch factor. This had for example as a consequence that a WText would be narrowed down to its minimum width by using word wrapping throughout. Now, the preferred size is used for a child item, and shrinking to a minimum size only if necessary. Progressive bootstrapA layout in the first page of an application rendered using progressive bootstrap will no longer fully upgrade to a full JavaScript version, but will result in a hybrid (between table-based and JavaScript-based). If it doesn't work out as you'ld expect, then you should reconsider the use of progressive bootstrap, or the use of a layout manager in your first page. This release contains mostly bug and feature improvements. This release contains a number of new modules, as well as the usual batch of bug fixes and small feature improvements. In this release we also change the WValidator API in a way that it is likely to break existing applications. This release contains many bug fixes and a few new features. This release contains a mix of new features and bug fixes This release contains mostly bug fixes and quality improvements. This release contains initial work on supporting Android and iPad/iPhone as targets for deploying Wt applications within a webkit view widget. For iPad/iPhone, we added a script that builds Wt as an OSX Framework which may be used in XCode to build iOS applications. For Android, we added support for building the library and examples as shared objects which are packaged together with a small Java project which instantiates a WebView, into standalone APK files. This is ongoing work. We need to improve support in Wt for Mobile Webkit to make the applications look and behave more as native applications on these devices. This release contains mostly bug fixes, quality improvements, and a few new features. This release contains mostly bug fixes, quality improvements, and a few new features. Note: the package was updated (3.1.7a) on Nov 30 to fix a layouting regression. This release contains a healthy mix of bug fixes, quality improvements, and new features. And hopefully no regressions :-) This release contains mostly bug fixes. This release contains several new features, but also a few changes that break backwards compatibility (but are unlikely to affect an average application). class User : public Wt::Dbo { ... }with class User : public Wt::Dbo<User> { ... } This release several new features, but also a few changes that break backwards compatibility (but are unlikely to affect an average application). This release contains mostly bug fixes, and a few new features. The minimum boost version is now 1.36. This release handles mostly bug fixes, with as most visible change an update of the polished theme, which is now considered complete. This release contains several new features and classes, after a long period of stabilization that happened before the 3.0.0 release. This release contains mostly bug fixes, build improvements and documentation improvements compared to the latest pre-release (2.99.5). Most build improvements are related to finding the boost libraries. Previously, Wt used a custom script, since CMake versions < 2.6 did not provide a good enough script for finding boost. Starting with this release, when using CMake 2.6 or later, Wt will use the script that comes with CMake. You can still fall back to the script that comes with Wt, which is still used for older versions of CMake, by defining one of the BOOST_COMPILER or BOOST_VERSION variables. This release contains mostly bug fixes. The previous release (2.99.4) contains some critical bugs that cause mayhem on IE, and a regression with server push. !! This release contains bugs that render it unusable on IE !! This release contains mostly bug fixes and back-end improvements. The most exciting new feature is the addition of a new bootstrap method, which implements progressive enhancements (starting with a plain HTML page, and then upgrading it to an AJAX page if the browser has support), see also the documentation. This release contains mostly bug fixes and small feature improvements. The most notable change that might affect existing applications is a simplified internal path API behavior. This release contains mostly build improvements, bug fixes, and API cleanups. This release contains only build improvements, bug fixes, and API cleanups. This release is a preview for Wt 3.0.0. Many things have changed both in the internals and the API. This is the first release that provides several API changes which are not backward compatible (some of which were post-poned until now). Please read the following notes carefully, especially sections C) and D), to understand what changes to expect and how to adapt existing applications. Support for the C++ boost library < 1.35 has been dropped: Wt now requires at least boost >= 1.35.0. New virtual method rerender() which allows a widget to prepare itself before rendering (and defer internal changes until that time). A widget may ask to be rerendered using askRerender() Widget no longer inherits from WResource, but instead inherits directly from WObject. It was simply a bad idea, and not useful for anything. Instead, they are now accessor member functions: e.g. WInteractWidget::clicked has been renamed to WInteractWidget::clicked(). This has as major benefit that signals can be created on-demand, which leads to drastically lower memory usage and signifcant speedups especially on embedded systems. The change requires that everywhere in your code where you access a signal, you will need to change to add parentheses. For consistency, you may also want to use the same convention for your own widget classes that define signals. The API has been redesigned and greatly simplified. If you are implementing your own resources, then you will need to redesign your implementation. The new API is simpler (requires only one virtual method to be implemented) and more powerful, providing support for continuations to serve large resources without blocking a thread or requiring large memory usage. In addition, resources have better thread-safety: they are now by default reentrant (requests for a single resource may be handled concurrently) and they are protected from concurrently being destroyed by the main event loop. The signature for the virtual validate() method was changed: parameter pos which was ignored anyway has been removed. The methods getArgument() and arguments() were renamed to respectively getParameter() and getParameterMap(). The signature for getParameter() is also different as it returns a pointer to a string, which is 0 when the parameter is not defined, instead of the olde behaviour of throwing an exception. There is a new method that allows to read all values for a parameter, getParameterValues() The 20-byte SHA1 hash based internal pointer has been removed again as the object increase and overhead could not be justified. This release is a maintenance release, with mostly bug fixes and feature improvements. As of now, we will also be listing noteworthy new API features, even if they are no concern for backwards compatibility. This release does not contain changes that break existing applications. This release is as usually a mix of bug fixes, improvements and new features. We have made a significant change to the MVC system, which will break existing program code in case you have implemented your own models (i.e. deriving from WAbstractItemModel) or views widgets (i.e. components that listen to model changes). The WAbstractItemModel interface was changed to support hierarchical models. This means that most methods will now take an extra parameter that specifies the parent WModelIndex, and also all signals have now this extra parameter. Because the parameter has a default value of WModelIndex() which corresponds to the top level parent, the API is largely backwards compatible when merely using the model. It is only those classes that reimplement the interface, or listen to signal events, that are affected. The immediate benefit of the new WAbstracItemModel interface is that it allows us to implement View widgets like the new WTreeView widget. This release has a rather substantial rewrite (and simplification) of Wt's bootstrapping process. In the past, Wt used a frameset trick to be able to load the AJAX-based skeleton when JavaScript was available. Isntead, now, the entire AJAX-based stuff is loaded directly into the bootstrap page. A benefit of the new approach is that we avoid iframe tricks, which have been deprecated from strict HTML and XHTML. But, it was in fact motivated in the first place to support all major browsers for a new internal path API. This new API allows to fully support URL changes and bookmarks in a unified way (i.e. it works equally when the browser supports AJAX, no JavaScript, or is a bot such as google bot). As a consequence, this release contains the following changes that may break your application: The following methods have been deprecated (but are still supported): Wt now installs its include files in a Wt/ subdirectory. You may want to change your build files to pick up this new include directory, or, change your code to scope the include files to look like #include<Wt/WLineEdit> instead of #include<WLineEdit> This release contains the following changes that may break your application: The following methods and enumerations have been deprecated (but are still supported): The following has changed for building Wt: The following has changed in the wt_config.xml file: This release should not contain changes that may break your application. This release should not contain changes that may break your application. The following changes may break your application build: This release should not break any of your applications, but we did deprecate some methods and enumeration types. You are advised to migrate to the replacements methods since we will discontinue support for the older ones in the future. The following methods and enumerations have been deprecated: The following changes affect run-time behaviour: The library dependencies have changed slightly. To build Wt 2.1.0, you need: Furthermore, the Wt::Ext library has been upgraded and now wraps around the extjs 2.x library, instead of extjs 1.x. Some API changes may need a porting effort: This release adds a few new features: This release fixes some build-related problems, as well as smaller bugs. The main improvement in this release is related to use of Wt in resource-constrained embedded systems. The most visible change is that the dependency on the Xerces C++ XML library was dropped in favour of the much smaller Mini-XML library. The draw-back is a reduction of supported character encodings to only UTF8 and UTF16, next to the default locale character encoding (which is typically an 8-bit flavour). When using the built-in httpd, you can now disable support for SSL at compile time, freeing a number of SSL-related dependencies. In the API, more comparison operators (== and !=) were added to WString, and a WViewWidget was added for simple MVC widgets (with the main purpose to reduce session-state at the server). This release contains numerous changes which are likely to cause some porting effort for Wt 1.1.x applications to work properly. If you are upgrading from a 1.99.x release, you will notice that some of these notes have actually evolved, especially with respect to WString and unicode support. Here is a list of changes with respect to Wt 1.1.x that are likely to require your attention, and some tips on how to do the porting. All Wt classes are now inside the namespace Wt. To handle this change, you will need to: Previously, most widgets offered double methods that either used a std::string for literal text, or a WMessage for localized text. In the new release, widgets use Wt::WString for both literal and localized text. WString offers unicode support for both literal as well as localized text. To create a literal string, simply assign or construct a Wt::WString from that string. The strings supported or both narrow and wide C and C++ strings. UTF8 encoded narrow strings may also be converted. To create a localized string, use one of the static methods WString::tr(const std::string key) and WWidget::tr(const std::string key). To help with legacy code, WMessage is now a typedef for WString, but is deprecated and should not be used in new code. Unfortunately, the constructors WMessage(const char *text) and WMessage(const std::string text), changed meaning! While previously they took a key to construct a localized message, they now take a literal text (the exact opposite!), since they are in fact plain WString() constructors. As a consequence your application will display key values instead of resolving those values (but will not break entirely). The new approach offers the benefit of only requiring one method signature for both literal and localizable text. This not only simplifies our work, but more importantly by using WString for displayed text in the API of your own widgets, localization (including the automatic language switching) comes automatically and is decided on by the user of your widget. Fortunately, there is a straightforward trick to handle most consequences of this change: Since Wt 2.0.0, the API for Wt has been changed to use WString instead of C++ narrow strings. WString supports both narrow and wide strings, and provides conversion between both. It does not provide string operations, however, and instead acts as a string container. You should convert to a C++ string type to perform operations. You should also not use WString outside of the user interface part of your application. Previously, the Wt library implemented the main(int argc, char **argv) function, and called a wmain() function which created the WApplication instance. Wt 2.0.0 allows multiple applications to run within a single process. Therefore, the WApplication::exec() approach was no longer feasible. The new approach requires that: int main(int argc, char **argv) { return Wt::WRun(argc, argv, &createApplication); } Wt::WApplication *createApplication(const Wt::WEnvironment& env) { // return a new application object. } Wt 2.0.0 uses a configuration file for a number of settings that could previously be configured at build time of the library, or in the API. The latter functions are: Wt 2.0.0 removed a number of classes that were still in the widget tree, but have been obsoleted by more flexible classes: The constructor and methods that takes a boost::regex object in the WRegExpValidator API have been deprecated, to remove the dependency on boost from the public API. You should consider the std::string based construtor and method instead. Since Wt 1.99.1, we have removed WObject::emit() function. Instead, you may simply call the signal with its arguments, or use the explicit emit method (recommended). To adapt your code, you should: Since Wt 2.0.0, WResource::streamResourceData() returns a boolean value which indicates if all data has been streamed. If you have reimplemented WResource for your applications, you must update the signature and return true. The change is relevant only within the new server-push support that is now in Wt 2.0.0. This allows you to continuously append to the content of a resource.
http://www.webtoolkit.eu/wt/doc/reference/html/Releasenotes.html
CC-MAIN-2014-15
refinedweb
3,489
53.41
Who is Evangeline Lilly Ranked #1,246 in Movies & TV, #32,882 overall Evangeline Lilly Table of Contents - Lost - The Complete Third Season with Evangeline Lilly - Evangeline Lilly Biography - Evangeline Lilly Bio - Lost - The Complete Second Season - Evangeline Lilly Lost - Lost - The Complete First Season - Quick, what do you think of Evangeline Lilly? - Evangeline Lilly Movies - The Latest News on Evangeline Lilly - Evangeline Lilly Videos - Evangeline Lilly Photos - Evangeline Lilly Pictures - Evangeline Lilly Filmography - Evangeline Lilly Films Lost - The Complete Third Season with Evangeline Lilly Matthew Fox, Evangeline Lilly, Josh Holloway, Dominic Monaghan, Terry O'Quinn Lost - The Complete Third Season Warning! Major spoiler alert! I believe that Season Three of LOST is one of those seasons of a show that will have a significant impact on the dynamics of television quite apart from the merits or demerits of the season itself. This is mainly due to various tensions the networks have had in broadcasting serial dramas. Season Two of LOST provoked vast viewer anger over the seemingly endless repeats. All season long they would give us four or five new episodes, only to do three or four repeats. No one knew sometimes if they would be tuning into a repeat or a new episode. To counter this, ABC made the decision to broadcast six episodes in the fall to be followed by sixteen episodes shown without interruption beginning in January. Unfortunately, the six episodes they showed in the fall were almost universally perceived as the weakest group of episodes in the show's run. The results of all this I think will be threefold: 1. In the future, I think the trend with popular serial dramas will be to broadcast shows in uninterrupted hunks. We had already seen this happening with 24. I think after the Season Three debacle with LOST, which saw the show lose a huge number of viewers during its break, this will become far more commonplace. 2. The general perception of the first six episodes of the season was that they dawdled too much, provided too little plot development, and simply didn't advance the narrative sufficiently. Shows tend to learn from the mistakes and failures of other series. Damon Lindelhof of LOST has stated that the writers on the show have attempted to avoid the piling up of mysteries that occurred on TWIN PEAKS and the lack of focus on character rather than plot on THE X-FILES and to emulate the focus on character within the overall narrative that was seen in BUFFY THE VAMPIRE SLAYER. I believe in the future that writers on other serial narratives will strive to make sure that the mysteries on a show are being revealed at a good pace. (Just as I think future writers will try to emulate the pace at which this has been done on BATTLESTAR GALACTICA.) 3. There was also a widespread perception that in much of Season Two of LOST and the first six episodes of Season Three, the overall narrative was simply being padded out to ensure a long run for the show. From the beginning looked like a show that needed fewer rather than more seasons to be truly good, but it appeared that with the ratings monster it was in the first two seasons that the powers that be were hoping they could stretch it out to seven or eight seasons instead of five or six. Luckily, the huge backlash against the show following the first six episodes--a backlash that occurred both among everyday fans and among TV critics--seems to have created a reassessment and in the spring it was announced that LOST would be back for three more sixteen-episode seasons. I was delighted with how positively this announcement was greeted by fans and critics alike. I think the result has been for the networks to recognize that certain kinds of series have only a limited potential in terms of the number of episodes that can be produced, that there are certain series that you can really only produce if you anticipate their going four or five or at most six seasons. The other series this is happening with is BATTLESTAR GALACTICA, which like LOST is more or less telling a single story. Both of these are outstanding series that will benefit from a smaller number of series (the debate among the producers of BSG at the moment is whether they need to end the series at the end of Season Four or whether they will need a Season Five). As a negative example there is PRISON BREAK, which currently is threatening to fall apart for lack of any real plan. The great news about Season Three is that after the long break the show came back as good as ever. If we were rating this as one would rate an ice skater, we'd have to give it a lower rating based on some slips and falls early in its routine. But it rebounded wonderfully and the frustration that most viewers experienced in the fall rarely if ever returned in the spring. Furthermore, they started giving us concrete answers to a host of questions that had been bothering us for ages. We found out all about the others (though not where they originally came from), about the lay out of the island, about the facilities on the island, and a few of -- though by no means all of -- the island's secrets. At the end of the season there were still things we'd like to know about -- Just who is Jacob? What's up with the black smoke? What makes the island so special? What was the genesis of the Dharma Initiative -- but there is no doubt that we knew vastly more than we knew before. There were also many new characters. Ben, whom we knew in Season Two as Henry, was back and became one of the most fascinating characters on the show. And we were introduced to the enigmatic Juliet, whose sad and wistful smile was as impossible to comprehend as the Mona Lisa's. We learned that following the decimation of the hatch at the end of Season Two Desmond experienced visions of the future and seemed doomed to reenacting events. The deep attraction between Jack and Kate was made more explicit even though she ends up furthering things with Sawyer. And as many fans suspected as early as Season One, Locke's father turned out to be the real Sawyer. Our Sawyer coming face-to-face with the real Sawyer was not only one of the highlights of the season but of the entire series. I want to say something about the finale, but without giving away the details of how the last five minutes of the season changes absolutely everything we know about the series. The changes are, interestingly, not so much in new revelations as in ways that are open for the show to proceed narratively in the future. For the past three seasons the narrative has proceeded in the present with flashbacks to the past of various characters. That is no longer possible. In the future the narrative will of necessity either proceed on the island with flash forwards or will take place in the future with flashbacks to events following the end of Season Three. (Sorry to be vague here, but I really think that one should watch Season Three without knowing what happens at the end of the season to change everything so completely.) I honestly have no idea what way they will proceed. If I had to bet, I would say that the show will continue to use flashbacks, but that the main narrative will proceed in the present. The first three seasons took place pretty much exclusively in the calendar year 2004. I believe Season Four could well take place in 2008 with flashbacks to the previous four years. Regardless, the surprising ending changed everything. There is one beef I want to make with the show. As much as I love this series, it has to handle the death of characters worse than just about any I have seen. The first series to kill off a substantial number of central and beloved characters was BUFFY THE VAMPIRE SLAYER. There had been other deaths on television series, to be sure. The death of Deep Throat at the end of the first season of THE X-FILES was close to unprecedented at the time. Previously characters largely died because they wanted to leave a show, like the death of Edith on ALL IN THE FAMILY or Denise Crosby's departure from STAR TREK: THE NEXT GENERATION. But BUFFY created the habit of killing off key characters. I'm not sure I've ever been so completely shocked at the death of any character on TV as I was when Angel killed Jenny Calendar in Season Two of BUFFY. It shattered the hallowed tradition on TV that you simply don't kill off characters you like. It ushered in a new era on TV to great effect. Suddenly, a new sense of danger was introduced to TV. Before you always knew that all the characters would survive any catastrophe, no matter how dire, simply because that was the nature of TV. But after BUFFY and the way that other series so quickly picked up on its willingness to kill characters, a new sense of precariousness extended to almost every show on TV. And TV was certainly the better for it. One thing that made the deaths on BUFFY so compelling was that each one carried such a great price and had such enormous consequences. All the deaths were exceedingly well done. But this has not been the case on LOST. Perhaps the deaths will be made less meaningless by developments in the final three seasons, though I somehow doubt it. Characters were killed off in almost random fashion. At least there was no real sense about why they were killed off. It seems like someone said, "Well, we need to kill someone off." And some of the deaths seemed to be caused by off screen activities. Michelle Rodgriguez's death in Season Two was thought by many to be in response to a violation of probation that might have required some jail time and impinged on the shooting schedule (she claims she only signed up for one season). In Season Three Adewale Akinnuoye-Agbaje's great character Mr. Eko was killed reportedly because he was hated by all his fellow cast members and he hated them all in return (some anonymous cast members reported that he was dictatorial to the extent of telling other actors what they should do or how to speak their lines--all reports are that no one was sad to see him leave the set). But even so his death felt like he had been ripped from the show prematurely. And a major death in the season finale felt equally unnecessary. I believe that this will also influence future shows. I think "the body count" is a permanent fixture in any series with an adventure element, but I think that future shows will strive not to make the death of characters as superfluous as they have been in LOST. The final three seasons will all begin in the winter and be broadcast for sixteen straight weeks with no interruptions. I love this not merely because it means no dead times between episodes but because it puts definite limits on how much time they have left to finish the story. I think most fans of the show feel a lot better about how things are going now than they did last fall. Then the series seemed moribund and seemed almost to be drifting. Now it feels like it is heading somewhere definite. And it ended the season by doing something that all the really great shows do: it took a gigantic risk that changes everything. I look forward with excitement to what happens next. - Robert Moore (Chicago, IL USA) Release Date: 12/11/2007 Amazon Price: $33.49 (as of 07/13/2009) List Price: $39.99 Usually ships in 24 hours Evangeline Lilly Biography - Evangeline Lilly Bio Evangeline Lilly Timeline - Evangeline Lilly Life When Evangeline.. Nicole Evangeline Lilly (born August 3, 1979) is a Canadian actress, known best for her role as Kate Austen in the ABC drama, Lost. Lost - The Complete Second Season Matthew Fox, Evangeline Lilly." Lost - The Complete Second Season. - Mark Baker (Santa Clarita, CA United States) Release Date: 09/05/2006 Amazon Price: $33.49 (as of 07/13/2009) List Price: $39.99 Usually ships in 24 hours Evangeline Lilly Lost Lost - The Complete First Season Matthew Fox, Evangeline Lilly, Terry O'Quinn, Josh Holloway, Dominic Monaghan First? -- Robert Moore (Chicago, IL USA) Release Date: 09/06/2005 Amazon Price: $34.99 (as of 07/13/2009) List Price: $39.99 Usually ships in 24 hours Quick, what do you think of Evangeline Lilly? Evangeline Lilly Movies The Long Weekend Amazon Price: $13.49 (as of 07/13/2009) ABC News 20/20 Getting "Lost" Amazon Price: $19.95 (as of 07/13/2009) The Latest News on Evangeline Lilly Evangeline Lilly Videos Evangeline Lilly YouTube Evangeline Lilly Photos - Evangeline Lilly Pictures Evangeline Lilly Pics - Evangeline Lilly Images Evangeline Lilly Filmography - Evangeline Lilly Films Evangeline Lilly Movies - Evangeline Lilly TV Appearances - The Hurt Locker (2009) - Afterwards (2008) - The Long Weekend (2005) as Simone - Freddy vs. Jason (2003) as School Student - Stealing Sinatra (2003) - The Lizzie McGuire Movie (2003) as Police Officer Television - Lost (2004 to present) as Kate Austen - Reviews on the Run (2002) (guest appearance) as JD Girl - Smallville (2002) (guest appearance) as girl in The Talon coffee shop wearing a pink top (Season 2, Episode 11 "Visage") - Smallville (2002) (guest appearance) sat behind Clark Kent in a cinema wearing a bright yellow t-shirt - Smallville (2002) (guest appearance) as Wade's Girlfriend (Season 1, Episode 13 Kinetic) - Tru Calling (2003) (guest appearance) as Party Guest - Kingdom Hospital (2004) (guest appearance) as Benson's Girlfriend - Punk'd (2007) as herself
http://www.squidoo.com/evangeline-lilly-bio
crawl-002
refinedweb
2,354
66.37
React Native is a platform that enables you to build native mobile apps using JavaScript. As the name implies, it uses React, so composing mobile interfaces is similar to using React on the web. Rather than creating components using HTML tags, it has its own set of components that bind to native UI components. React Native can be tricky to set up, especially when it comes to push notifications. You need to set up certificates and dive into Objective-C for iOS and Java for Android. However, Expo eliminates the need for this. Expo means you can create React Native apps and deploy them to the app stores using only JS. Additionally, it offers an SDK with access to native functionality such as notifications, camera, contacts, location and so on. It also provides access to some UI components that aren’t included in the React Native core, but are often used, such as icons, blurred views and so on without needing to write a line of native code. Expo can also create submission-ready app builds without needing to build using Xcode or Android Studio. If you haven’t used either before, they can be a scary place, especially if you’re just getting started with React Native – or aren't confident in general with how to make an app. Apps (or projects) can also be published to Expo rather than submitting to an app store, which lets users try the project through the Expo mobile client. Don’t worry, this will be covered later! 01. Set up Expo Installing Expo is a straightforward process. Head over to the site and download the latest XDE, then install the mobile app on your phone or tablet. The mobile client enables running apps on a real device via their app without needing any developer licences or certificate setup upfront. You can then publish apps to the Expo service so that users can run your finished projects via the Expo mobile client, without needing to go through the App Store and Google Play review processes. Expo does support building stand-alone apps that can manually be published to the App Store or Play Store, however you would require developer accounts for the platforms you release on. Apple’s Developer Program costs £79 per year and Google’s Play Console costs a one-time fee of $25 USD. 02. Create your first app This XDE gives us detailed information for the project. The left window is the React Native Packager and the right window shows any logs from devices Let’s get to business and create our first app with Expo. Open up the Expo XDE, select Project > New Project and select the blank template. Install all the dependencies and start the React Native packager, which bundles all the assets. From here, you can publish the app, share the app to the Expo mobile client or run the app on a simulator. This app is going to involve creating push notifications, so you're going to share to the Expo mobile client, as simulators don’t support them. Click the share button to get a QR code to scan or the option to send a link via SMS or email. Scan the QR code with the Expo client. This runs the app through the Expo client and it opens up to a screen with: ‘Open up App.js to start working on your app!’ With the app running, open up the project in your favourite code editor and open up App.js. Find the text above and change it to the classic ‘Hello world!’ , then save. You should see the app reload automatically and have the text ‘Hello world!’ now visible. Magic! While you are developing, shaking the device will reveal a developer menu, which has helpful options for debugging. From this menu you can also return to the Expo home – helpful to exit out of the app or switch to another. 03. Add notifications Now you have a basic app set up and running, let’s add the ability to send and receive notifications. This would normally require Objective-C and Java, integrating a third-party service to send the notifications and you would also need to set up certificates for iOS and create keys for Android. First, you need to import permissions and Notifications from Expo and create a new function in the app class: import { Permissions, Notifications } from ‘expo’; export default class App extends React.Component { async registerForPushNotifications() { } } Here you use the async keyword to make use of the ES2017 async/await feature – React Native has Babel working under the hood so you can leverage all the new JavaScript goodness. Now, let’s ask for notification permissions and retrieve the Expo push token – to identify the device – so that Expo knows where to send push notifications: async registerForPushNotifications() { const result = await Permissions.askAsync(Permissions.REMOTE_NOTIFICATIONS); if (result.status !== ‘granted’) { return; } const token = await Notifications.getExpoPushTokenAsync(); console.log(token); } Here, you will ask for notification permission using the Permissions.askAsync() method. The alert will only show on iOS as Android enables notifications by default. Despite this, you actually still need to run through the same logic on Android, as it’s possible to turn notifications off. Next, check the response. If the notifications aren’t granted then you can stop and return out of the function. If notifications are granted, you get the push token from the Expo service. Finally, you can log the token out for later use to test notifications and this will get logged to the Expo XDE. It’s worth noting at this point that – on iOS – this alert can only be triggered once by design; so, if a user denies permission, you may want to consider adding a custom notice or alert before returning out of the function. In order to enable notifications at this point, the user will need to go to the app settings (in our case, this will be Expo) > Notifications and enable them from there. Because of this, you will need to delete the Expo mobile client and then reinstall if you wish to test the permission alert on iOS more than once. Additionally, Expo provides a method called Permissions.getAsync() and this works in a similar way to Permissions.askAsync(), just without firing off the iOS alert. This could be useful if you want to check the status first to create a custom flow that will ask you for permissions, for example. If the iOS alert has previously been triggered (remember, it can only be fired once per app install), the Permissions.askAsync() will return the same information as Permissions.getAsync(), so in our use case we don’t need to use Permissions.getAsync(). You then need to call this function in the componentDidMount lifecycle method so that it runs on app launch: componentDidMount() { this.registerForPushNotifications(); } Next, if you accept the notification permission, your app will be able to receive local and remote notifications. Local notifications are then set up and sent via the device directly to the app and don’t require an internet connection. Remote notifications come from a server and send the notification via the Apple Push Notification System (APNS) or Google Cloud Messaging (GCM) services. This process will also require access to an internet connection to receive them. Expo has a service to send notifications at; all you need to do is send some data to it. This will then look a little like the following: { // The push token. “to”: <token from the app> // The notification title. “title”: “Notification title”, // The notification body. “body”: “Notification body”, // Pass in data as an object, this can be used when handling the notification. “data”: { “value”: “Hello world!” } } The Expo toolkit makes sending and receiving push notifications effortless Before we test a notification, we will configure our app to handle notifications if the app is open. By design, iOS and Android don’t show a notification when an app is open. You can also handle any data sent in the push message here, in case we need to act on it. Let’s create a new function to handle this and alert the data value property: handleNotification(notification) { alert(notification.data.value); } Then in our componentDidMount lifecycle method you need to set up a listener, which calls the notification handler we just created: componentDidMount() { this.registerForPushNotifications(); Notifications.addListener(this.handleNotification); } You’re now set. Let’s send a push notification with a cURL request: curl \ -X POST \ -H “Content-Type: application/json” \ -d ‘{ “to”: “<TOKEN>”, “title”: “Notification title”, “body”: “Notification body”, “data”: { “value”: “Hello world!” } }’ \ You can grab this cURL command here. If you’re not familiar with cURL, you can send the request using something like Postman, which uses a GUI for sending requests. Once the request has sent, you should now see the notification come through. There are no limitations on using the Expo push service. In a real-world scenario, it’s encouraged to use the batch API by sending an array of push notifications, in batches of 100, to be sent out for efficiency: [ { // notification object as above }, { // another notification object } ] 04. Publish with Expo Running an app on a device is a no-fuss experience. Just scan a QR code with the Expo app and it loads right away So, now that we have a shiny new React Native app (even if it doesn’t do much yet!), how do we let others use it without running it from Expo XDE? The simplest way is by clicking Publish from the XDE. This will publish the project to Expo and a link will appear on your profile, which you can find at<username>. When you are visiting the published project link, there will be a QR code to scan (located among the same options as the XDE) with the Expo mobile client, just like during development. If we want to update the app, all we need to do is republish and the changes will be available to the user when running the app again. This article was originally published in issue 297 of net, the world's best-selling magazine for web designers and developers. Subscribe to net here. Want to learn more ways you can master React?:
https://www.creativebloq.com/how-to/jump-start-react-native-with-expo
CC-MAIN-2019-13
refinedweb
1,710
53.1
Cutting Edge - Predictive Fetch with jQuery and the ASP.NET Ajax Library By Dino Esposito | February 2010 Last month I discussed the implementation of master-detail views using the new features coming with the ASP.NET Ajax Library. The list of new features includes a syntax for client-side live data binding and a rich rendering component, exemplified by the DataView client control. By putting these features together, you can easily build nested views to represent one-to-many data relationships. In the ASP.NET Ajax Library, the mechanics of master-detail views are largely defined in the logic of the DataView component and in the way the component handles and exposes its events. This month I’ll go one step further and discuss how to implement a common and popular AJAX design pattern—predictive fetch—on top of the ASP.NET Ajax Library. Basically, I’ll extend last month’s example—a relatively standard drill-down view into customer details—to automatically and asynchronously download and display related orders, if any exist. In doing so, I’ll touch on some jQuery stuff and take a look at the new jQuery integration API in the ASP.NET Ajax Library. Without further ado, let’s review the context and build a first version of the example. The Demo to Expand Figure 1 shows the application scenario on top of which I’ll add predictive fetch capabilities. Figure 1 The Initial Stage of the Sample Application The menu bar allows the user to filter customers by initial. Once a selection is made, a smaller list of customers is displayed through an HTML bulleted list. This is the master view. Each rendered item has been made selectable. Clicking on one causes the details of the customer to be displayed in the adjacent detail view. This is where I left off last month. As you can see in Figure 1, the user interface now shows a button to view orders. I proceed from here onward. The first decision to be made is architectural and relates to the use-case you are considering. How would you load order information? Is it information you have downloaded already along with customer information? Are orders attached to customers? Is lazy loading an option here? The code we’re considering is expected to run on the client side, so you can’t rely on lazy loading facilities built into some object/relational mapping (O/RM) tools such as Entity Framework or NHibernate. If orders are to be lazy-loaded, then any code is up to you. On the other hand, if you can assume that orders are already available on the client—that is, orders have been downloaded along with the customer information—then you’re pretty much done. All you need to do is bind orders data to some HTML template and go. Obviously, things get much more interesting if lazy loading is what you want. Let’s work out this scenario, then. As a side note, you should know that lazy loading is fully supported if you get your data through the AdoNetDataContext object. (I’ll cover this in future articles.) For more information, be sure to look at asp.net/ajaxlibrary/Reference.Sys-Data-AdoNetServiceProxy-fetchDeferredProperty-Method.ashx. A New Way to Load Script Libraries For years, Web developers had been left alone to figure out which script files a page would need. That wasn’t a daunting task, because the limited amount of simple JavaScript code that was used made it quite easy to check whether a required file was missing. Increased amounts of complicated JavaScript code in Web pages introduced the problem of splitting scripts among distinct files and, subsequently, referring to them properly to avoid nasty runtime “undefined object” errors. Many popular JavaScript libraries have been providing facilities in this regard for years now. For example, the jQuery UI library has a modular design and allows you to download and link only the pieces you really need. This same capability is also offered by the scripts that make up the ASP.NET Ajax Library. The script loader, however, is something more. The script loader provides a number of extra services and builds on the partitioning of large script libraries into smaller pieces. Once you tell the loader about the libraries you’re interested in, you delegate to the loader any tasks related to the correct ordering of required files. The script loader loads all required scripts in parallel and then executes them in the right order. In this way, the loader saves you from any “missing object” exceptions and provides the fastest way to handle scripts. All you need to do is list the scripts you want. Hey, wait a moment. If I have to list all scripts I need, what are the benefits of using a loader? Well, what the loader requires is nowhere near to what is required in the well-known process of linking assemblies to a project. You link assembly A and let the Visual Studio 2008 loader figure out any static dependencies. Here’s a code snippet that shows how to deal with the script loader: The Sys.require method takes an array of references to the scripts you want to link to your page. In the preceding example, you are instructing the loader to take care of two scripts—dataView and jQuery. As you can see, however, the call made to the method Sys.require doesn’t include any Web server path to any physical .js files. Where’s the path, then? Scripts that will work with the ASP.NET Ajax Library loader are required to define themselves to the loader and inform it when they load completely. Registering a script with the loader doesn’t cause any round trip, but is simply a way to let the loader know that it may be called upon to manage a new script. Figure 2 includes an excerpt from MicrosoftAjax.js that shows how jQuery and jQuery.Validate are registered with the loader. loader.defineScripts(null, [ { name: "jQuery", releaseUrl: ajaxPath + "jquery/jquery-1.3.2.min.js", debugUrl: ajaxPath + "jquery/jquery-1.3.2.js", isLoaded: !!window.jQuery }, {) } ]); Of course, you can use this approach with custom scripts and client controls as well. In that case, you need to reference the loader-specific definition of the script in addition to your actual script. A loader-specific definition includes release and debug server paths of the script, a public name used to reference it, dependencies and an expression to be evaluated in order to test whether the library loaded correctly. In order to use the script loader component, you need to reference a new JavaScript file named start.js. Here’s an excerpt from the sample application that uses a mix of old and new script-loading techniques: <asp:ScriptManagerProxy <Scripts> <asp:ScriptReference <asp:ScriptReference <asp:ScriptReference <asp:ScriptReference </Scripts> </asp:ScriptManagerProxy> You reference the start.js file using a classic <script> element. Other scripts can be referenced using the ScriptManager control, plain <script> elements or the Sys.require method. As you can see from the code snippet above, there’s no reference to the jQuery library. In fact, the jQuery library is referenced programmatically from the page-specific JavaScript file linked via the ScriptManager. Another interesting feature in the ASP.NET Ajax Library is the availability of jQuery features through the Sys namespace and, conversely, the exposure of Microsoft client components as jQuery plugins. This means, for example, that you can register an event handler for the ready event—a typical jQuery task—using the Sys.onReady function, as shown here: Given all these new features, the typical start-up of a JavaScript file to be used as an extension of a Web page looks like the following: An even simpler approach is possible, however. You can use Sys.require to load a control like a DataView instead of the files that implement it. The script loader would load these files automatically based on the dependencies defined for the DataView. Let’s focus on predictive fetch. Handling the Customer Selection To obtain the user interface shown in Figure 1, you use HTML templates and attach data to data-bound placeholders using the DataView component. Customer details are automatically shown due to DataView-based data binding when you click on a listed customer. Orders, however, are not bound directly via the DataView. This is because of the requirements we set at the beginning of the article—by design, orders are not downloaded with the customer information. To fetch orders, therefore, you need to handle the change of selection within the template associated with the DataView. Currently, the DataView doesn’t fire a selection-changed event. The DataView does provide great support for master-detail scenarios, but much of it happens automatically, even though you can create custom commands and handlers (see asp.net/ajaxlibrary/Reference.Sys-UI-DataView-onCommand-Method.ashx). In particular, you set the sys:command attribute to “select” on any clickable elements that can trigger the details view, as shown here: When the element is clicked, it fires an onCommand event within the DataView and, as a result, the content of the selectedData property is updated to reflect the selection. Subsequently, any parts of the template that are bound to selectedData are refreshed. Data binding, however, entails updating displayed data, not executing any code. As mentioned, when a command is fired within the DataView, the onCommand event is raised internally. As a developer, you can register your own handler for the event. Unfortunately, at least with the current prerelease version of the DataView component, the command handler is invoked before the selected index property is updated. The net effect is that you can intercept when the details view is about to show, but you have no clue about the new content being shown. The only goal of the event seems to be giving developers a way to prevent the change of selection should some critical conditions not be verified. An approach that works today, and that will continue working in the future regardless of any improvements to the DataView component, is the following. You attach an onclick handler to any clickable elements of the master view and bind an extra attribute to contain any key information that is helpful. Here’s the new markup for the repeatable portion of the master view: The markup presents two changes. First, it now includes a new sys:commandargument attribute; second, it has a handler for the click event. The sys:commandargument attribute contains the ID of the customer that has been selected. The ID is emitted through data binding. The attribute where you park the ID doesn’t have to be sys:commandargument necessarily; you can use any custom attribute as well. The click handler is responsible for fetching orders according to whatever loading policy you have set. Figure 3 shows the source code of the orders loader. function fetchOrders(elem) { // Set the customer ID var id = elem["commandargument"]; currentCustomer = id; // Check the jQuery cache first var cachedInfo = $('#viewOfCustomers').data(id); if (typeof (cachedInfo) !== 'undefined') return; // Download orders asynchronously $.ajax({ type: "POST", url: "/mydataservice.asmx/FindOrders", data: "id=" + id, success: function(response) { var output = response.text; $('#viewOfCustomers').data(id, output); if (id == currentCustomer) $("#listOfOrders0").html(output); } }); } The fetchOrders function receives the DOM element that was clicked. First, it retrieves the value of the agreed attribute that contains the customer ID. Next, it checks whether orders already exist in the jQuery client cache. If not, it finally proceeds with an asynchronous download. It uses a jQuery AJAX method to arrange a POST request to a Web service. I’m assuming in this example that the Web service employs the “HTML Message” AJAX pattern and returns plain HTML ready to be merged with the page. (Note that this is not necessarily the best approach and mostly works in legacy scenarios. From a pure design perspective, querying an endpoint for JSON data would generate a much lighter payload.) If the request is successful, the orders markup is first cached and then displayed where expected (see Figure 4). Figure 4 Fetching and Displaying Orders Figure 4 shows only a screenshot and doesn’t really explain what’s going on. As you click to select a customer to drill down, the request for orders fires asynchronously. Meanwhile, the details of the customer are displayed. As you may recall, there’s no need to download customer information on demand, as that information is downloaded in chunks as the user clicks on the high-level menu of initials. Downloading orders may take a while and is an operation that doesn’t give (or require) any feedback to the user. It just happens, and it’s completely transparent to the user. The whole point of the predictive fetch pattern is that you fetch information in advance that the user may possibly request. To represent a true benefit, this feature has to be implemented asynchronously and, from a usability perspective, it’s preferable if it’s invisible to the user. Let’s focus on the most common tasks a user would perform on the user interface in Figure 4. A user would typically click to select a customer. Next, the user would likely spend a few moments reading the displayed information. As the user views the display, the orders for the selected customer are silently downloading. The user may, or may not, request to view orders immediately. For example, the user may decide to switch to another customer, read the information and then switch back to the first one, or perhaps navigate to yet another. In any case, by simply clicking to drill down on a customer, the user triggers the fetch of related orders. What happens to downloaded orders? What would be the recommended way of dealing with them upon download? Dealing with Fetched Orders Frankly, I can’t find a clearly preferred way to deal with preloaded data in such a scenario. It mostly depends on the input you get from your stakeholders and end users. However, I would suggest that orders be automatically displayed if the user is still viewing the customer for which the download of orders has just completed. The $.ajax method works asynchronously and is attached to its own success callback. The callback receives orders downloaded for a given customer, but at the time the callback runs, the displayed customer may be different. The policy I used ensures that orders are displayed directly if they refer to the current customer. Otherwise, orders are cached and made available for when the user comes back and clicks the “View orders” button. Let’s have a second look at the success callback for the fetch procedure: The id variable is local to the $.ajax method and is set with the ID of the customer for which orders are being fetched. The currentCustomer variable, though, is a global variable that is set any time the fetch procedure is executed (see Figure 3). The trick is that a global variable may be updated from multiple points, so the check at the end of the download callback makes sense. What’s the role of the “View orders” button that you see in Figure 1 and Figure 4? The button is there for users wanting to see orders for a given customer. By design, in this example displaying orders is an option. Hence, a button that triggers the view is a reasonable element to have in the user interface. When the user clicks to view orders, order information may, or may not, be available at that time. If orders are not available, it means that—by design—the download is pending, or it has failed for some reason. The user is therefore presented with the user interface shown in Figure 5. Figure 5 Orders are Not Yet Available If the user remains on the same page, orders display automatically as the download completes successfully, as in Figure 4. The First Displayed Customer One thing remains to finish the demo of this master-detail scenario enriched with predictive fetch capabilities. The DataView component allows the specification of a particular data item to be rendered in selected mode on display. You control the item to be initially selected via the initialselectedindex attribute of the DataView component. This is shown below: In this case, the first customer retrieved for the selected initial is automatically displayed. Because the user doesn’t need to click, no automatic fetch of orders occurs. You can still access the orders of the first customer by clicking on it again. In this way, the first customer will be processed as any other displayed customer. Is there any way to avoid such behavior? For the user, clicking to view orders wouldn’t be sufficient for the first customer. In fact, the button handler limits the information that can be displayed to what’s in the cache. This is done to avoid duplicated behavior in code and to try to do everything once and only once. This is shown here: function display() { // Attempt to retrieve orders from cache var cachedInfo = $('#viewOfCustomers').data(currentCustomer); if (typeof (cachedInfo) !== 'undefined') data = cachedInfo; else data = "No orders found yet. Please wait ..."; // Display any data that has been retrieved $("#listOfOrders0").html(data); } The preceding display function has to be slightly improved to trigger order fetches in the case of no current customer selection. This is another reason for having a global currentCustomer variable. Here’s the edited code of the display function: function display() { if (currentCustomer == "") { // Get the ID of the first item rendered by the DataView currentCustomer = $("#itemCustomer0").attr("commandargument"); // The fetchOrders method requires a DOM element. // Extract the DOM element from the jQuery result. fetchOrders($("#itemCustomer0")[0]); } // Attempt to retrieve orders from cache ... // Display any data that has been retrieved ... } If no customer has been manually selected, the sys:commandargument of the first rendered item is read. The quickest way of doing that is leveraging the naming convention for the ID of items rendered through the DataView. The original ID is appended with a progressive number. If the original ID is itemCustomer, then the ID of the first element will be itemCustomer0. (This is an aspect of DataView that may change in the final release version of the ASP.NET Ajax Library.) Note also that fetchOrders requires you to pass in a DOM element. A jQuery query returns a collection of DOM elements. That’s why in the code above you need to add an item selector. Finally, note that another solution is also possible, if it’s acceptable to you that no customer is initially displayed after data binding. If you set the initialselectedindex attribute of the DataView to -1, no customer would be initially selected. As a result, to see order details, you need to click on any customer, which would trigger the fetch of associated orders. Wrapping Up The DataView is a formidable instrument for data binding in the context of Web client applications. It’s specifically designed for common scenarios such as master-detail views. It doesn’t support every possible scenario, however. In this article, I showed some code that extends a DataView solution through the implementation of the “predictive fetch” pattern. [The ASP.NET Ajax Library beta is available for download at ajax.codeplex.com. It is expected to be released at the same time as Visual Studio 2010.—Ed.] Dino Esposito is the author of the upcoming “Programming ASP.NET MVC” from Microsoft Press and co-authored “Microsoft .NET: Architecting Applications for the Enterprise” (Microsoft Press, 2008). Based in Italy, Esposito is a frequent speaker at industry events worldwide. Thanks to the following technical expert for reviewing this article: Stephen Walther Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
https://msdn.microsoft.com/en-us/magazine/ee291635.aspx
CC-MAIN-2019-39
refinedweb
3,316
55.84
I’m Loving the .NET Fx ARM Saturday, April 10, 2004 I enjoyed the annotations in The .NET Framework Standard Library Annotated Reference so much that I read them all in one sitting (in the bathtub, if you must know…). The annotations are where the authors put in their 2 cents about a particular class, method or property and it was very interesting. Here’s what I learned just from that part of the book: - You shouldn’t use ApplicationException for custom exceptions. Derive from Exception instead. - Because ICloneable.Clone doesn’t define whether it returns a deep or a shallow copy, it’s practically worthless. - Prefer UTF-8 because it takes up the same amount of space to represent ASCII characters as ASCII, but can also represent all Unicode characters (I knew that UTF-8 took up the same amount of space but hadn’t yet jumped to “prefer”). - I was reminded of the System.Convert class, which does all of the same conversions that the various type-specific ToXxx and Parse methods do, but puts them all in one place. - There’s another use for the private interface method implementation syntax in C#: class Enum : IEnumerator { object IEnumerator.Current { get { return this.foo; } } ... }This syntax hides an interface method from general use unless you cast to the interface base class: Enum enum = new Enum();The thing I never thought of that this enables is that it lets you override based on return type, which isn’t normally allowed in C#: object obj1 = enum.Current(); // compile-time error object obj2 = (IEnumerator)enum.Current(); // this works class Enum : IEnumerator { object IEnumerator.Current { get { return this.foo; } Foo Current { get { return this.foo; } } ... } This private method implementation syntax lets you return the generic type as part of the interface implementation so that you can plug into standard idioms, e.g. foreach, but a more specific type for users of the type directly, saving the cast: Enum enum2 = new Enum();This will be less interesting as generics take over, but still, it’s a useful trick. Foo foo1 = (Foo)((IEnumerator)enum.Current()); Foo foo2 = enum.Current(); // no cast required - DateTime is always assumed to represent local time, so if you convert to Universal time twice, you’ll change the value each time. - Since it’s possible to cast any number to an enum value, e.g. TrueFalseEnum tfe = (TrueFalseEnum)42, make sure to always check that an enum is a legal value. Theoretically this is a pain, but in practice, I tend to check enum values with switch statements, making the default case the error case, so it’s good to know that my code does the right thing. - The Threading.Interlocked class. - Jim Miller has way more birthdays than we do. : ) - Jeff Richter agrees with me that the ThreadStart delegate needs an object parameter. - I should be using CompareInfo instead of ToLower for case-insensitive comparisons (although they don’t show how): using System.Globalization; using System.Threading;… CompareInfo compare = Thread.CurrentThread.CurrentCulture.CompareInfo; if( compare.Compare(“foo”, “FOO”, CompareOptions.IgnoreCase) == 0 ) { Console.WriteLine(“the same”); } If I can get that much out of just the annotations, imagine what I could learn if I read the whole thing. : ) Discuss
http://sellsbrothers.com/12499/
CC-MAIN-2021-43
refinedweb
538
56.86
04 May 2007 09:14 [Source: ICIS news] SINGAPORE (ICIS news)--Producer of expandable polystyrene (EPS) Ming Dih Group is on track to complete the expansion of its No 2 unit in July, said a source at the Taiwanese company on Friday.?xml:namespace> Capacity at the unit will be increased from 70,000 tonnes/year to 106,000 tonnes/year. The company operates two lines in ?xml:namespace> “Expansion work on the No 2 unit has already started and would be completed in the second half of July,” said the source. Ming Dih has two EPS units in It also has a 30
http://www.icis.com/Articles/2007/05/04/9026267/taiwans-ming-dih-on-track-to-complete-expansion.html
CC-MAIN-2015-18
refinedweb
104
69.11
Question 6 : How do I delete a file? (And other file questions...) Use os.remove(filename) or os.unlink(filename); Question 7 : How do I copy a file? The shutil module contains a copyfile() function. Question 8 : How do I read (or write) binary data? or complex '>' in the format string forces big-endian data; the letter 'h' reads one "short integer" (2 bytes), and 'l' reads one "long integer" (4 bytes) from the string. Question 9 : How do I run a subprocess with pipes connected to both input and output? Use the popen2 module. For example: import popen2 fromchild, tochild = popen2.popen2("command") tochild.write("input\n") tochild.flush() output = fromchild.readline() Question 10 : How can I mimic CGI form submission (METHOD=POST)?.'
http://www.indiaparinam.com/python-programming-language-question-answer-python-interview-questions/set-5/page1
CC-MAIN-2019-18
refinedweb
124
70.5
This is part of a series I started in March 2008 - you may want to go back and look at older parts if you're new to this series. Where we left off last time, we transform numbers into Fixnum objects, and turn + and other operators into method calls. In theory at least - we haven't actually tested that functionality properly in practice. So lets start with the simplest possible test: Can we call Fixnum#+ at all? Let's compile 5 + 3. Nope. Doesn't work. For starters, we need to add require statements for Numeric and Integer (in 1c51f99) But that's not enough. So lets try dipping down to our low level syntax, and compile this decidedly non-Ruby test: %s(printf "test: %d\n" (callm (call __get_fixnum 5) + ((call __get_fixnum 3)))) The (callm ...) is explicitly meant to do roughly what we should expect the 5 + 3 to get turned into. (You can compile this with ./compile [filename] in the working directory, and get /tmp/[filename] out if everything worked) To get it to return something other than junk, let's actually implement Fixnum#+ still using runtime.c to provide add() (in 326ae54): def + other %s(call add (@value (callm other __get_raw))) end With that in place there's just one little wrinkle to get the example above to compile: Our %s() syntax can't handle operators. So for the sake of convenience, let us fix that by allowing any valid method name (in a7d4c31) by modifying sexp.rb: def parse_exp ws - ret = expect(Atom, Int, Quoted) || parse_sexp + ret = expect(Atom, Int, Quoted, Methodname) || parse_sexp ws return ret end At this point you should be able to compile and run the test above, and get the expected test: 8 output. So why is 5 + 3 on its own segmentation faulting? Luckily we have a debugging tool in the --norequire --parsetree options: %s(do (callm (sexp (call __get_fixnum 5)) + (sexp (call __get_fixnum 3))) ) And there we see an ugly pitfall of this syntax. Spotted it? The second argument to callm is a parameter list enclosed in parantheses, but here we pass it another list/array that happens to be an expression. It does reveal rather weak error handling and consistency checking for this syntax that we probably should do something about, but for now let us focus on the operators. With a one line fix we get the output we need (in 1d9df53:): --- a/transform.rb +++ b/transform.rb @@ -86,7 +86,7 @@ class Compiler next :skip if e[0] == :sexp if OPER_METHOD.member?(e[0].to_s) - e[3] = e[2] + e[3] = E[e[2]] e[2] = e[0] e[0] = :callm end And now the parse output looks like this: %s(do (callm (sexp (call __get_fixnum 5)) + ( (sexp (call __get_fixnum 3)))) ) }}} Which is far more reasonable, and actually doesn't segfault. So let us test that it returns the right value too: -ruby- a = 5 + 3 %s(printf "%d\n" a) ... and that works. And that is wrong. Sigh. What we forgot above is that this whole mess means we have to re-wrap integers into an object every time we get one back. The example above should have been: a = 5 + 3 %s(printf "%d\n" (callm a __get_raw)) And that still fails. Until we do this to Fixnum#+ (in 50fcf26) def + other - %s(call add (@value (callm other __get_raw))) + %s(call __get_fixnum ((call add (@value (callm other __get_raw))))) end And now our latest test should work. I made you look through this intensely frustrating sequence for a reason - it's easy to present the more polished description of this process, but it's important to keep an eye on just how many details are there to trip you up when generating code and trying to debug things based on effects on generated code through several layers like this. It is one of the hardest parts of writing a compiler. You may have noticed that despite a growing number of unit-tests for the parser components, there's far fewer tests for the code generation, despite the fact that it is far trickier to track down bugs there. One of the upcoming parts will start addressing the difficulties of unit-testing the code generation. In the meantime, it is now finally time to do what we came here for, and re-implement runtime.c as compiler intrinsics + Ruby. Time to break out our old friend gcc, and look at some asm output. To start with, here's add: .globl add .type add, @function add: pushl %ebp movl %esp, %ebp movl 12(%ebp), %edx movl 8(%ebp), %eax addl %edx, %eax popl %ebp ret .size add, .-add Oh. That was simple. It "just" sets up the stack frame, moves the arguments into %edx and %eax respectively, adds them, and frees the stack frame. We've put up with runtime.c and do a function call on every add for that? Of course, when we start trying to implement the Ruby semantics, an extra function call doesn't look all that bad (we'll have a lot of fun trying to optimize that out at some point in the future...), since there'll be a full blown method call to begin with. Not all of them are as simple (do gcc -o runtime.s -S runtime.c and take a look at runtime.s yourself), but none of the few runtime functions we have are all that much work. So let us get to it. First we add a :add keyword in the Compiler class, and include a new file to hold our arithmetic keywords so we don't keep messing up compiler.rb (in dc2bbd8) diff --git a/compiler.rb b/compiler.rb index 00c857f..059abf4 100644 --- a/compiler.rb +++ b/compiler.rb @@ -10,6 +10,8 @@ require 'transform' require 'set' require 'print_sexp' +require 'compile_arithmetic' + class Compiler attr_reader :global_functions attr_accessor :trace @@ -21,7 +23,7 @@ class Compiler :do, :class, :defun, :defm, :if, :lambda, :assign, :while, :index, :let, :case, :ternif, :hash, :return,:sexp, :module, :rescue, :incr, :block, - :required + :required, :add ] We then add a simple implementation in compile_arithmetic.rb: def compile_add(scope, left, right) @e.with_register do |reg| src = compile_eval_arg(scope,left) @e.movl(src,reg) @e.save_result(compile_eval_arg(scope,right)) @e.addl(reg, :eax) end [:subexpr] end Note that this is flawed in many ways: We don't do any simplification even when both arguments are constants; we don't do anything to try to avoid using two registers (if one of the arguments is an integer, we can evaluate the other expression first, and just addl the integer to this result. We also for good measure comment out add() from runtime.c As usual, we're being lazy and the above works, as you can show by compiling this: %s(printf "test: %d\n" (add 5 3)) Next up we need to make use of this in Fixnum... And we'll find it doesn't work. It turns out the above version is too naive - our with_register is a "fake" start to a register allocator - it only hands out %edx. That's fine for now. But what isn't fine is that it doesn't actually allocate it in any way. You'll find it gets clobbered elsewhere. So we'll need to implement very basic register allocation. For now we'll just add %ecx to our set of registers to hand out, and actually keep track of them, and raise an error if we run out (which we will sooner or later - at which point we'll need to implement more sophisticated register allocation). We modify emitter.rb like this (in d84e905): @@ -83,6 +83,7 @@ class Emitter @out = out @basic_main = false @section = 0 # Are we in a stabs section? + @free_registers = [:edx,:ecx] end @@ -333,9 +334,16 @@ class Emitter end def with_register - # FIXME: This is a hack - for now we just hand out :edx, - # we don't actually do any allocation - yield(:edx) + # FIXME: This is a hack - for now we just hand out :edx or :ecx + # and we don't handle spills. + + @allocated_registers ||= Set.new + free = @free_registers.shift + raise "Register allocation FAILED" if !free + @allocated_registers << free + yield(free) + @allocated_registers.delete(free) + @free_registers << free end Basically we've changed from just blindly handing out %edx for code that needs a spare register, to handing out one of %edx or %ecx depending on which one is already in use. But if we try to allocate more than two, we're screwed. The reason I haven't added more registers is because I've not wanted to try to figure out to what extent they're "safe". E.g. we use %eax as our default scratch register all over the place. The normal way of handle running out of register is "register spilling". That is, we need to "spill" the registers into variables (this is reasonable if we've used the registers to cache variables), or push them onto the stack. The latter adds complexities in that it means we need to ensure we keep track of it when handling offsets to the stack pointer. For now this will do. If you try to compile this, you should find it works: a = 5 + 3 %s(printf "%d\n" (callm a __get_raw)) (This of course reveals that we now really should have some IO functions that can handle other types - so that will be one of the things to handle in one of the upcoming parts). So lets do the same thing for sub. I won't go over every detail as it's pretty much identical to add. The meat is in 0c6de44. Let's test it. a = 7 - 2 %s(printf "%d\n" (callm a __get_raw)) Compiling this gives -5. What gives? Well, the evaluation order monster reared its head. We didn't spot this for compile_add since addition is commutative. So lets look at compile_sub and figure out a fix we can hopefully also apply to compile_add (this might sound unnecessary, but consider that expected evaluation order matters a great deal in a language with side effects, and few languages have more potential side effects than one in which you can, if you please, redefine even basic arithmetic operators, or modify classes as they are being defined...) def compile_sub(scope, left, right) @e.with_register do |reg| src = compile_eval_arg(scope,left) @e.movl(src,reg) @e.save_result(compile_eval_arg(scope,right)) @e.subl(reg, :eax) end [:subexpr] end We evaluate the left hand, then save it to a register, and we evaluate the right hand. Then we subtract the left hand from the right, and that's obviously wrong... But if we switch the order, we leave the result in another register than our "result" %eax, and end up having to move it. We can fix that by evaluating the right side before the left, but that's the wrong evaluation order. We can try to save the result from the left hand evaluation in %eax and save the result of the right hand in %edx, which would solve the problem if it wasn't for the fact that we likely will clobber %eax when evaluating the left hand side. What we're seeing here is that choices made for simplicity early on are starting to really affect the code-generation. Here are two possible alternatives: %eaxas a scratch register and allow allocating it. This is still problematic as it will get clobbered by a lot of calls we might want to make to external code. It may still be the best choice in some cases. %eax. This is a much more viable choice for many situations. If we can return regfrom compile_sub, we push the problem up one level, and its quite possible that the next step up can make use of %edxdirectly. We may want to go further and not restrict it to variables, as that would allow us to get rid of many instances of that awful call *%eaxthing that we currently do for calls. It may be time to look into this in one of the forthcoming parts - as much as a focus of this series has been about being "lazy" and deferring changes like these, we're now at a point where not being able to do the above actually makes the code more convoluted. For this part, lets just bite the bullet and fix the order and do a movl at the end for compile_sub (and this allows us to leave #compile_add untouched, since the evaluation order of the operands to the Ruby + operator are left intact): def compile_sub(scope, left, right) @e.with_register do |reg| src = compile_eval_arg(scope,left) @e.movl(src,reg) @e.save_result(compile_eval_arg(scope,right)) @e.subl(:eax,reg) @e.save_result(reg) end [:subexpr] end This works, but is not very DRY, when compared with compile_add, so lets add a helper, since we'll keep relying on this same pattern for the remaining bits and pieces (in a23f0d0): def compile_2(scope, left, right) @e.with_register do |reg| src = compile_eval_arg(scope,left) @e.movl(src,reg) @e.save_result(compile_eval_arg(scope,right)) yield reg end [:subexpr] end def compile_add(scope, left, right) compile_2(scope,left,right) do |reg| @e.addl(reg, :eax) end end def compile_sub(scope, left, right) compile_2(scope,left,right) do |reg| @e.subl(:eax,reg) @e.save_result(reg) end end We'll mostly ignore mul, since it's pretty much the same as add except using the imull instruction. See 5036b94 for the details. Div, however, looks curious, when we disassemble the code generated by gcc for runtime.c. With the normal prolog and epilog stripped out from the div function: movl 8(%ebp), %eax movl %eax, -8(%ebp) movl -8(%ebp), %edx movl %edx, %eax sarl $31, %edx idivl 12(%ebp) leave ret What is going on here? For starters, it doesn't look very optimized. 8(%ebp) and 12(%ebp) are the operands passed in to the function. The moves back and worth between %eax, -8(%ebp) and %edx looks fairly wasteful. As it turns out, idivl performs signed division of a double-word (64 bit on i386) combination of %edx and %eax registers by a third register or a memory location. The sarl instruction does an Arithmethic Shift Right on a Long. For those who are still not up on assembler, consider an "arithmetic shift" effectively moving the bits of the binary representation of the value one step in the specified direction. The prefix "arithmetic" is usually used on assembler instructions to indicate that the shift copies in the furthermost bit in the "empty" slots. So if you shift one bit right with an arithmetic shift right, it is equivalent to dividing by two regardless whether or not the value is negative or positive, as the left-most (furthermost for a shift right) bit is the sign bit. Here is a page that illustrates it with a little graphic So what is happening here is that it is filling all of %edx by the sign bit from %eax in order to handle negative values correctly. Lets take a stab at implementing that. I end up with this horrible monstrosity, which is another reason to take another look at the code generation shortly (in 95d642e) def compile_div(scope, left, right) # FIXME: We really want to be able to request # %edx specifically here, as we need it for idivl. # Instead we work around that below if we for some # reason don't get %edx. @e.with_register do |reg| src = compile_eval_arg(scope,left) @e.movl(:eax,reg) So far, so good. We've just evaluated the left hand, and moved the result out of the way. Then we evaluate the right hand. @e.save_result(compile_eval_arg(scope,right)) And then the ugly stuff starts. Depending on what registers we got allocated, we spit out variout variations up to and including temporarily pushing %edx onto the stack, since we need to have %edx free. This is a mess. One way to make it simpler would be to be able to request two registers, of which one should be %edx and the other doesn't matter. Then we could be assured a cleaner way of setting this up. So another task for when we clean up the code generator shortly. @e.with_register do |r2| if (reg == :edx) @e.movl(:eax,r2) @e.movl(:edx,:eax) divby = r2 else divby = reg if (r2 != :edx) save = true @e.pushl(:edx) end @e.movl(reg, :edx) @e.movl(:eax,reg) @e.movl(:edx,:eax) end @e.sarl(31, :edx) @e.idivl(divby) # NOTE: This clobber the remainder made available by idivl, # which we'll likely want to be able to save in the future @e.popl(:edx) if save end end [:subexpr] end It is finally time for the rest of the operators (for now), and wiping out runtime.c!
https://hokstad.com/compiler/29-the-operators-3
CC-MAIN-2021-21
refinedweb
2,810
70.13
kill -- send signal to a process Standard C Library (libc, -lc) #include <sys/types.h> #include <signal.h> int kill(pid_t pid, int sig); The kill() system call sends the signal given by sig to pid, a process or a group of processes. The sig argument process with the same session ID as the caller.). The kill() function returns the value 0 if successful; otherwise the value -1 is returned and the global variable errno is set to indicate the error. The kill() system call will fail and no signal will be sent if: [EINVAL] The sig argument. getpgrp(2), getpid(2), killpg(2), sigaction(2), raise(3), init(8) The kill() system call is expected to conform to ISO/IEC 9945-1:1990 (``POSIX.1''). The kill() function appeared in Version 7 AT&T UNIX. FreeBSD 5.2.1 April 19, 1994 FreeBSD 5.2.1
https://nixdoc.net/man-pages/FreeBSD/man2/kill.2.html
CC-MAIN-2022-27
refinedweb
147
73.98
Lesson 8 - Constant methods in C++ In the previous lesson, Arena with warriors in C++, we finished our object-oriented arena. In today's tutorial, we're going to find out what constant methods are and when to use them. Constant methods As the name suggests, constant method is a method that does not change instance data. This applies for all getters - they only get data, they don't change anything. Therefore, all getters should be marked as constant. The Warrior.alive() method is also a kind of getter because it only determines whether the warrior has enough health. All methods that don't change data should be marked as constant. We do this simply by adding the const keyword after the method name (to both the .h and .cpp files). For example, it'd look like this for our Warrior class: Warrior.h #ifndef __WARRIOR_H_ #define __WARRIOR_H_ #include <string> #include "RollingDie.h" using namespace std; class Warrior { private: float health; float max_health; float damage; float defense; RollingDie ¨ public: Warrior(float health, float damage, float defense, RollingDie &die); bool alive() const; float attack(Warrior &second) const; float getHealth() const; float getMaxHealth() const; float getDamage() const; float getDefense() const; }; #endif Warrior.cpp #include "Warrior.h" Warrior::Warrior(float health, float damage, float defense, RollingDie &die) : die(die), health(health), max_health(health), damage(damage), defense(defense) {} bool Warrior::alive() const { return this->health > 0; } float Warrior::attack(Warrior & second) const { float defense_second = second.defense + second.die.roll(); float damage_first = this->damage + this->die.roll(); float injury = damage_first - defense_second; if (injury < 0) injury = 0; second.health -= injury; return injury; } float Warrior::getHealth() const { return this->health; } float Warrior::getMaxHealth() const { return this->max_health; } float Warrior::getDamage() const { return this->damage; } float Warrior::getDefense() const { return this->defense; } Note that even the attack() method is constant - it does not change instance data, it just changes parameter data. Rules Why are we doing this? The const keyword will make sure that the method is properly implemented. If we defined the setSidesCount() method for the RollingDie class and set it as constant, the compiler would report us the following message (for Visual Studio): error C2228: left of '.sides_count' must have class/struct/union note: type is 'const RollingDie *const ' note: did you intend to use '->' instead? The compiler will ensure that the function cannot change anything, and inform other programmers that the data is safe (it can't be changed). What other rules apply? We can only call constant methods from a constant method. This means that when we set a getter as constant, we can't call a setter from it (otherwise the keyword const wouldn't make much sense since the method would modify the data). There's also a third and most important rule: only constant methods can be called on constant objects. For example, for our warrior. Let's assume that the following code should work: const Warrior warrior(100, 8, 5, die); float damage = warrior.getDamage(); float defense = warrior.getDefense(); float aggressivity_level = damage - defense; Why would we assume that? Although the warrior is constant, we want to get the damage and defense only to read them - this shouldn't be a problem because the methods don't change the data and the constant isn't broken. But it's us who knows that, but the compiler doesn't know it yet. In order for this code to work, we must set the getters as constant - the compiler will then be certain that the method won't change anything and can be called on a constant object. Pointers For pointers and references, the situation becomes a little more complicated. By adding the const keyword to the method name, we can imagine that it marks all the fields as constant. For demonstration, let's consider the following class: class User { int age; char* name; void printName() const; }; Inside the printName() method, the const will make the this pointer of the User const * const type (as opposed to User * const). Just to make sure, we always read pointers backwards, so User const * const is a "constant pointer to a constant user", while User * const is a "constant pointer to a user". We also have two syntax options, so the following two samples are equivalent: User const * and const User *. So we can imagine a constant method as all the fields are constant: class User { const int age; char * const name; }; Note the type of the name field. It's a constant pointer (we can't change the address where it points to), but it's not a pointer to a constant value - we are still able to change the name. Constant methods do not guarantee that instance data will not change - it only ensures that fields do not change. Problems occur when working with references as well. Let's suppose we want to return the user's age by reference. It's not usual, but for some reason we want it. We can't use a reference to int because the field type is const int and the types must match. This means that we must use a constant reference: class User { int age; char* name; void printName() const; const int& getAge() const; }; Now this code part would work, and when we think about it, it literally didn't work until now - it respects constant values where it makes sense. That would be all for today's shorter lesson. In the source code, constants were added to the getters and to a few other methods (for which it made sense). Next time, we'll continue with the code, so I recommend downloading the source code below the article. You will be sure that we'll work with the same source code. Next time, in the Static Class Members in C++ lesson, we have static members waiting for us. Download Downloaded 0x (1.08 MB) Application includes source codes in language C++ No one has commented yet - be the first!
https://www.ict.social/cplusplus/course/oop/constant-methods-in-cplusplus
CC-MAIN-2020-05
refinedweb
995
61.87
Java agents are a special type of class which, by using the Java Instrumentation API, can intercept applications running on the JVM, modifying their bytecode. Java agents aren’t a new piece of technology. On the contrary, they’ve existed since Java 5. But even after all of this time, many developers still have misconceptions about this feature—and others don’t even know about it. In this post, we remedy this situation by giving you a quick guide on Java agents. You’ll understand what Java agents are, what are the benefits of employing them, and how you can use them to profile your Java applications. Let’s get started. Defining Java Agents Java agents are part of the Java Instrumentation API. So to understand agents, we need to understand what instrumentation is. Instrumentation, in the context of software, is a technique used to change an existing application, adding code to it. You can perform instrumentation both manually and automatically. You can also do it both at compiling time and runtime. So, what is instrumentation good for? It’s meant to allow you to change code, altering its behavior, without actually having to edit its source code file. This can be extremely powerful and also dangerous. What you can do with that is left to you. The possibilities are endless. Aspect-Oriented Programming? Mutation testing? Profiling? You name it. With that out of the way, let’s focus again on Java agents. What are these things, and how do they relate to instrumentation? In short, a Java agent is nothing more than a normal Java class. The difference is that it has to follow some specific conventions. The first convention has to do with the entry point for the agent. The entry point consists of a method called “premain,” with the following signature: public static void premain(String agentArgs, Instrumentation inst) If the agent class doesn’t have the “premain” method with the signature above, it should have the following, alternative method: public static void premain(String agentArgs) As soon as the JVM initializes, it calls the premain method of every agent. After that, it calls the main method of the Java application as usual. Every premain method has to resume execution normally for the application to proceed to the startup phase. The agent should have yet another method called “agentmain.” What follows are the two possible signatures for the method: public static void agentmain(String agentArgs, Instrumentation inst) public static void agentmain(String agentArgs) Such methods are used when the agents are called not at JVM initialization, but after it. How to Write a Java Agent A java agent, in practice, is a special type of .jar file. As we’ve already mentioned, to create such an agent, we’ll have to use the Java Instrumentation API. Such an API isn’t new, as we’ve also mentioned before. The first ingredient we need to create our agent is the agent class. The agent class is just a plain Java class that implements the methods we’ve discussed in the previous section. To create our Java agent, we’re going to need a sample project. So, we’re going to create a silly, simple app that does just one thing: print the n first numbers of the Fibonacci sequence, n being a number supplied by the user. As soon as the application is up and running, we’re going to use a little bit of Java instrumentation to perform some basic profiling. Building Our Sample App For this project, I’m going to use the free community edition of the IntelliJ IDEA, but feel free to use whatever IDE or code editor you feel most comfortable using. So, let’s begin. Open the IDE and click on “Create New Project,” as you can see in the following picture: In the “create new project” window, select “Java” as the type of the project and click on “Next:” Then, on the next screen, mark the “Create project from template” box, select the “Command Line App” template for the application and click on “Next” again: After that, the only thing that’s left is to configure the name and location for the project and click on “Finish:” With our project created, let’s create the Fibonacci logic. Copy the following content and paste on your main class: package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner scanner = new Scanner(System.in); System.out.println("How many items do you want to print?"); int items, previous, next; items = scanner.nextInt(); previous = 0; next = 1; for (int i = 1; i <= items; ++i) { System.out.println(previous); int sum = previous + next; previous = next; next = sum; } } } The application is super simple. It starts asking the user for the number of items they wish to print. Then, it generates and prints the Fibonacci sequence with as many terms as the number the user informed. Of course, the application is very naive. It doesn’t check for invalid items, for one. Another problem is that if the user enters a large enough value, it causes the program to overflow the upper limit of int. You could use long or even the BigInteger class to handle larger inputs. None of that matters for our example, though, so feel free to add those improvements as an exercise, if you wish to do so. Starting Our Java Agent Our sample application is up and running, so we’re ready to create our Java agent. Repeat the process of creating a new project. Call it “MyFirstAgentProject.” Create a new class by going to File > New Java Class, like in the following image: Then, name the class “MyFirstAgent” and press enter. After that, replace the content of the created file with what follows: package com.company; import java.lang.instrument.Instrumentation; public class MyFirstAgent { public static void premain(String agentArgs, Instrumentation inst) { System.out.println("Start!"); } } Now we’ll have to create a custom manifest. Let’s start by adding Maven support to our project. Right-click on the “MyFirstAgentProject” module. Then, click on “Add Framework Support.” On the “Add Frameworks Support” window, check “Maven” and click on OK. After that, IntelliJ will create a pom.xml file and open it so you can edit. Add the following content to the pom.xml file and save it: <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>3.2.0</version> <configuration> <archive> <manifestFile>src/main/resources/META-INF/MANIFEST.MF</manifestFile> </archive> </configuration> </plugin> </plugins> </build> <properties> <maven.compiler.source>1.6</maven.compiler.source> <maven.compiler.target>1.6</maven.compiler.target> </properties> In the code above, we add the “maven-jar-plugin” plugin to our pom file, as well as configuring the location for our manifest file. Now we need to create it. To do that, copy the following content, paste it on a new file, and save it as “src/main/resources/META-INF/MANIFEST.MF.” Manifest-Version: 1.0 Premain-Class: com.company.javaagent.helloworldagent.MyFirstAgent Agent-Class: com.company.javaagent.helloworldagent.MyFirstAgent We’re almost there! With the manifest creation out of the way, let’s now perform a maven install. On the “Maven” tool window, expand the “Lifecycle” folder, right-click on install and then check the “Execute After Build” option. With that setting, the IDE will perform a maven install every time we build the application. So, let’s build it! Go to Build > Build Project, or use the CTRL + F9 shortcut. If everything went well, you should be able to find the resulting jar file, under “target.” We’ve successfully finished creating the jar file for our first Java agent. Now, let’s test it! We’re now going to use our agent, and to do that, we need to load it. There are two ways to load a Java agent, and they are called static and dynamic loading. Static loading happens before the application runs. It invokes the premain method, and it’s activated by using the -javaagent option when running the application. Dynamic loading, on the other hand, is activated with the application already running, which is done using the Java Attach API. Here we’re going to use static loading. With the sample application open in IntelliJ IDEA, go to Run > Edit Configurations…, as you can see in the image below: A new window will be shown. There, you can, as the name suggests, configure many different options regarding the running and debugging of the application. What you have to do now is to add the -javaagent option to the VM options field, passing the path to the agent’s jar file as an argument to it. After configuring the path, you can click on OK and then run the project as usual. If everything went right, that’s the output you should see: As you can see, the message “Start!” that we’ve defined using the premain method, was printed just before the main method of the application being run. That means that our agent was successfully loaded. Start! How many items do you want to print? 10 0 1 1 2 3 5 8 13 21 34 Process finished with exit code 0 What Comes Next? You might wonder if all that we’ve seen is too much trouble for little result. The answer to that is a firm “no.” First, you must keep in mind that our example here is the equivalent of a “Hello world” for Java agents. Things can get—and they do get—a lot more complex than this. As we’ve already mentioned, there are very sophisticated tools that make use of the Java Instrumentation API. Secondly, keep in mind that there are many additional tools one can use to really extend the power of Java instrumentation to new levels and allow you to do things like bytecode manipulation, for instance.Also, consider that much of the heavy lifting has already been done for you, regarding profiling. There are a lot of powerful tools out there, coming in different types that cater to virtually all profiling needs you might have. -
https://stackify.com/what-are-java-agents-and-how-to-profile-with-them/
CC-MAIN-2020-05
refinedweb
1,700
65.32
How to Convert a JSON String to Python Dictionary? In this article, we will learn how to convert a JSON string to dictionary in Python. We will use built-in function available in Python for JSON and some related custom examples as well. Let's first have a quick look over the full form of JSON, and how JSON files are used.. Convert JSON String to Python Dictionary using json.load() Function Python provides json.load() method to convert a JSON file contents to Python Dictionary. Converting JSON files to a dictionary is quite an easy task in python as python script provides a built-in JSON module and JSON has a built-in load() function to carry out the conversion process. Using the same JSON function, we can also convert a JSON string given as input by the user to a dictionary. This method is used when the programmer already has a JSON file with structured data. Syntax json.load(file object) Sample JSON File This JSON file, we will convert into Python Dictionary. { "Science" : [ { "name" : "Flora" "age" : 18 "marks": 87 }, { "name" : "Frank" "age" : 18 "marks": 76 } ], "Commerce" : [ { "name" : "David" "age" : 18 "marks": 92 }, { "name" : "Denver" "age" : 19 "marks": 65 } ] } Example In the following example, we are going to read a JSON file and then print out the data in the form of a dictionary. This json.load() function reads the string from the JSON file. The json.load(file) function creates and returns a new Python dictionary with the key-value pairs in the JSON file. Then, this dictionary is assigned to the data variable, and the result is displayed. You can also check the type of the variable using the built-in type() function of Python. import json #opens the JSON file with open("sample.json") as json_file: data = json.load(json_file) #type of data variable print("Type:", type(data)) #prints the data in dictionary print("Science Students:", data['Science']) print("Commerce Students:", data['Commerce']) Type: <class 'dict'> Science Students: [{'name': 'Flora', 'age': 18, 'marks': 87}, {'name': 'Frank', 'age': 18, 'marks': 76}] Commerce Students: [{'name': 'David', 'age': 18, 'marks': 92}, {'name': 'Denver', 'age': 19, 'marks': 65}] As we have read the JSON file, converted the JSON string into a Python Dictionary, we can now access the data using the index as shown below. This is how we can print the nested data. #access dictionary using index print(data["Science"][0]) {'name': 'Flora', 'age': 18, 'marks': 87} Points to Remember: 1. For reading any JSON file and to work with JSON (string, or file containing JSON object) you must import JSON module in python script. 2. Your JSON file and your Python script must be in the same directory. 3. Your JSON file must follow the JSON standard, so it has to have double quotes rather than single quotes, else it will return JSONDecodeError. Conclusion: In the above code, we learn to read a JSON string and convert the data to Python Dictionary. Now, we can access the data using indexes as we do in Python Dictionary.
https://www.studytonight.com/python-howtos/how-to-convert-a-json-string-to-python-dictionary
CC-MAIN-2022-21
refinedweb
509
69.52
You need to sign in to do that Don't have an account? How to define PDF filename? When rendering a page as a PDF the filename of the PDF is the name of the page which is not a good thing. The problem with this is that the name is not unique and can cause confusion with the user. I'm working on a quoting app that renders a quote as a PDF. Some broswers open the PDF embed, others automatically launch your PDF reader, and some prompt you to save or open. The problem is that if opened or saved theses files are all saved as qoute.pdf, qoute[1].pdf, quote[2].pdf, quote[3].pdf. The problem should be obvious. Ideally you should be able to define the name of the generated PDF but I haven't figured out how to do this. Thanks, Jason In a Visualforce email template the following code works well: <messaging:attachment If the name field of the related object is an autonum field, each pdf filename is guaranteed to be unique. I don't know if something similar can be done for a standard Visualforce page (i.e. not a VF email template). It doesn't work, <apex:page> does not have filename attribute. I also tried <apex:page, doesn't work either. This code generated a file called '0018000000PlLFm.xls' for me in FF3: <apex:page </apex:page> Same. No Solution yet??? Regards, Sait Hello, Folloving solution will work: <apex:page But there is an issue. Instead of generating pdf file on server side you will force SF to generate file on local machine. It will take more time and result not predictable if you use Static Resources ... The best way is to ask SF team to add filename attribute for <apex:page>. Since filename attribute is not yet supported for apex:page, has someone already found way to solve this pdf naming problem? Would it be possilbe to take care of the pdf file name with controller extension? I need this feature too. Where you guys able to figure out a way to define the name of the generated(using renderAs) PDF? bump. i need this too. Has anyone come up with a solution for this? If you add following lines of code in the Constructor of the Controller of the PDF page, it should do the trick - String yourFileNameName = 'nameofFile.pdf' ; Apexpages.currentPage().getHeaders().put( 'content-disposition', 'inline; filename=' + yourFileName ); Hope this helps.. Thank you for posting the suggestion but I could not get it to work. The only affect it had was to disable the "Save" on the pdf file. I have no VF experience so I suspect that might be the first problem ;-) Here is the general layout: public with sharing class SVMX_ServiceOrderPrintSummary { private String sId = System.currentPageReference().getParameters().get('id'); private static SVMXC__Service_Order__c WOHeader; .... public SVMX_ServiceOrderPrintSummary() { populateWOHeader(); populateWorkOrderLines(); // inserted your code here String yourFileName = 'nameofFile.pdf' ; Apexpages.currentPage().getHeaders().put( 'content-disposition', 'inline; filename=' + yourFileName ); } .... } This code works with a slight modification (typo). Line 2 of your code should read + yourFileNameName instead of + yourFileName. So in my case I did the following: ============================== public class PurchaseReqPDFExt { private Id appId; public PurchaseReqPDFExt(ApexPages.StandardController c) { this.appId = (Id) c.getId(); // I want to append the Name field as a suffix to PurchaseOrder filename so it will read Req-004PurchaseOrder.pdf (or whatever) // Below I make the SQL query and get the fields I want from the Purchase_Req__c object and it will be called Request variable // Then I insert the variable.field append it to my original filename and when I render the VF page and save, it appends the Name field to PurchaseOrder.pdf String yourFileNameName = Request.Name + 'PurchaseOrder.pdf' ; Apexpages.currentPage().getHeaders().put( 'content-disposition', 'inline; filename=' + yourFileNameName ); } public Purchase_Req__c Request { get { return [SELECT Name, Id, Status__c, Purchase_Amount__c, Actual_Amount__c, LastModifiedDate, CreatedDate, Requester_Name__c, Requester_Name__r.Name, Requestor_email__c, Description__c, Last_Name__c, Justification__c, First_Name__c, DivisionAcc__c, DivisionAcc__r.Name, Date__c, Address__c, Phone__c FROM Purchase_Req__c Where Id =:this.appId]; } } I decided to make it even shorter and hard code the file name. Here is the code and then the results: When I go to save the PDF, it defaults to the VF page name (as usual) which is "SVMX_VF_ServiceOrderPrintSummary.pdf" Thanks for your help though! So I doublechecked after your response and looks like that code works on Google Chrome, but does not work on Internet Explorer or Firefox. Will need to do more research to see if there is any other attribute that we can set as content-disposition instead of 'inline' to make it work for all the browsers. Please let me know if you find a solution for IE Thanks OK - will do and thanks again... Hi, Try using attachment instead of inline. This works in IE too. However the behavior is changed in the manner that Save/Open dialog appears instead of pdf rendering in browser . APathak, String yourFileName = 'MyFile.pdf' ; Apexpages.currentPage().getHeaders().put( 'content-disposition', 'attachment; filename=;' + yourFileName ); This is also not working.It's downaloaded as pdf but if i try open file file format nort support something this is kind of error we got it. Here's my solution that definitely worked but bare in mind I already overwrote the GUI with a view state VF page... -first make 2 VF pages (One for View state (ViewObject) and one for Print state (PrintObject)) -make an extension controller that BOTH will reference so that whatever you use to populate stuff in ViewObject can be populated PrintObject -put whatever content you need on each and format the layout for the print state however you'd like (one of the upsides of doing it this way) -put the renderAs="pdf" in page tag of PrintObject -put this commandlink in ViewObject <apex:commandLink Print </apex:commandLink> -put this is in the Extension controller //Property set in Controller public Object__c record {get;set;} //Print method for view state commandlink public PageReference print(){ Id objId = this.record.Id; String objName = this.record.Name; PageReference printPage = new PageReference('/apex/PrintObject'); printPage.getParameters().put('id', objId); printPage.getHeaders().put('content-disposition', 'attachement; filename='+objName+'.pdf'); return printPage; } Not only do you have a printable PDF of whatever record the resulting filname for something such as custom Sales Order object if the name is SO100100 will be SO100100.pdf! Simba (" id="ext-gen52" style="font-family:DSCDefaultFontRegular; color:#777777; text-decoration:none; font-weight:700) String yourFileNameName = 'nameofFile.pdf' ; Apexpages.currentPage().getHeaders().put( 'content-disposition', 'inline; filename=' + yourFileName ); Thank you. It works with extension in my case. public class theNameOfExtansion { public theNameOfExtansion (ApexPages.StandardController stdController){ String downloadingFileName = 'theNameOfDocument.pdf'; Apexpages.currentPage().getHeaders().put( 'content-disposition', 'inline; filename=' + downloadingFileName ); }
https://developer.salesforce.com/forums/?id=906F000000095BPIAY
CC-MAIN-2021-21
refinedweb
1,114
57.27
By Peter A. Bromberg, Ph.D. Microsoft has released MSXML 4.0 RTM (release to manufacturing) of what is now called Microsoft XML Core Services. As with any new technology, there are both plusses and minuses. I'll try to cover both here. First, some of the good things: New functionality added This is the RTM (supported, production quality) release of Microsoft® XML Core Services (MSXML) 4.0, formerly called the Microsoft XML Parser. This version has a number of improvements compared to MSXML 3.0: * Support of the World Wide Web (W3) Consortium final recommendation for XML Schema, with both DOM and SAX. * Substantially faster XSLT engine. Microsoft claims tests show about x4, and for some scenarios x8, acceleration. * New and substantially faster SAX parser, which is also available in DOM with the NewParser property [use dom.setProperty(*NewParser,* true)]. * Better support for sequential architectures and streamed XML processing based on SAX 2, including DOM-SAX integration and HTML generation. * Improved standards conformance and scalability. Specifically, the following old, non-conformant technologies have been removed: old XSL with XSLPattern; uuid namespaces for XDR; the proprietary XmlParser object; and the normalize-line-breaks property in SAX. Corresponding standard technologies (XSLT 1.0, XPath 1.0, and http-based namespaces for XDR and SAX2) have been available since MSXML 3.0. *'ll have to "bite the bullet" and should use DOMDocument.4.0 to get MSXML 4.0 functionality. * A number of bug fixes. * The msxml4 RTM cab file, which enables redistribution of MSXML over Internet or intranet. Because version-independent ProgIDs existed in previous releases, but have been removed from MSXML 4.0, installing this release will make them nonfunctional. To avoid this, run the following two commands from the command line before installing this release. regsvr32 /u msxml4.dll regsvr32 msxml3.dll This will restore version-independent ProgIDs to point to MSXML 3.0. It is important that you do this before installing this release. New Features: XML Schema Support The latest version of Microsoft XML Core Services (MSXML 4.0) complies with the World Wide Web Consortium (W3C) 2001 XML Schema Recommendation. Numerous features in this version provide XML Schema support. You can validate XML against XML schemas in both SAX and DOM using either an external schema cache or xsi:schemaLocation/xsi:noNamespaceSchemaLocation attributes. Although there is no XPath 2.0 yet, MSXML 4.0 provides extension functions, permitted by standards, to support handling XSD types in XPath and XSLT. MSXML 4.0 also provides a way to reach schema information in validated documents using type discovery in SAX and the Schema Object Model (SOM) in DOM. In addition to this added support for the final XML Schema (XSD) recommendation, MSXML continues to support XML-Data Reduced (XDR) and document type definition (DTD) validations. Performance Improvements: MSXML 4.0 provides the new, faster XML parser and a substantially improved XSLT engine. You can use the new parser with DOM by setting the NewParser property to True, e.g. xmlDoc.setProperty("NewParser", true). The new parser does not yet support asynchronous DOM load or DTD validation. However, everything else functions the same way as with the old parser, only faster. Microsoft's tests of MSXML 4.0 showed about 2x better performance for pure parsing, and more than 4x better performance for XSLT transformation. Other test claims I've seen show up to an 8x better performance. My own tests confirm this, but I would be hesitant to make such claims, although my tests -- mostly because of time constraints -- have been less than "Lab Quality". Extended Support For Sequential XML also build a DOM tree out of SAX events. This feature allows you to closely integrate DOM and SAX in your applications. For developers who really want to integrate complex XML processing, this opens up a whole new world of efficiencies. A new object, MXHTMLWriter, enables you to output HTML using a stream of SAX events in much the same way that the <xsl:output> element in XSLT can generate HTML from a result tree. The new MXHTMLWriter object provides support for high-performing Active Server Pages, which can now read XML documents with a SAX reader, put those documents through custom SAX filters, and output the data to the user as an HTML page. The MXHTMLWriter object is also useful for a number of other applications such as the manual generation of HTML pages. You will also find corresponding classes in the .NET platform to do things like this, with fine - grained control over the output. The XSLT processor can now accept the SAX content handler as output. This means that the chain of SAX filters can directly process the transformed XML. You can use this feature to eliminate XML regeneration and reparsing, allowing XML documents to be consumed immediately by an application when incoming XML documents need to be translated to the same dictionary. The new MXNamespaceManager object allows you to programatically track namespace declarations and resolve them, either in the current context or in the context of a particular DOM node. Of course MSXML supports namespaces and can automatically resolve names of elements and attributes, but there are many cases in which an attribute's value or an element's content uses qualified names. The MXNamespaceManager object tracks and resolves these qualified names. Separate WinHTTP Version 5.0 Component The former functions of the ServerHTTPRequest component are now provided by the separate WinHTTP component. This is a new server-side component (which has been in separate BETA as a stand-alone component for some months, with its own newsgroup on MS) and which provides reliable HTTP stack functionality. Without the WinHTTP component, ServerHTTPRequest and DOM/SAX with server-side mode can not access HTTP-based data. When you install MSXML 4.0 on a computer running NT / 2000 / XP, you automatically get the WinHTTP component. Windows 98 / Me / 95 can not support WinHTTP. You can still install MSXML 4.0 on Windows 98 or Windows Me, but you will have to use the default DOM/SAX mode, or the XMLHTTPRequest object, which uses WinInet. The RTM release provides more compact, faster, and more conformant XML processing components to be used in a server-side environment with enterprise-grade systems. MSXML 4.0 can still be used on the client side in a controlled environment where you can ensure installation of the component on client machines, as in cases with Intranet or trusted site environments and applications. Now let's look at a few of the negatives (at least for some of us). NewParser Property to use new Parser with DOMDocument(Transitional): The NewParser internal property (flag)- True/False holds a value indicating whether MSXML uses the old or new internal parser when loading DOMDocument objects. IMPORTANT: If you want to use the new faster parser, you must explicitly set this flag to "true"! This property is transitional for the period while MSXML provides a choice of two internal parsers. The new parser is faster and more reliable, but it does not yet support asynchronous mode or DTD validation. Once the new parser has been updated to provide for asynchronous mode and DTD validation, this property will always be set to True. If the newParser property is set to False, which is the current default setting, subsequent DOMDocument objects are loaded using the old parser. If this property is set to True, subsequent DOMDocument objects are loaded using the new parser. For example, the following code makes a DOMDocument object use the new parser when loading. xmldoc.setProperty("NewParser", True ); Side-by-Side Functionality and the Removal of Replace Mode: XMLInst.exe is Gone! Until MSXML 3.0, you could use replace mode to make the latest MSXML component simulate MSXML 2.0, which Internet Explorer 5.0 and 5.5 used for presenting XML when browsing. Now replace mode is completely removed from MSXML 4.0 and cannot be used to substitute MSXML 2.0 for Internet Explorer. That means that if Internet Explorer is your default program to open XML files, and you double click on an XML document, Internet Explorer will not use MSXML 4.0 to show it. MSXML 4.0 can still be used in the traditional way to manipulate XML within an HTML page using a script. Removal of Version-Independent ProgIDs Version-independent ProgIDs are gone. This provides true side-by-side installation, compared to previous versions in which some ProgIDs were upgraded with the installation of a new version of MSXML. Now CreateObject("MSXML2.DOMDocument"*) will not instantiate the MSXML 4.0 DOM, but a previous version (if it is registered). If you want to use MSXML 4.0, you must use a ProgId like this: CreateObject(*MSXML2.DOMDocument.4.0*). With C++ and Visual Basic you will create "MSXML2.DOMDocument40". Similar changes will be necessary with all other MSXML objects in order to use the MSXML 4.0 version. The reason for this change is to improve the maintainability of code which otherwise would be error-prone when unexpected changes occur in the environment. Version-independent ProgIDs were great for developers trying MSXML, but proved risky in a production environment. If a user developed code with version-independent ProgIDs, expecting MSXML 3.0 to be in place, and later installed or reinstalled SQL Server, for example, they might find that they were using MSXML 2.6 instead of MSXML 3.0. Removing version-independent ProgIDs in MSXML 4.0 kind of "bites the bullet", eliminating such instability, and improves MSXML as a server-side enterprise-grade component. Side-by-Side Functionality The release version of MSXML 4.0 is shipped with the same DLL names (msxml4.dll, msxml4r.dll, and msxml4a.dll) as in preview releases. With version-independent ProgIDs removed, this guarantees that MSXML 4.0 will not interfere with any versions of MSXML (2.0, 2.6, or 3.0) previously installed. If you have code that uses version - independent ProgID's instantiating MSXML 3.0 or 3.0 SP1, the installation of MSXML 4.0 RTM should have no effect whatsoever. Windows XP Side-by-Side installation does this in an even more integrated manner. This means that with Windows XP, you can use the special side-by-side functionality to manage how your applications are using MSXML and which versions (starting from MSXML 4.0) that they are using. You'll create a Windows XP application manifest which will link your application to the specific version of MSXML 4.0. Important Notes If you have MSXML 4.0 Previews installed (April or July Technical Preview Release of MSXML 4.0): Direct upgrade from previews to RTM is still supported. You will have to uninstall preview, and after that install RTM. You might have to manually unregister and remove msxml4*.dll files from your system32 directory. To unregister the MSXML 4.0 preview, run: regsvr32 /u msxml4.dll If you have the MSXML 4.0 April Technical Preview Release of MSXML 4.0 installed: Note that version-independent ProgIDs have been removed from MSXML 4.0 (despite having existed in the April release), so installing this release will make them non-functional. This might seriously affect a number of applications (such as Microsoft Visual Studio® .NET setup) that use MSXML 3.0. To avoid this problem, run the following two commands from the command line and delete msxml4*.dll files from the system32 directory before installing this release. regsvr32 /u msxml4.dll regsvr32 msxml3.dll Note that after unregistration you might have to manually delete msxml4*.dll files from your system32 directory. Some Final Comments: MSXML 4.0 RTM represents what I believe is the final stage in the evolution of Microsoft's COM - compliant XML technologies, ex "Dot Net". Developers and organizations who want to be able to upgrade their code base will be well-advised to use global constants application - wide for the instantiation of Version Specific ProgID's. In. this manner an entire application's source code base can be upgraded by simply doing a search and replace of ,for example "strMSXMLVersionNum='.3.0' and changing the 3 to a "4". In addition, it is possible to use the temporary "NewParser" property in code in an "if" test such that: if(strMSXMLVersionNum=".4.0") xmlDoc.setProperty("NewParser", true). However, conversion may not be that smooth. Developers should be ready to run in to additional problems most of which will revolve around that fact that older, non standards - compliant code in XPATH - type statements, replacement - type variables such as "$any$" and other previously acceptable constructs will no longer work under MSXML 4.0 RTM. It's time to meet the W3C and bring the company's code base up to standards if you wanna play, and unfortunately it may not be a picnic.
http://www.eggheadcafe.com/articles/20011102.asp
crawl-002
refinedweb
2,141
58.38
Join the community to find out what other Atlassian users are discussing, debating and creating. Hi, I am looking for a method to update a parent issue custom field with data from a sub-task when I transition a sub-task. (perhaps some post-function?) Each of our issues will have one sub-task that will be assigned to a different user from the owner of the parent issue. Part of our workflow requires the owner of the sub-task to input some text into a custom field. When the sub-task is closed I would like to copy that text to a custom field on the parent issue. (for reporting purposes - we only look at the parent issue, but want to see this text update captured on the sub-task) I hope this makes sense and any suggestions are welcomed thanks Hi James, The JIRA Misc Workflow Extensions plugin has the post function you need : Best regards, Peter These are all paid extensions. Are there any plans to add a post-build step to copy a field value to parent?, in a sub-task when the parent changes, but you could reverse this too so a sub-task update, updates the parent) For more details see Another option is the newest version of the plugin Workflow Essentials for JIRA. It contains a post function to copy values from sub-task to parent (and vice versa if you need that). Run a script as postfunction on the tansition to Done on the subtask. def parentKey = issue.fields.parent.key def issueKey = issue.key def tradeTermsKey def parent = get("/rest/api/2/issue/" + parentKey) .header('Content-Type', 'application/json') .asObject(Map).body def parentUpdate = put('/rest/api/2/issue/' + parentKey) .header('Content-type', 'application/json') .body([ fields: [ customfield_11901: issue.fields.customfield_11901, // Customer updated from number field to text field customfield_11900: issue.fields.customfield_11900 // Vendor updated from number field to text field ] ]) .as.
https://community.atlassian.com/t5/Jira-questions/Can-I-copy-a-field-value-from-sub-task-to-the-parent-issue/qaq-p/64798
CC-MAIN-2020-24
refinedweb
321
55.34
Why does this code has a runtime error? #include <cstdio> #include <map> #include <string> #include <iostream> using namespace std; map <int, string> A; map <int, string>::iterator it; int main(){ A[5]="yes"; A[7]="no"; it=A.lower_bound(5); cout<<(*it).second<<endl; // No problem printf("%s\n",(*it).second); // Run-time error return 0; } If you use cout, it works fine; however, if you use printf it gives runtime error. How do I correct it? Thanks! You're passing in a std::string to something that expects a char * (as you can see from the documentation on printf, which is a C function, which doesn't have classes, let alone string). To access a const version of the underlying char *, use the c_str function: printf("%s\n",(*it).second.c_str()); Also, (*it).second is equivalent to it->second, but the latter is easier to type and, in my opinion, makes it clearer what's happening.
https://www.codesd.com/item/why-does-this-code-have-a-run-time-error-using-the-card-with-strings-c.html
CC-MAIN-2020-45
refinedweb
157
65.32
async_storeconfigs Whether to use a queueing system to provide asynchronous database integration. Requires that puppetqd be running and that 'PSON' support for ruby be installed. o Default: false authconfig The configuration file that defines the rights to the different namespaces and methods. This can be used as a coarse-grained authorization system for both puppet agent and puppet master. o Default: $confdir/namespaceauth.conf autoflush Whether log files should always flush to disk. o Default: false autosign Whether to enable autosign. Valid values are true (which autosigns any key request, and is a very bad idea), false (which never autosigns any key request), and the path to a file, which uses that configuration file to determine which keys to sign. o Default: $confdir/autosign.conf bindaddress The address a listening server should bind to. Mongrel servers default to 127.0.0.1 and WEBrick defaults to 0.0.0.0. bucketdir Where FileBucket files are stored. o Default: $vardir/bucket ca Wether the master should function as a certificate authority. o Default: true ca_days How long a certificate should be valid. This parameter is deprecated, use ca_ttl instead ca_md The type of hash used in certificates. o Default: md5 ca_name The name to use the Certificate Authority certificate. o Default: Puppet CA: $certname ca_port The port to use for the certificate authority. o Default: $masterport ca_server The server to use for certificate authority requests. It's a separate server because it cannot and does not need to horizontally scale. o Default: $server ca_ttl The default TTL for new certificates; valid values must be an integer, optionally followed by one of the units 'y' (years of 365 days), 'd' (days), 'h' (hours), or 's' (seconds). The unit defaults to seconds. If this parameter is set, ca_days is ignored. Examples are '3600' (one hour) and '1825d', which is the same as '5y' (5 years) o Default: 5y cacert The CA certificate. o Default: $cadir/ca_crt.pem cacrl The certificate revocation list (CRL) for the CA. Will be used if present but otherwise ignored. o Default: $cadir/ca_crl.pem cadir The root directory for the certificate authority. o Default: $ssldir/ca cakey The CA private key. o Default: $cadir/ca_key.pem capass Where the CA stores the password for the private key o Default: $caprivatedir/ca.pass caprivatedir Where the CA stores private certificate information. o Default: $cadir/private capub The CA public key. o Default: $cadir/ca_pub.pem catalog_format (Deprecated for 'preferred_serialization_format') What format to use to dump the catalog. Only supports 'marshal' and 'y. o Default: compiler cert_inventory A Complete listing of all certificates o Default: $cadir/inventory.txt certdir The certificate directory. o Default: $ssldir/certs certdnsnames The DNS names on the Server certificate as a colon-separated list. If it's anything other than an empty string, it will be used as an alias in the created certificate. By default, only the server gets an alias set up, and only for 'puppet'. certificate_revocation Whether certificate revocation should be supported by downloading a Certificate Revocation List (CRL) to all clients. If enabled, CA chaining will almost definitely not work. o Default: true certname The name to use when handling certificates. Defaults to the fully qualified domain name. o Default: magpie.puppetlabs.lan classfile The file in which puppet agent stores a list of the classes associated with the retrieved configuration. Can be loaded in the separate puppet executable using the --loadclasses option. o Default: $statedir/classes.txt client_datadir The directory in which serialized data is stored on the client. o Default: $vardir/client_data clientbucketdir Where FileBucket files are stored locally. o Default: $vardir/clientbucket clientyamldir The directory in which client-side YAML data is stored. o (mostly used during testing with TextMate), and false, which produces no color. o Default: ansi confdir The main Puppet configuration directory. The default for this parameter is calculated based on the user. If the process is running as root or the user that Puppet is supposed to run as, it defaults to a system directory, but if it's running as any other user, it defaults to being in the user's home directory. o Default: /etc/puppet config The configuration file for doc. o Default: $confdir. configprint Print the value of a specific configuration parameter. If a parameter is provided for this, then the value is printed and puppet exits. Comma-separate multiple values. For a list of all values, specify 'all'. This feature is only available in Puppet versions higher than 0.18.4. configtimeout How long the client should wait for the configuration to be retrieved before considering it a failure. This can help reduce flapping if too many clients contact the server at one time. o Default: 120 couchdb_url The url where the puppet couchdb database will be created o Default: csrdir Where the CA stores certificate requests o Default: $cadir/requests daemonize Send the process into the background. This is the default. o Default: true dbadapter The type of database to use. o Default: sqlite3 dbconnections The number of database connections for networked databases. Will be ignored unless the value is a positive integer. dblocation The database cache for client configurations. Used for querying within the language. o Default: $statedir/clientconfigs.sqlite3 dbmigrate Whether to automatically migrate the database. o Default: false dbname The name of the database to use. o Default: puppet dbpassword The database password for caching. Only used when networked databases are used. o Default: puppet dbport The database password for caching. Only used when networked databases are used. dbserver The database server for caching. Only used when networked databases are used. o Default: localhost dbsocket The database socket location. Only used when networked databases are used. Will be ignored if the value is an empty string. dbuser The database user for caching. Only used when networked databases are used. o Default: puppet downcasefacts Whether facts should be made all lowercase when sent to the server. o Default: false dynamicfacts Facts that are dynamic; these facts will be ignored when deciding whether changed facts should result in a recompile. Multiple facts should be comma-separated. o. o Default: production evaltrace Whether each resource should log when it is being evaluated. This allows you to interactively see exactly what is being done. o Default: false external_nodes An external command that can produce node information. The output must be a YAML dump of a hash, and that hash must have one or both of classes and parameters, where classes is an array and parameters is a hash. For unknown nodes, the commands should exit with a non-zero exit code. This command makes it straightforward to store your node mapping information in other data sources like databases. o Default: none factdest Where Puppet should store facts that it pulls down from the central server. o Default: $vardir/facts/ factpath Where Puppet should look for facts. Multiple directories should be colon-separated, like normal PATH variables. o Default: $vardir/lib/facter:$vardir/facts facts_terminus The node facts terminus. o Default: facter factsignore What files to ignore when pulling down facts. o Default: .svn CVS factsource From where to retrieve facts. The standard Puppet file type is used for retrieval, so anything that is a valid file source can be used here. o Default: puppet://$server/facts/ factsync Whether facts should be synced with the central server. o Default: false fileserverconfig Where the fileserver configuration is stored. o Default: $confdir/fileserver.conf filetimeout The minimum time to wait (in seconds) between checking for updates in configuration files. This timeout determines how quickly Puppet checks whether a file (such as manifests or templates) has changed on disk. o Default: 15 freeze_main Freezes the 'main' class, disallowing any code to be added to it. This essentially means that you can't have any code outside of a node, class, or definition other than in the site manifest. o Default: false genconfig Whether to just print a configuration to stdout and exit. Only makes sense when used interactively. Takes into account arguments specified on the CLI. o Default: false genmanifest Whether to just print a manifest to stdout and exit. Only makes sense when used interactively. Takes into account arguments specified on the CLI. o Default: false graph Whether to create dot graph files for the different configuration graphs. These dot files can be interpreted by tools like OmniGraffle or dot (which is part of ImageMagick). o Default: false graphdir Where to store dot-outputted graphs. o Default: $statedir/graphs group The group puppet master should run as. o Default: puppet hostcert Where individual hosts store and look for their certificates. o Default: $certdir/$certname.pem hostcrl Where the host's certificate revocation list can be found. This is distinct from the certificate authority's CRL. o Default: $ssldir/crl.pem hostcsr Where individual hosts store and look for their certificate requests. o Default: $ssldir/csr_$certname.pem hostprivkey Where individual hosts store and look for their private key. o Default: $privatekeydir/$certname.pem hostpubkey Where individual hosts store and look for their public key. o. o Default: false http_proxy_host The HTTP proxy host to use for outgoing connections. Note: You may need to use a FQDN for the server hostname when using a proxy. o Default: none http_proxy_port The HTTP proxy port to use for outgoing connections o Default: 3128 httplog Where the puppet agent web server logs. o. o Default: false ignoreimport A parameter that can be used in commit hooks, since it enables you to parse-check a single file rather than requiring that all files exist. o Default: false ignoreschedules Boolean; whether puppet agent should ignore schedules. This is useful for initial puppet agent runs. ldapattrs The LDAP attributes to include when querying LDAP for nodes. All returned attributes are set as variables in the top-level scope. Multiple values should be comma-separated. The value 'all' returns all attributes. o Default: all ldapbase The search base for LDAP searches. It's impossible to provide a meaningful default here, although the LDAP libraries might have one already set. Generally, it should be the 'ou=Hosts' branch under your main directory. ldapclassattrs The LDAP attributes to use to define Puppet classes. Values should be comma-separated. o Default: puppetclass ldapnodes Whether to search for node configurations in LDAP. See for more information. o Default: false ldapparentattr The attribute to use to define the parent node. o Default: parentnode ldappassword The password to use to connect to LDAP. ldapport The LDAP port. Only used if ldapnodes is enabled. o Default: 389 ldapserver The LDAP server. Only used if ldapnodes is enabled. o Default: ldap ldapssl Whether SSL should be used when searching for nodes. Defaults to false because SSL usually requires certificates to be set up on the client side. o Default: false ldapstackedattrs The LDAP attributes that should be stacked to arrays by adding the values in all hierarchy elements of the tree. Values should be comma-separated. o Default: puppetvar ldapstring The search string used to find an LDAP node. o Default: (&(objectclass=puppetClient)(cn=%s)) ldaptls Whether TLS should be used when searching for nodes. Defaults to false because TLS usually requires certificates to be set up on the client side. o Default: false ldapuser The user to use to connect to LDAP. Must be specified as a full DN. lexical Whether to use lexical scoping (vs. dynamic). o o. o Default: false localcacert Where each client stores the CA certificate. o Default: $certdir/ca.pem localconfig Where puppet agent caches the local configuration. An extension indicating the cache format is added automatically. o Default: $statedir/localconfig logdir The Puppet log directory. o Default: $vardir/log manage_internal_file_permissions Whether Puppet should manage the owner, group, and mode of files it uses internally o Default: true manifest The entry-point manifest for puppet master. o Default: $manifestdir/site.pp manifestdir Where puppet master looks for its manifests. o Default: $confdir/manifests masterhttplog Where the puppet master web server logs. o Default: $logdir/masterhttp.log masterlog Where puppet master logs. This is generally not used, since syslog is the default log destination. o Default: $logdir/puppetmaster.log masterport Which port puppet master listens on. o Default: 8140. o Default: 4294967290 mkusers Whether to create the necessary user and group that puppet agent will run as. o Default: false modulepath The search path for modules as a colon-separated list of directories. o Default: $confdir/modules:/usr/share/puppet/modules name The name of the application, if we are running as one. The default is essentially $0 without the path or .rb. o Default: doc node_name How the puppet master determines the client's identity and sets the 'hostname', 'fqdn' and 'domain' facts for use in the manifest, in particular for determining which 'node' statement applies to the client. Possible values are 'cert' (use the subject's CN in the client's certificate) and 'facter' (use the hostname that the client reported in its facts) o Default: cert node_terminus Where to find information about nodes. o Default: plain noop Whether puppet agent should be run in noop mode. o Default: false onetime Run the configuration once, rather than as a long-running daemon. This is useful for interactively running puppetd. o Default: false passfile Where puppet agent stores the password for its private key. Generally unused. o Default: $privatedir/password path The shell search path. Defaults to whatever is inherited from the parent process. o Default: none pidfile The pid file o Default: $rundir/$name.pid plugindest Where Puppet should store plugins that it pulls down from the central server. o Default: $libdir pluginsignore What files to ignore when pulling down plugins. o Default: .svn CVS .git pluginsource From where to retrieve plugins. The standard Puppet file type is used for retrieval, so anything that is a valid file source can be used here. o Default: puppet://$server/plugins pluginsync Whether plugins should be synced with the central server. o Default: false. o Default: pson prerun_command A command to run before every agent run. If this command returns a non-zero return code, the entire Puppet run will fail. privatedir Where the client stores private certificate information. o Default: $ssldir/private privatekeydir The private key directory. o Default: $ssldir/private_keys publickeydir The public key directory. o Default: $ssldir/public_keys puppetdlockfile A lock file to temporarily stop puppet agent from doing anything. o Default: $statedir/puppetdlock puppetdlog The log file for puppet agent. This is generally not used. o Default: $logdir/puppetd.log puppetport Which port puppet agent listens on. o Default: 8139 queue_source Which type of queue to use for asynchronous processing. If your stomp server requires authentication, you can include it in the URI as long as your stomp client library is at least 1.1.1 o Default: stomp://localhost:61613/ queue_type Which type of queue to use for asynchronous processing. o Default: stomp rails_loglevel The log level for Rails connections. The value must be a valid log level within Rails. Production environments normally use info and other environments normally use debug. o Default: info railslog Where Rails-specific logs are sent o Default: $logdir/rails.log report Whether to send reports after every transaction. o Default: reports The list of reports to generate. All reports are looked for in puppet/reports/name.rb, and multiple report names should be comma-separated (whitespace is okay). o Default: store reportserver (Deprecated for 'report_server') The server to which to send transaction reports. o Default: $server reporturl The URL used by the http reports processor to send reports o Default: req_bits The bit length of the certificates. o Default: 2048 requestdir Where host certificate requests are stored. o Default: $ssldir/certificate_requests rest_authconfig The configuration file that defines the rights to the different rest indirections. This can be used as a fine-grained authorization system for puppet master. o Default: $confdir/auth.conf route_file The YAML file containing indirector route configuration. o Default: $confdir/routes.yaml rrddir The directory where RRD database files are stored. Directories for each reporting host will be created under this directory. o Default: $vardir/rrd rrdinterval How often RRD should expect data. This should match how often the hosts report back to the server. o Default: $runinterval run_mode The effective 'run mode' of the application: master, agent, or user. o Default: master rundir Where Puppet PID files are kept. o Default: $vardir/run runinterval How often puppet agent applies the client configuration; in seconds. o Default: 1800 sendmail Where to find the sendmail binary with which to send email. o Default: /usr/sbin/sendmail serial Where the serial number for certificates is stored. o Default: $cadir/serial server The server to which server puppet agent should connect o Default: puppet server_datadir The directory in which serialized data is stored, usually in a subdirectory. o Default: $vardir/server_data servertype The type of server to use. Currently supported options are webrick and mongrel. If you use mongrel, you will need a proxy in front of the process or processes, since Mongrel cannot speak SSL. o Default: webrick show_diff Whether to print a contextual diff when files are being replaced. The diff is printed on stdout, so this option is meaningless unless you are running Puppet interactively. This feature currently requires the diff/lcs Ruby library. o Default: false signeddir Where the CA stores signed certificates. o Default: $cadir/signed smtpserver The server through which to send email reports. o Default: none splay Whether to sleep for a pseudo-random (but consistent) amount of time before a run. o Default: false splaylimit The maximum time to delay before runs. Defaults to being the same as the run interval. o Default: $runinterval ssl_client_header The header containing an authenticated client's SSL DN. Only used with Mongrel. This header must be set by the proxy to the authenticated client's SSL DN (e.g., /CN=puppet.puppetlabs.com). See for more information. o Default: HTTP_X_CLIENT_DN ssl_client_verify_header The header containing the status message of the client verification. Only used with Mongrel. This header must be set by the proxy to 'SUCCESS' if the client successfully authenticated, and anything else otherwise. See for more information. o Default: HTTP_X_CLIENT_VERIFY ssldir Where SSL certificates are kept. o Default: $confdir/ssl statedir The directory where Puppet state is stored. Generally, this directory can be removed without causing harm (although it might result in spurious service restarts). o Default: $vardir/state statefile Where puppet agent and puppet master store state associated with the running configuration. In the case of puppet master, this file reflects the state discovered through interacting with clients. o Default: $statedir/state.yaml storeconfigs Whether to store each client's configuration. This requires ActiveRecord from Ruby on Rails. o Default: false strict_hostname_checking Whether to only search for the complete hostname as it is in the certificate when searching for node information in the catalogs. o Default: false summarize Whether to print a transaction summary. o Default: false syslogfacility What syslog facility to use when logging to syslog. Syslog has a fixed list of valid facilities, and you must choose one of those; you cannot just make one up. o Default: daemon tagmap The mapping between reporting tags and email addresses. o Default: $confdir/tagmail.conf tags Tags to use to find resources. If this is set, then only resources tagged with the specified tags will be applied. Values must be comma-separated. templatedir Where Puppet looks for template files. Can be a list of colon-seperated directories. o Default: $vardir/templates thin_storeconfigs Boolean; wether storeconfigs store in the database only the facts and exported resources. If true, then storeconfigs performance will be higher and still allow exported/collected resources, but other usage external to Puppet might not work o Default: false trace Whether to print stack traces on some errors o Default: false use_cached_catalog Whether to only use the cached catalog rather than compiling a new catalog on every run. Puppet can be run with this enabled by default and then selectively disabled when a recompile is desired. o Default: false usecacheonfailure Whether to use the cached configuration when the remote configuration will not compile. This option is useful for testing new configurations, where you want to fix the broken configuration rather than reverting to a known-good one. o Default: true user The user puppet master should run as. o Default: puppet vardir Where Puppet stores dynamic and growing data. The default for this parameter is calculated specially, like confdir_. o Default: /var/lib/puppet yamldir The directory in which YAML data is stored, usually in a subdirectory. o Default: $vardir/yaml zlib Boolean; whether to use the zlib library o Default: true This page autogenerated on Wed Jun 08 17:09:41 -0700 2011
http://manpages.ubuntu.com/manpages/oneiric/en/man5/puppet.conf.5.html
CC-MAIN-2015-11
refinedweb
3,459
59.7
Presentation Writing a Simple Parser Now, let's try writing a very simple parser. We'll be using the Parsec library (Install parsec if you haven't already installed it, either by itself or through the Haskell Platform. Consult your distribution's repository to find the correct package or port or build). Start by adding this line to the import section: import Text.ParserCombinators.Parsec hiding (spaces) import System.Environment This makes the Parsec library functions available to us, except the spaces function, whose name conflicts with a function that we'll be defining later. Now, we'll define a parser that recognizes one of the symbols allowed in Scheme identifiers: symbol :: Parser Char symbol = oneOf "!#$%&|*+-/:<=>?@^_~". Let's define a function to call our parser and handle any possible errors: parser we defined above to the Parsec function parse. The second parameter to parse is a name for the input. It is used for error messages. later on. Finally, we need to change our main function to call readExpr and print out the result (need to add import System.Environment in the beginning of the file now): main :: IO () main = do args <- getArgs putStrLn (readExpr (args !! 0)) To compile and run this, you need to specify --make on the command line, or else there will be link errors. For example: debian:/home/jdtang/haskell_tutorial/code# ghc --make -o simple_parser :: Parser () no longer); also called a proper list - A DottedList, representing the Scheme form (a b . c); also called an improper list. cons operator : for this. Instead of :, we could have used the concatenation operator ++ like this [first]++rest; recall that first is just a single character, so we convert it into a singleton list by putting brackets around it. Then we use a case expression to determine which LispVal to create and return, matching against the literal strings for true and false. The underscore _ alternative is a readability trick: case blocks continue until a _ case (or fail any case which also causes the failure of the whole case expression), think of _ as a wildcard. So if the code falls through to the _ case, it always matches, and returns the value of atom. Finally, we create one more parser, for numbers. This shows one more way of dealing with monadic values: parseNumber :: Parser Control :: Parser Next, we add a few more parser actions to our interpreter. Start with the parenthesized lists that make Lisp famous: parseList :: Parser.
http://en.m.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_Hours/Parsing
CC-MAIN-2013-48
refinedweb
411
62.48
Back to article How do you find out what's going on in your running processes? All the techniques discussed here require a process ID. If you know the name of the process that's stuck or running wild, you can get its PID with ps aux | grep processname. Otherwise, you can usually find high-CPU processes with top: Tasks: 114 total, 1 running, 113 sleeping, 0 stopped, 0 zombie Cpu(s): 1.2%us, 0.6%sy, 0.6%ni, 96.0%id, 1.6%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4053756k total, 1059196k used, 2994560k free, 305236k buffers Swap: 2249060k total, 0k used, 2249060k free, 465112k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3055 akkana 20 0 160m 39m 18m S 39 1.0 0:02.83 plugin-containe 2223 akkana 20 0 330m 107m 26m S 16 2.7 0:51.33 firefox-bin 65 root 20 0 0 0 0 S 2 0.0 0:00.34 kondemand/0 1586 root 20 0 71712 22m 8244 S 2 0.6 0:24.87 Xorg 1 root 20 0 2748 1612 1216 S 0 0.0 0:00.37 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/0 ...and so on By default, top starts with the processes that are eating the most CPU. In this case, Firefox isn't stuck, but it is running flash, so the browser and its helper app are together taking up 45% of CPU. That's not a killer, but if the system is slow and you see a process using around 99% CPU, you've found the culprit. Once you've identified the process, how do you find out more about what it's doing? strace is a useful program that shows system calls as they happen. System calls include file operations like read, write and open, timeouts and signals, network operations, and assorted other ways to get or set system information. You can read a general overview with man 2 intro or a long list of all available calls with man 2 syscalls. This all may sound a bit arcane, but sometimes watching strace output will tell you why a program is failing -- maybe it's waiting for the network, or repeatedly trying to open a file that doesn't exist. You can run a program under strace, e.g. strace firefox. But more often, you'll want to attach to a process that's already running. Get the process ID from ps or top, then use strace -p. Suppose I have a process that seems to have hung: top says it's not using any CPU, but it's stuck and hasn't done anything in half an hour. $ strace -p 3672 Process 3672 attached - interrupt to quit recv(3, The program is waiting for the recv system call. Hit Ctrl-C to exit strace, then use apropos: $ apropos recv recv (2) - receive a message from a socket recvfrom (2) - receive a message from a socket recvmsg (2) - receive a message from a socket So the process is waiting to read something from a network socket. That's some progress, anyway. As you build up a library of diagnostic tools, you may sometimes wish you had an easier way to experiment with them. It's also handy if you're writing articles! Naturally, when I wanted a program to misbehave so I could show how to debug it, everything on my system worked perfectly. What's a poor girl to do? Write a misbehaving program! It's easy to simulate a network hang if you have a web server handy. On the server side, write a script like this one: #! /usr/bin/env python import time print """Content-Type: text/html Hello, world. Now we'll hang for a bit ... """ for i in range(50) : # Don't run forever and clog up the server time.sleep(300) # sleep for 5 minutes print "\nAnother line" \nAnother line" You can test it with wget or curl, or write a Python script: #!/usr/bin/env python import urllib2 response = urllib2.urlopen("") Of course, if you just want a program to take up all available CPU, just type something like this into your bash shell, or the equivalent in any other programming language: while /bin/true; do echo x done
http://www.linuxplanet.com/linuxplanet/print/7232
CC-MAIN-2014-52
refinedweb
752
79.3
Rohan, did you look at /Library/Python/2.5/site-packages/python-igraph ? What happens if you just type import igraph into a python shell? Gabor ---- On Tue, Sep 09, 2008 at 03:21:28AM -0700, Rohan Dixit wrote: > Hi Gabor, > > Thank you very much for helping me. Running the .mkpg installer on a Mac OSX > Leopard box did not deposit files in the /site-packages directly with all > other eggs, and I wasn't able to find the modules listed in the online > documentation anywhere. I'm a Python neophyte, though, so I'm probably > missing something here. > > Thanks again. > Rohan > > On Tue, Sep 9, 2008 at 2:39 AM, Csardi Gabor <address@hidden> wrote: > > > You need XCode to be installed if you want to use easy_install. > > I would say that for igraph easy_install is not easy, especially > > not on a Mac. Moreover, you would also need to compile and > > install the igraph C library before easy_installing python-igraph. > > > > I would suggest to use the installer, what was the problem with that? > > > > Best, > > Gabor > > > > On Mon, Sep 08, 2008 at 10:42:22AM -0700, Rohan Dixit wrote: > > > I should add the slightly different error message I get when using > > > "easy_install python-igraph install".. > > > > > > bash-3.2$ easy_install python-igraph install > > > Searching for python-igraph > > > Reading > > > Best match: python-igraph 0.5.1 > > > Downloading > > > > > > > > Processing python-igraph-0.5.1.tar.gz > > > Running python-igraph-0.5.1/setup.py -q bdist_egg --dist-dir > > > > > /var/folders/fK/fK0NqzhMELikM5PgTtfKhE+++TI/-Tmp-/easy_install-84BZbn/python-igraph-0.5.1/egg-dist-tmp-enNdcu > > > Include path: /usr/include/igraph /usr/local/include/igraph > > > Library path: > > > warning: no files found matching 'debian/changelog' > > > warning: no files found matching 'debian/control' > > > warning: no files found matching 'debian/prepare' > > > Compiling with an SDK that doesn't seem to exist: > > > /Developer/SDKs/MacOSX10.4u.sdk > > > Please check your Xcode installation > > > unable to execute gcc: No such file or directory > > > error: Setup script exited with error: command 'gcc' failed with exit > > status > > > 1 > > > bash-3.2$ > > > > > > > > > > > > > > > > > > On Mon, Sep 8, 2008 at 10:28 AM, Rohan Dixit <address@hidden> > > wrote: > > > > > > > Hello. > > > > > > > > First, thank you for taking time to read this. My problem is that I'm > > > > unable to properly unpackage the igraph egg on a Mac OSX platform > > (Leopard), > > > > although I downloaded and ran the python_igraph-0.5.1-py2.4....5.mpkg > > file. > > > > When I look in site-packages directory, I don't see any of the igraph > > > > modules. I tried using "easy_install igraph install" but got an error > > about > > > > no packages found at pypi.python.org/simple. I would really be > > grateful if > > > > you could help me to understand how I should install these modules! :) > > > > > > > > Thanks again, > > > > > > > > Rohan Dixit > > > > > > > > > > _______________________________________________ > > > igraph-help mailing list > > > address@hidden > > > > > > > > > -- > > Csardi Gabor <address@hidden> MTA RMKI, ELTE TTK > > > > > > -- > Rohan Dixit > Research Associate > Comprehensive Epilepsy Center > Department of Neurology and Neurological Sciences > Stanford University Csardi Gabor <address@hidden> MTA RMKI, ELTE TTK
http://lists.gnu.org/archive/html/igraph-help/2008-09/msg00066.html
CC-MAIN-2015-48
refinedweb
491
53.71
_SETEXTATTR — set named extended attribute for a vnode #include <sys/param.h> #include <sys/vnode.h> #include <sys/extattr.h> int VOP_SETEXTATTR(struct vnode *vp, int attrnamespace, const char *name, struct uio *uio, struct ucred *cred, struct thread *td);). The vnode will be locked on entry and should remain locked on return. If the extended attribute is successfully set, then zero is returned. Otherwise, an appropriate error code is returned. address. [EINVAL] The name, namespace, or uio argument is invalid. [EOPNOTSUPP] The file system does not support VOP_SETEXTATTR(). [ENOSPC] The file system is out of space. [EROFS] The file system is read-only. extattr(9), vnode(9), VOP_GETEXTATTR(9), VOP_LISTEXTATTR(9) This manual page was written by Robert Watson. All copyrights belong to their respective owners. Other content (c) 2014-2017, GNU.WIKI. Please report site errors to [email protected] load time: 0.200 seconds. Last modified: November 09 2017 18:38:06.
http://gnu.wiki/man9/VOP_SETEXTATTR.9freebsd.php
CC-MAIN-2018-22
refinedweb
155
55
Does it matter if I indent the print statement? It seems like I get the same thing whether I indent it or not. Does the print statement need to be indented? My solution was this one import json with open("message.json") as message_json: pass message=json.load(message_json) print(message['text']) looks like python needs the variable to be indented under the Pass block in order to save the contento of the json Print doesn’t need to be indented, as shown here. However, pass is just a null statement. You use it to test your code when you’re using with before writing anything after it so you don’t get an end of frame error. Once you have the message portion, you can delete pass.
https://discuss.codecademy.com/t/does-the-print-statement-need-to-be-indented/463776
CC-MAIN-2020-16
refinedweb
128
75.1
Starting with the Windows Vista PlatformSDK, defining the symbol STRICT_ before including shell header files changes declarations that previously had simply used ITEMIDLIST now use one of various types which are more clear about what type of ID list is being used. Think of it as the STRICT macro for the shell. The more precise names emphasize the form of the ID list: ITEMID_represents an item ID relative to some implied shell folder. The item ID is followed by a null terminator and is therefore exactly one CHILD SHITEMIDlong. In file-system speak, this is a "file name." IDLIST_represents an item ID list relative to some implied shell folder. It can consist of any number of RELATIVE SHITEMIDstructures concatenated together, followed by a null terminator. This item ID list must be used in conjunction with the IShellFolderit is associated with. In file-system speak, this is a "relative path." IDLIST_represents an item ID list relative to the desktop root folder. It too can be any length. This item ID list must be used in conjunction with the ABSOLUTE IShellFolderreturned by SHGetDesktopFolder. In file-system speak, this is a "fully-qualified absolute path." (An absolute ID list is the special case of a relative ID list where the implied shell folder is the desktop root folder.) ITEMID_represents an array of pointers to CHILD_ ARRAY ITEMID_objects, where all of the CHILD ITEMID_objects are children of the same shell folder parent. The array must be used in conjunction with that parent shell folder. CHILD These new types were introduced to help catch common programming errors when using the shell namespace. For example, if you try to pass an array of absolute pidls to IShellFolder::, you will get a type mismatch compiler error because that method takes an ITEMID_, and the thing you passed is not an array of ITEMID_ pointers. You are encouraged to turn on strict mode when compiling code that uses the shell namespace, but to preserve source code backward compatibility, the setting is off by default. I'm pretty sure some of those methods that only accepts child pidls today used to accept absolute pidls back in the Win9x days… Anyway, I'd love a followup to the IContextMenu series; how to display the context menu for two files when they don't share a common parent folder (Like Win+F/File search). Using CDefFolderMenu_Create2 is not easy either because MSDN does not tell you which HKEYs to pass nor their order. (HKCRFolder before HKCRDirectory before HKCRAllFilesystemObjects? And what about files? SystemFileAssociations? And that is just for XP, after that you also have to deal with Kind?) @WndSks: They accepted them because they are the same type – but that's like saying that CreateFileW() accepts a SQL statement as the filename. Just because they are the same type, doesn't mean it does what you expect it to. The whole point of the "strict" macro which is the whole point of this article is that it makes functions give you a compile time error rather than just break at runtime if you do something silly like send an absolute PIDL to a function that expects a child pidl. @Matt, when I said accept I meant that they did something and returned S_OK. Doing something and then returning S_OK does not mean that it did what you wanted it to do. Just that it didn't encounter an error while doing something that may or may not have been what you wanted. IShellFolder::GetDisplayNameOf: "At one time, pidl could be a multilevel PIDL, relative to the parent folder, and could contain multiple SHITEMID structures. However, this is no longer supported and pidl should now refer only to a single child item." OT, but I wonder why VBA was forgotten when mouse wheel support was added to Office 97. Yeah man; that's been eating away my soul for 16 years. And this forum is where the answer will lie. Yuhong Bao: "OT" Nominated for understatement of the year. I'll check back December, but I doubt someone will top that. Maybe he meant to comment on yesterday's post? Oh the infamous ShiteMid, reminds me of IShitTestVisible, useful when you need to find not so obvious bug. This seems pretty hard to use in practice. For example, ILClone takes only a relative IIDLIST. Why not an absolute IIDLIST? What could the difference between these possibly mean in terms of cloning them? Ah, well, there is an ILCloneFull(). But then MFC itself seems to use the non-strict model, which makes using it with this unpleasant (going to have to cast _AFX_SHELLITEMINFO members, e.g.)
https://blogs.msdn.microsoft.com/oldnewthing/20130124-00/?p=5453
CC-MAIN-2017-17
refinedweb
773
71.34
EDIT: After receiving a lot of comments on this post, I realized that not all of the information I presented is accurate. I just released an updated version of this article that you can read here. I will keep this article for historical reasons, but note that I don't hold all of the same views that I have presented here. I've been a long time VueJS fan and still think it's a great framework with a lot of potential. It was the first JS framework I learned and will always have a special place in my heart. In fact, when I first started learning React, I was convinced that I would never leave Vue. It's easy to learn and with the Vue CLI, you can create a functional site in minutes and easily deploy it with something like Netlify (which is what I use for my blog). I liked the organization of the .vue files with their separate HTML, JS, and CSS sections. When I got a job as a React developer, I found it easy to get confused by React files since the logic for rendering JSX could easily get out of hand. I missed Vue files where if I wanted to know what the DOM would look like, I just had to scroll to the top of the file and I would see everything HTML related. I've been working professionally with React for about 7 months now, and in that time I've gradually seen the beauty of React and have decided that it will from now on be my JS framework of choice (at least, until it's outdated and something even better comes along! Welcome to the front-end world...). I even decided to rewrite this blog with React, having originally created it with Vue. I'd like to explain a few reasons why React won me over. 1. There's no magic in React One of the things I love most about React is that it is literally just JavaScript. To create a React component, all I have to do is write a regular JavaScript function that happens to return JSX. That's it, it just works! The way I think of it, JSX is basically the one thing that sets a functional React component apart from a normal JS function. Even React hooks are just functions - yes, you would only use them for React, but at the end of the day they're just functions. There's really nothing magical about them. Since React is just JavaScript, I don't have to guess at all about where the code that's being used is coming from. Compare that to Vue where you have these "magic" functions and directives like $emit or v-for. In React, I don't have to "emit" an event. I just pass a callback function. That's pure JS, no magic there. In React, I don't need to remember some React specific directive to render a list of objects - I just use the JS map function and return JSX. Let's take the following as an example: a component that renders a list of users with a button allowing you to follow that user. Maybe we could use this in a social media app. Here's the Vue version: <!-- UserComponent.vue --> <template> <ul> <li v- {{ user.name }} <button @Follow</button> </li> </ul> </template> <script> export default { data: () => ({ users: [ { id: 1, name: 'Rick', }, { id: 2, name: 'Morty', }, { id: 3, name: 'Summer', }, ], }), }; </script> Pretty simple, right? We have a list of users that we render along with a button next to each of the user's names. When clicking the follow button, a followUser event is emitted along with the ID of the user we followed. Here's the same idea with React: // UserComponent.jsx import React from 'react'; const users = [ { id: 1, name: 'Rick', }, { id: 2, name: 'Morty', }, { id: 3, name: 'Summer', }, ]; export default function ({ onFollowUser }) { return ( <ul> {users.map(user => ( <li key={user.id}> {user.name} <button onClick={() => onFollowUser(user.id)}>Follow</button> </li> ))} </ul> ); } The beauty I find in the React implementation is what I was saying before - this is just JavaScript that happens to be able to return HTML. If I were a new developer, as long as I knew regular JS I would be able to look at the React version and basically know what was going on. If I were a new Vue developer looking at the Vue version, I'd have to know what v-for is and where on earth $emit is coming from. I would also probably want to know more about that data property in the default export of the Vue file. Those are all things I'd have to go learn from the Vue docs. Of course, there's nothing wrong with that - to master the tools you use as a developer, you must be familiar with the docs. But when I was a Vue developer, I had those docs open every single day. As a React developer, I occassionally look at the Hooks API reference in the React docs when I'm fuzzy on what one of the hooks does. Other than that, I don't need to look at the React docs because I'm just writing JavaScript. 2. React has better TypeScript support As I described in my last blog post, I've recently become quite fond of TypeScript. One of the things I love most about TypeScript is the Intellisense you get from your IDE when developing. When dealing with dynamic objects such as network or database responses, your editor can't give you any kinds of hints as to what kinds of properties exist on those objects when you're using regular old JavaScript. With TypeScript, however, all you have to do is define a type for such responses, and all of the sudden it's so much easier to manipulate those data since your editor knows what properties you're dealing with. No more accidentally spelling a property name wrong and then wondering why your code is crashing! The internet is already saturated with articles containing long praises for TypeScript, so I'll stop myself there. At the end of the day, TypeScript scales far better than regular JavaScript, and I've found that React plays a lot nicer with TypeScript than Vue does. A big part of the reason goes back to the fact that React is pretty much just JavaScript while Vue kind of lives in its own little world. Creating a TypeScript React app is as easy as running npx create-react-app my-app --template typescript, and everything just works. Now, the Vue CLI also lets you create a TypeScript project. Just run vue create my-project-name, and then you can choose to create a TypeScript project. There are a couple of problems with this, though. As explained in the Vue composition API RFC, the only way to really make Vue play nicely with TS is by using class component decorators, which I am not a fan of. I used TS with the Vue class component decorators for a class project, and I felt like it was hard to find good documentation and that there just wasn't a big enough community using Vue this way such that I could easily find online answers to what I thought would be common problems. For another project, I actually decided to use the experimental Vue composition API plugin, which meant I didn't have to use the class components I despised and could still enjoy pretty nice TS support. Technically it's not recommended to use this plugin in production code, but I did anyways because I really didn't want to use class components. Also, the project where I used it will only ever be heavily used by a handful of ancient Assyrian researchers, so I wasn't too concerned about massive scalability. The nice thing is, the composition API will be available by default in Vue 3, so I will give Vue credit for improving its TS support. For me, though, what makes React win the battle is the Intellisense available in the JSX. Vue still has its template section at the top, and even with TS, there isn't a great way for your editor to error check it. On the other hand, linters with React + TS will work just fine with JSX since you're just writing JavaScript inside. Let's create a simple counter app in Vue and React using TypeScript as an example. Both apps will contain a typo. Here's the Vue version (using the composition API plugin): <template> <div> <!-- Typo! But ESLint has no idea! --> <button @Click me</button> You've clicked the counter {{ counter }} times <div> </template> <script lang="ts"> import { defineComponent, ref, Ref } from "@vue/composition-api"; export default defineComponent({ name: "CounterApp", setup() { const counter: Ref<number> = ref(0); const increaseCounter = (): void => { counter.value += 1; } return { counter, increaseCounter }; } }); </script> Here's the same app in React: import React, { useState } from 'react'; const CounterApp = () => { const [counter, setCounter] = useState(0); const increaseCounter = (): void => { setCounter(prevCounter => prevCounter + 1); }; return ( <div> {/* Typo! But this time, ESLint spots it for us! */} <button onClick={increaseCouter}>Click me</button> You've clicked the counter {counter} times </div> ); }; export default CounterApp; In both of the apps, "increaseCounter" is misspelled "increaseCouter". You can set up ESLint in both projects no problem, but it's not going to catch the typo in the Vue project. You'll be just fine in the React project since React is just JavaScript and ESLint will immediately recognize that "increaseCouter" is not defined. Now, to Vue's credit, it does give pretty good error messages, so for this example when you run your app you will get an error about "increaseCouter" being undefined. However, you may not always get such instant feedback once you start dealing with more complicated code. Of course, just using TypeScript in React is not a guarantee that your code will be bug free. But you can automate error catching silly mistakes like the one above much easier than with Vue. With some configuration, there actually is a way to use JSX with Vue, so that could solve this problem. But at the moment, there doesn't seem to be a big community doing this, so you might have a hard time finding answers when you run into problems. At that point, you might as well just be using React which supports JSX out of the box. 3. React is easier to test Back when I worked as Vue developer, I began learning about the importance of test driven development. It took me quite a while to get used to the mindset of writing my tests at the same time that I wrote my application code, but now I'm at the point where I feel like I can't live without a decent test suite even for small side projects. I began developing this mindset around the same time that I began to embrace TypeScript. I found quite difficult to test my Vue components, even when I got them working with TypeScript. While using the Vue composition API plugin, I found that Vue Test Utils a lot of the times weren't able to correctly render the components I was creating. This probably shouldn't have come as a surprise to me. I doubt the team maintaining Vue Test Utils was too focused on getting tests to work with the composition API plugin when the composition API is going to ship natively with Vue 3 anyways. Vue Test Utils is actually pretty decent when you're using Vue 2's options API with regular JavaScript. Once I started using Vuetify though, which is a fantastic library, I immediately began running into problems. Getting the Vue test utils to recognize the Vuetify components was a bit of a pain and I don't think I ever really figured out how to get tests working properly with Vue + Vuetify or Vue + TypeScript. Maybe there's something I was missing. If so, I'd love to learn about it. With React, I've never really run into super weird errors when trying to set up unit testing, even when using TypeScript or a component library like Material UI. Once again, this all basically goes back to the fact that React is just JavaScript. There's no magic - all of its dependencies are imported in each file, which makes mocking them with Jest trivial. With something like Vuetify, all the components are kind of "magically" available, which is why I was running into so many problems trying to test them. Now, I know that shallow rendering the components would have solved those problems easily, but I agree with Kent C. Dodds that shallow rendering components doesn't really get them tested the way they should be. Conclusion The purpose of this post was not to say that Vue is bad - in fact, during the year and a half time span that I worked professionally with Vue, for the most part I was quite pleased with it and still believe it to be a fantastic frontend framework. I think that it is an easy framework to learn and a good starting place for new web developers. I developed the reasons I have for switching to React as a result of my own experiences. As I mentioned, TypeScript is almost a must have for me, and I find it a lot easier to use with React than Vue. But for someone who doesn't necessarily want to use TypeScript, React may not provide such a clear advantage over Vue. I'll also readily admit that some of the problems I mentioned with Vue almost certainly have solutions that I am not aware of, and I'm willing to learn about them! In my own experience, I just found it a lot easier to solve the problems I was facing with React than Vue. At the end of the day, this post really just represents my own opinion and is shaped by what I consider important as a web developer. Someone with a different set of experiences may prefer Vue, and that's totally fine. But for now, I'll be sticking with React. Top comments (31) Uff, a nice write-up, but I so disagree. Professionally I have to work with React way more frequently than Vue, but I highly prefer the latter. While ultimately it's a question of taste in most scenarios, here are a few rebuttals: There is no magic in React, you simply return JSX? Ignoring the fact that JSX itself is "magic", the fact that React's function to pass JSX on is called return doesn't mean it is return. You pass JSX into a function which I believe I don't have to argue is confusing as you seem to have confused it. Edit: as one if the commentators (Birtles) has made me aware of that paragraph being misleading, let's clarify that you DO use a simple return. The point should have been to point out where you return to. The same is true for your hooks explanation: saying that you declare callback functions is the same as hooks only being functions is like saying a vue-component is only an object: it's not wrong from a declarative standpoint but of course there is so much more to it. The complete section about your IDE not being able to be smart enough is not really an argument for the framework itself. I use jetbrain's PHPStorm and it's so helpful that the usefulness of typescript in general is limited to preventing other developers to break code. However, you explain it with a huge community for React which is a solid argument for choosing technologies. About vuetify It's indeed a bit tricky to test. What bothers me about this comparison is that you compare it directly with React. If you wanted to compare on that level, you might want to explore something like next.js so can compare apples with apples (or at least fruit to fruit) Hey neoan, I appreciate the feedback! I probably ought to be clearer about what I mean by "magic". I could see why someone may think of JSX as magic - what's all this HTML doing in my JS code? One of the most "magical" things I found about Vue but didn't touch on in this article was the properties that plugins attach to this. For example, Vuetify creates the $vuetifyproperty. There have been multiple times when I've been looking at my company's code and had to do a double take when I see global properties like that, unsure if it was native to Vue or coming from some third party or in-house plugin. I actually am a big fan of Vuetify and think it's a lot more intuitive to get started with than Material UI. But for React in general, I like its lack of "magic" variables. Everything used in a React file is imported. Going back to Vuetify, the $vuetifyobject has some pretty great methods on it, especially for breakpoints, but I will have to say I prefer Material UI's method of handling breakpoints. It just feels more declarative. That's only one example, of course, but overall I like how I almost never have to guess where any piece of code is coming from in React. This has been a huge plus for me when working with large code bases at the companies I've worked for. To be clear, I am not making the argument that there isn't a lot of magic in Vue. However, at the end of the day a vue-component is valid markup and JSX simply isn't. That is why you will never be able to run a React component using JSX without compiling. It's the opposite of pure JavaScript. As for vuetify: Again, I think you shouldn't compare a wrapper like that (shall we call it a framework-framework?) with pure React. React has similar wrappers to simplify (and therefore magicify) loading. And again, I am not claiming that Vue doesn't dip into the magic cookbook a lot. It is not intuitive to assume that something you declare in data() to be magically available directly in "this". It's something you need to learn/read. Hi Neoan, can you please explain why do you think JSX is magic? (comparing to Vue's magic, i think vue is more magical than jsx) I'm actually planning to write an article about JSX so i wanted to know your thoughts :D I will try: Vue's markup is valid HTML. In the most primitive form, what Vue does is to create observers for props and data and bind it to HTML. The result is relatively straight forward: This setup makes the use of .vue-files optional and doesn't even require a development server. One could import Vue via CDN and write components directly in "hard" declaration: So while there is a lot of magic in Vue, the markup and bindings are pretty straight forward. React uses JSX (you don't have to use it, BTW, but then React makes little sense). JSX is NOT valid HTML. It cannot render without being compiled. It isn't JavaScript either. The following code snippet is therefore neither valid JS nor HTML markup: Is that bad? No, but it's pure magic, of course. I mean, it has it's own name (JSX) because of that (otherwise it would just be a template)!? Now, as every React-dev knows, this means that some interpretational oddities arise. For example, we have to use "className" instead of "class" and "onClick" isn't "onclick". But all of that is relatively easy to get used to and not an issue compared to what is offered. What bothered me about React was how state was handled and bound (this got sooo much better with hooks) and that JSX has a very ugly solution to the two most common things in a dynamic template: conditionals and iteration. Given the following data: Let's look at Vue: And JSX: Looking at the above example: it there more magic in Vue? Yes. But you tell me which is more straight forward and approachable. Hmm i see, good points, thanks! one thing, why react makes lesser sense when not using JSX? It's just pretty straightforward createElement calls. "you don't have to use it, BTW, but then React makes little sense" Really? Please try writing the following JSX with createElement-calls and surprise me: And BTW, since we have a template here. Compare the following valid markups: VueJS Or even more native and including the complete logic in declarative form: AlpineJS here: (you said "React makes little sense", that's what i'm referring to, JSX is not binded to React elements you can build up any Tree like structures) So, how does it feel? Does it still make sense to use React? Do you think you could actually efficiently read a React app when built like that? What is the remaining advantage over vanilla JS? My point is: JavaScript is moving to a declarative style. We want a clear separation of view/template and controller/component-logic, even if there is a move towards having these concepts in one file (component based architecture). So my question was not if it is possible, but if it makes sense. You would certainly use a different solution than React if you couldn't use JSX. You seem to think that you are calling some magical return function. You're not. You are simply returning the JSX, the brackets there are not a function call, they're there for formatting the JSX over multiple lines as a return can't have a newline after it. Yes, good point. At first I didn't know what you meant, but reading my comment again, I see what you mean. I make it sound like there is a function named "return". While there is no function "return" in the React framework, you are effectively passing it on to the render function. I should have been more specific/less ambiguous. My point was that declaration shouldn't be confused with how things work. I felt like many of the arguments were comparable with saying "webpack is just a json". Edit: and reading my comment once more, you actually can't read it any other way than you have. I will have to change that. Surprisingly enough few weeks ago I wrote a similar post, which I think can explain the word "magic" more clearly. Using a tool vs. knowing how a tool works internally. Anurag Hazra ・ May 26 ・ 3 min read Your article is golden. I'm working on rewriting this article right now and found your points to be very helpful. Thank you. Having used both extensively as well, here are my thoughts: Vue requires a little bit of upfront learning (though most things like $emitcan be learned progressively very easily). But the more you learn inside the framework, the less you have to learn outside of it. React is a lot more low level in that regard. You even need to decide between several CSS solutions, forcing you to make impactful long-lasting decisions early on. From the rails doctrine (rubyonrails.org/doctrine/): This of course becomes less of a problem after a while... Another example: Sure, you can just pass in a callback for onFollowUserand don't get me wrong, I love the simplicity of this. But unless you wrap that callback in a useCallback, its child components will rerender unnecessarily leading to noticeably slow performance and lag in a fairly complex UI. Until you understand how React Hooks actually works, these are very surprising side effects, i.e. not "the pit of success". There are also a few quirks with useEffect: Reference: blog.logrocket.com/frustrations-wi... Vue takes care of these things for you. Not even once had I have to deal with rerendering/performance issues in Vue. The reactivity system just works and lets me focus on building the product. For me, it's okay to give up "JavaScript purity" for those reasons. I do enjoy the simplicity of writing React components nowadays. But I don't think the Hooks paradigm is "just JavaScript" that everyone just grasps immediately, it requires a lot of exposure to get to know its various edge cases. I completely understand why React renders the way it renders. But I hope they find some solution making the process more obvious. Some of it is already solved by linters, but definitely not everything gets caught and some things likely won't be possible to get caught. Like React Hooks simplified react components, Vue 3's <script setup>(github.com/vuejs/rfcs/blob/sfc-imp...) will also reduce the boilerplate to write components. Looking forward to that. These are all great points. I agree wholeheartedly that Vue is very easy to learn. It was the first FE framework I learned, and as I mentioned, it took me a while to finally decide to move to React because I do appreciate its simplicity. To someone just starting out with JS/web dev, I'd most likely recommend Vue because it fairly easy to learn. I probably ought to make it clearer in this post that I chose React not for ease of learning the framework. Rather, as I've come to understand JavaScript better over the years, I've actually come to really appreciate and even rely on the "lower level" features of React to write the kind of software I want. Plus, at this point I just feel like I can't live without TypeScript, and overall I've found it a lot easier to setup with React. Yes, hooks definitely take some getting used to! When I first learned about useEffect, I remember thinking "seriously, I have to declare its dependencies? Vue's computedand watchdo that for me!". However, I came to appreciate the simplicity that useEffectoffered - I didn't have to worry about the difference between things like mounted, created, beforeCreate, etc. Granted, Vue 3's composition API will remove a lot of these nuances. There's no perfect framework, and I'm not here to say that one is right and another is wrong. At the end of the day, I personally feel a lot safer making bigger applications with React. Hey guys, this post got a lot more attention than I was expecting and I've received a lot of great feedback and comments. I've read all the comments and am currently working on rewriting this article to clarify the points I made and removing parts that I now know to be false information. I'm pretty new to blogging / writing about techy things so I appreciate the patience! Your struggles with Typescript in Vue really surprise me. I have been using Vue with the class API and Typescript for over a year and I personally find them a joy to work with. I use Vuetify too. I also cannot complain about the lack of type check in the templates since Jetbrains IDEs offer the same intelisense in template as it does in script (the typo example you demonstrated would appear as an error in my IDE). Not only that, I get the full power of refactoring offered by Typescript - renaming, extracting, referencing, etc - and that includes Javascript code in template as well. Obviously, your experience is likely to be different if you use VS Code with Vetur which I personally believe to be a poor plugin (with all respect to the Vetur developers). I also dont see any black "magic" with the way Vue handles things. Using the class API, the 'this' basically gives you access to functions defined in the base class, which is a very common approach in OOP in general. Also, $emit() is just a function that happens to be prefixed with the $ symbol for identification purposes. The $ symbol itself doesnt do anything special. I also worked with the Vue Composition API plugin for Vue 2 and I had exactly the same pleasant experience using Jetbrains IDEs. I personally find JSX the only real black magic in the entire javascript ecosystem. I have come to believe that JSX is the best thing to use if you intend to scare away new developers. Hi Paulo, can you clarify why do you think JSX is the only read black magic in the entire js ecosystem? i'm planning to write a article about JSX so i wanted to know your opinions. why JSX is black magic to you? Black magic is not the right wording. What I mean is that JSX is not standard. If you look at Vue, it is essentially standard Javascript and HTML (with custom attributes) that can be understood by any web developer. JSX on the other hand is this weird and unfriendly mix of HTML and Javascript in the same code block. There is nothing wrong with having custom syntax that requires a compilation step. Typescript falls into the same category, and I love it. I just don't like JSX cause it looks terribly ugly. Okay let me go line by line and give my opinions :) "What I mean is that JSX is not standard" ?? facebook.github.io/jsx/ "If you look at Vue, it is essentially standard Javascript and HTML (with custom attributes)" - i don't think vue, sevlte, or angular's templates are in any ways valid html markup "that can be understood by any web developer" - JSX is way simpler to understand than vue's template, if i ask you "how does Vue's v-for directive work" can you answer it in simple words? but if i ask you "How does jsx loops works" i can tell you the answer "JSX is javascript so we can just use Array.map to map out the childrens directly in the template" "JSX on the other hand is this weird and unfriendly mix of HTML and Javascript in the same code block." - JSX isn't a mix of HTML or js its Just javascript. it represents Tree like structures and we use it to build up the virtual dom. "I just don't like JSX cause it looks terribly ugly." - well, personal preference :D That's all. :) Needless to say it all comes down to preference since you could use any of these frameworks to build most web apps. JSX is not just Javascript since it has markup in it and there is special syntax to separate markup from the actual js code. If it were just Javascript, it wouldn't need a special compiler in the first place. This is more than obvious. Anyway, tools are just tools. Languages are just languages. Pick whatever gets the job done. Javascript will die some die, just like Vue and React will die too, just like the almighty Jquery is dying just now. As a developer, what matters to me is to spend my time getting the job done rather than advocating in favor of a particular framework. Btw, I develop frontends using Angular, React, Vue, XAML, Blazor and vanilla javascript/html. As I said, they are nothing more than tools. First off, I believe anyone should be able to use any JS framework they want. You can get the job done with just about any of them, and their are successful big applications written in all of them. Though I will say I disagree with the point that Vue is doing magic, but React is just JavaScript. If we wanted to do that I'd say that Vue is just an object. I disagree because React isn't just JavaScript. It returns JSX. That is not valid JavaScript. It's an extension of it, but it has to be compiled to regular JavaScript still. There is still a lot of magic happening for your "regular JavaScript function" to actually be able to be ran in a browser. Again, you can definitely use which ever framework you prefer and don't even need to apologise for moving away from one framework. Just thought that with that point you ignored the "magic" that React is also doing. Thank you for this writeup. I definitely understand the drive for a DSL to provide defined structures, which driving with free structure of the underlying language may not provide enough guidance to doing it right. As you mentioned in both cases you have to learn how things fit together. With a lot of "magic" going under the hood is harder and usually necessary at some point. So personally I'd have the same leaning you do. Thinking about this more, I think that you need to learn from the Java developers and add a layer of abstraction. This way you can switch out the framework without changing your code. All would need to do was implement a new backend to hook up vue or react. This would mean you would be ready for the next new framework as well. Actually create your own language and compile it to JS so that when JS is replaced in browsers you don't need a complete rewrite of the code. Well I didn't say you wouldn't be tied down to a framework, it would just be a framework a level above the other frameworks. And we could create competing frameworks at this level so programmers can choose the best one for their needs. I dabbled with React for 1-2 months, then gave up due to complexities involving webpack and redux. Then I found VueJS, got hooked on it and never looked back. Fast forward 2 years, your post got me intrigued with React again. Your point about TypeScript and Vue is spot on... I think I might revisit React and see how it goes. Imo Mithirl.js it's closer native js than React. Good catch, thanks for pointing those out! I'll be sure to update those.
https://dev.to/brettfishy/why-i-converted-from-vue-to-react-1abn
CC-MAIN-2022-40
refinedweb
5,720
70.43
Tutorial How To Use Typescript with Create React App Introduction Create React App provides you with a set of essential packages and configurations to start building a React application. Version 2.0 introduced official TypeScript support. This allowed for JavaScript users to write with TypeScript conventions in the React frontend framework. TypeScript is a powerful tool that helps write safer, self-documenting code, allowing developers to catch bugs faster. In this article, you will set up a React app with TypeScript using Create React App. Prerequisites To follow along with this article, you will need: - Node.js installed locally, which you can do by following How to Install Node.js and Create a Local Development Environment. - Some familiarity with React. You can take a look at our How To Code in React.js series. - Some familiarity with TypeScript conventions. - A modern code editor that supports code hinting is recommended. Visual Studio Code provides this through IntelliSense. This tutorial was verified with Node v15.13.0, npm v7.8.0, react-scripts v4.0.3, react v17.0.2, and typescript v4.2.3. Starting a TypeScript Create React App First, open your terminal window and navigate to the directory you want to build your project in. Then, use create-react-app with the --template typescript flag: - npx create-react-app cra-typescript-example --template typescript Your terminal window will display an initial message: Creating a new React app in [..]/cra-typescript-example. Installing packages. This might take a couple of minutes. Installing react, react-dom, and react-scripts with cra-template-typescript... The --template typescript flag instructs the Create React App script to build using cra-template-typescript template. This will add the main TypeScript package. Note: In previous versions of Create React App, it was possible to use the --typescript flag, but this option has since been deprecated. Once installation is complete, you will have a new React application with TypeScript support. Navigate to your project directory and open it in your code editor. Examining the tsconfig.json File You may have noticed that your terminal window displayed the following message: We detected TypeScript in your project (src/App.test.tsx) and created a tsconfig.json file for you. Your tsconfig.json has been populated with default values. The tsconfig.json file is used to configure TypeScript projects, similar to how package.json is for JavaScript projects. The tsconfig.json generated by Create React App will resemble the following: { -jsx" }, "include": [ "src" ] } This configuration establishes several compilation rules and versions of ECMAScript to compile to. Examining the App.tsx File Now, let’s open the App.tsx file: import React from 'react'; import logo from './logo.svg'; import './App.css'; function App() {; If you have used Create React App before, you may have noticed that this is very similar to the App.js file that Create React App generates for non-TypeScript builds. You get the same base as the JavaScript projects, but TypeScript support has been built into the configuration. Next, let’s create a TypeScript component and explore the benefits it can provide. Creating a TypeScript Component Start by creating a functional component in the App.tsx file: function MyMessage({ message }) { return <div>My message is: {message}</div>; } This code will take a message value from the props. It will render a div with the text My message is: and the message value. Now let’s add some TypeScript to tell this function that its message parameter should be a string. If you’re familiar with TypeScript, you may think you should try to append message: string to message. However, what you have to do in this situation is define the types for all props as an object. There are a few ways you can accomplish this. Defining the types inline: function MyMessage({ message }: { message: string }) { return <div>My message is: {message}</div>; } Defining a props object: function MyMessage(props: { message: string }) { return <div>My message is: {props.message}</div>; } Using a separate interface: interface MyMessageProps { message: string; } function MyMessage(props: MyMessageProps) { return <div>My message is: {props.message}</div>; } You can create an interface and move that into a separate file so your types can live elsewhere. This may seem like a lot of writing, so let’s see what we gain from writing a bit more. We’ve told this component that it only accepts a string as the message parameter. Now let’s try using this inside our App component. Using TypeScript Components Let’s use this MyMessage component by adding it to the render logic. Start typing out the component: <MyMessage If your code editor supports code hinting, you will notice that the component’s signature will appear as you start to type out the component. This helpfully provides you with the expected values and types without having to navigate back to the component. This is especially useful when dealing with multiple components in separate files. Examining Prop Types Now, start typing out the props: <MyMessage messa As soon as you start typing message, you can see what that prop should be: This displays (JSX attribute) message: string. Examining Type Errors Try passing a numeric value for message instead of a string: <MyMessage message={10} /> If we add a number as a message, TypeScript will throw an error and help you to catch these typing bugs. React won’t even compile if there are type errors like this: This displays Type 'number' is not assignable to type 'string'. Conclusion In this tutorial, you set up a React app with TypeScript using Create React App. You can create types for all your components and props. You can benefit from code hinting with modern code editors. And you will be able to catch errors faster since TypeScript won’t even let the project compile with type errors. If you’d like to learn more about TypeScript, check out our TypeScript topic page for exercises and programming projects.
https://www.digitalocean.com/community/tutorials/using-create-react-app-v2-and-typescript
CC-MAIN-2021-21
refinedweb
992
57.57
It is eight and final part of Say hello to x86_64 Assembly and here we will take a look on how to work with non-integer numbers in assembler. There are a couple of ways how to work with floating point data: - fpu - sse First of all let’s look how floating point number stored in memory. There are three floating point data types: - single-precision - double-precision - double-extended precision As Intel’s 64-ia-32-architecture-software-developer-vol-1-manual described: The data formats for these data types correspond directly to formats specified in the IEEE Standard 754 for Binary Floating-Point Arithmetic. Single-precision floating-point float point data presented in memory: - sign - 1 bit - exponent - 8 bits - mantissa - 23 bits So for example if we have following number: | sign | exponent | mantissa |-------|----------|------------------------- | 0 | 00001111 | 110000000000000000000000 Double precision number is 64 bit of memory where: - sign - 1 bit - exponent - 11 bit - mantissa - 52 bit Result number we can get by: value = (-1)^sign * (1 + mantissa / 2 ^ 52) * 2 ^ exponent - 1023) Extended precision is 80 bit numbers where: - sign - 1 bit - exponent - 15 bit - mantissa - 112 bit Read more about it - here. Let’s look at simple Of course we will not see all instructions here provided by x87, for additional information see 64-ia-32-architecture-software-developer-vol-1-manual Chapter 8. There are a couple of data transfer instructions: FDL- load floating point FST- store floating point (in ST(0) register) FSTP- store floating point and pop (in ST(0) register) Arithmetic instructions: FADD- add floating point FIADD- add integer to floating point FSUB- subtract floating point FISUB- subtract integer from floating point FABS- get absolute value FIMUL- multiply integer and floating point FIDIV- device integer and floating point and etc… FPU has eight 10 byte registers organized in a ring stack. Top of the stack - register ST(0), other registers are ST(1), ST(2) … ST(7). We usually uses it when we are working with floating point data. For example: section .data x dw 1.0 fld dword [x]: ;; ;; adds st0 value to st3 and saves it in st0 ;; fadd st0, st3 ;; ;; adds x and y and saves it in st0 ;; fld dword [x] fld dword [y] fadd Let’s look on simple example. We will have circle radius and calculate circle square and print it: extern printResult section .data radius dq 1.7 result dq 0 SYS_EXIT equ 60 EXIT_CODE equ 0 global _start section .text _start: fld qword [radius] fld qword [radius] fmul fldpi fmul fstp qword [result] mov rax, 0 movq xmm0, [result] call printResult mov rax, SYS_EXIT mov rdi, EXIT_CODE sysc: #include <stdio.h> extern int printResult(double result); int printResult(double result) { printf("Circle radius is - %f\n", result); return 0; } We can build it with: build: gcc -g -c circle_fpu_87c.c -o c.o nasm -f elf64 circle_fpu_87.asm -o circle_fpu_87.o ld -dynamic-linker /lib64/ld-linux-x86-64.so.2 -lc circle_fpu_87.o c.o -o testFloat1 clean: rm -rf *.o rm -rf testFloat1 And run:
https://0xax.github.io/asm_8/
CC-MAIN-2017-47
refinedweb
511
51.28
Created on 2002-11-29 11:37 by pmoore, last changed 2002-12-30 21:56 by gvanrossum. This issue is now closed. This patch supersedes patch 492105. This is an updated version of the patch for imports from a zip file. This version is against current CVS, and includes documentation and tests. The patch builds on Windows, and runs OK. I haven't (yet) run any comprehensive tests, as this is the first time I've set up a build environment, and I'm not sure how to run the tests yet :-( Logged In: YES user_id=113328 Uploaded file foo.zip, needed (in lib\test directory) for the changes to test_import.py Logged In: YES user_id=113328 Bad news. I've just hit problems with the patched Python - when I start python.exe, it reports "cannot import site" and seems unable to locate the standard library. (This when I run python from the PCBuild directory). I built a clean CVS version, and don't see this problem. Can anyone else reproduce this, or help me diagnose it? I don't understand Jim's changes well enough to see what might be the problem (I updated the patch purely mechanically). Logged In: YES user_id=33168 Paul, thanks for taking this up. I made very little progress on this I have attached my version of the diffs. I had some problems with the original patch which I was never able to correct. For example, I think it was not possible to run python from a local build env't. It appeared that python had to be installed for everything to work. See the problem in bug 586680 (also in site.py). That should probably be addressed and was part of my problem. I tried to simplify the code some, so you should see some differences between the patch I attached and the original. These are not as important as making the patch work properly, both when installed and running from a build env't. I will try to take a look at your problem later, but it may take a while. Let me know if you make progress. Logged In: YES user_id=113328 My problem turns out to be an interaction with the new "universal newline support". In find_module, within the switch(path_type), case PY_IMP_LISTDIR, the section of code marked /* Search the list of suffixes: .pyc, .pyd, ... */ tries to do fopen(buf, fdp->mode). But .py files are now mode 'U' (universal newline) which is not a valid mode for fopen (). I've made a fix, but I'm not sure how reliable it is. On a related note, I'm not 100% sure how robust the code is in the presence of case-insensitive filesystems. I'm way out of my depth in this code, so I don't know if I'll be able to fix it, but I'll have a look (and I'll look at your version of the patch, as well). Logged In: YES user_id=113328 I've now taken Neal's patch and integrated my changes. I'll check it through again, and run a few more tests, and then upload the result (I've had problems with cvs update, so I'm downloading a new copy of CVS python first). Most things look OK. There are a couple of issues with case- insensitive filesystems. The first is that the cache of os.listdir results is (case- sensitively) keyed on directory name. I don't see this as an issue as (a) I'd expect the case used to remain constant (who's going to change a directory in sys.path at runtime to the same name with a different case???) and (b) it's only an efficiency hit, not an error, in any case. The second issue is more problematic. Zip files on sys.path are recognised (see add_zip_names()) by the extension ".zip" (case sensitively). It's arguable that on case-insensitive filesystems, this check should also be case insensitive. But finding that a file is on a case-insensitive filesystem is, I believe, not possible. And using a case-insensitive test all the time isn't right either. I'd say that we should just *define* the algorithm as only treating files ending in ".zip" (case sensitive) as zip files. It doesn't seem to be a problem in practice. I'll try to find some way to fit that into the documentation... Apart from this issue, I've seen no other problems with the patch. Logged In: YES user_id=21627 I certainly agree with only treating .zip as indication for zipfiles. On a case insensitive file system, you can put .zip into sys.path and it will still find .ZIP. Logged In: YES user_id=113328 Aargh. It looks like Neal's patch is wrong - it didn't include the changes to the files in the PC directory. Also it has a lot of strange changes, like replacing PyEval_CallObject with PyObject_Call - I don't see how this relates to zipfiles. I wasted most of this morning trying to juggle too many patches, to no avail. I think I'm going to have to revert to my patch, get it applied to the CVS version, and upload that. Neal, I agree with a lot of the changes you made, but they'll need redoing - can you apply my patch to a clean checkout and redo your tidying up? The patch I'm uploading with this comment builds cleanly for me on Windows 2000, and passes the supplied tests. You need to apply the patch, then put the 2 included zipfiles in the lib/test directory. I'd be very grateful if someone could test this on Unix and/or Mac, and update it where necessary. I've not tested these platforms, but I'm assuming that Jim's original code worked there. There are also other areas which seem strange (the big block of deleted code in sysmodule.c worries me - but I see no failures because of it, so I've left it unchanged) and anything better I'm sticking with "it worked when I tried it"... Logged In: YES user_id=33168 Hmmm, I thought I used to have changes to PC. Not sure, though. I'll try to take a look at the diffs and test on unix later this evening. I can also add back in any of my changes after you get everything working. Logged In: YES user_id=21627 Paul, there is sentiment on python-dev that this patch is unacceptable, because it is incomprehensible. I can sympathize with these feelings, so we need to take the possibility into account that this won't go into Python 2.3. Logged In: YES user_id=113328 Understood (I've been watching python-dev). I'm happy to keep working on this while a consensus is reached. Actually, I don't think there's much more I can do, I'm hoping that users of other OS's can test this now.... Logged In: YES user_id=33168 Paul, I didn't get a chance to review/test the patch. But as I just said in python-dev, I think it's worthwhile. Guido seems to agree. I'll try to test on Linux soon. Logged In: YES user_id=21627 I've now tested this patch on Unix. Support for compressed files didn't work from the build directory, since the zlib import happens before site.py is run. To work around this issue, I import zlib in site.py if we run from the build directory. I wonder whether the zlib import could be delayed until a compressed zipfile is encountered. Regenerated patch, attached as Newpatch-Dec06.zip (there, zipped-uncompressed.zip actually does contain uncompressed files). Logged In: YES user_id=64929 > Support for compressed > files didn't work from the build directory, since the zlib > import happens before site.py is run. I imported zlib before site.py is imported because site.py may itself be in a compressed zip archive. It looks like your trick of importing zlib in site.py works, and there is no need to limit it to running from the build directory since "import zlib" is cheap. An alternative is to try a zlib import after site.py is imported. As noted, this patch may not be accepted anyway, so I will stand by on this and any other problems. Logged In: YES user_id=6380 I'm going with Just's version. However, some of the code here may still make it into 2.3a2 (the bootstrap related patches).
http://bugs.python.org/issue645650
crawl-003
refinedweb
1,433
84.27
In the first part, you learned the core concepts of clean architecture as it pertains to Flutter. We also created a bunch of empty folders for the presentation, domain and data layers inside the Number Trivia App we're building. Now it's time to start filling those empty folders with code, using TDD, of course. Where to Start? Whenever you are building an app with a UI, you should design the UI and UX first. I've done this homework for you and the app was showcased in the previous part. The actual coding process will happen from the inner, most stable layers of the architecture outwards. This means we'll first implement the domain layer starting with the Entity. Before we do that though, we have to add certain package dependencies to pubspec.yaml. I don't want to bother with this file later, so let's fill in everything right now. pubspec.yaml dependencies: flutter: sdk: flutter # Service locator get_it: ^2.0.1 # Bloc for state management flutter_bloc: ^0.21.0 # Value equality equatable: ^0.4.0 # Functional programming thingies dartz: ^0.8.6 # Remote API connectivity: ^0.4.3+7 http: ^0.12.0+2 # Local cache shared_preferences: ^0.5.3+4 dev_dependencies: flutter_test: sdk: flutter mockito: ^4.1.0 Entity What kind of data will the Number Trivia App operate with? Well, NumberTrivia entities, of course. To find out which fields this class must have, we have to take a look at the response from the Numbers API. Our app will work with responses from concrete or random number URL, e.g. response.json { "text": "42 is the answer to the Ultimate Question of Life, the Universe, and Everything.", "number": 42, "found": true, "type": "trivia" } We're interested only in the text and number fields. After all, the type will always be "trivia" in our case and the value of found is irrelevant. If a number is not found, we'll get the following response. It's still perfectly fine to display it in the app. not_found.json { "text": "123456 is an unremarkable number.", "number": 123456, "found": false, "type": "trivia" } NumberTrivia is one of the few classes which we aren't going to write in a test-driven way and it's for one simple reason - there's nothing to test. It extends Equatable to allow for easy value comparisons without all the boilerplate (which we'd have to test), since Dart supports only referential equality by default. number_trivia.dart import 'package:equatable/equatable.dart'; import 'package:meta/meta.dart'; class NumberTrivia extends Equatable { final String text; final int number; NumberTrivia({ this.text, this.number, }) : super([text, number]); } Use Cases Use Cases are where the business logic gets executed. Sure, there won't be much logic in the Number Trivia App - all a UseCase will do is getting data from a Repository. We are going to have two of them - GetConcreteNumberTrivia and GetRandomNumberTrivia. Data Flow & Error Handling We know that Use Cases will obtain NumberTrivia entities from Repositories and they will pass these entities to the presentation layer. So, the type returned by a UseCase should be a Future<NumberTrivia> to allow for asynchrony, right? Not so fast! What about errors? Is it the best choice to let exceptions freely propagate, having to remember to catch them somewhere else in the code? I don't think so. Instead, we want to catch exceptions as early as possible (in the Repository) and then return Failure objects from the methods in question. Alright, let's recap. Repositories and Use Cases will return both NumberTrivia and Failure objects from their methods. How is something like this possible? Please, enter functional programming. The Either Type The dartz package, which we've added as a dependency, brings functional programming (FP) to Dart. I won't pretend that I'm some FP pro, at least not yet. You don't need to know a lot of things either. All we're interested in for the purposes of better error handling is the Either<L, R> type. This type can be used to represent any two types at the same time and it's just perfect for error handling, where L is the Failure and R is the NumberTrivia. This way, the Failures don't have their own special "error flow" like exceptions do. They will get handled as any other data without using try/catch. Let's leave the details of how to work with Either for when we need it in the next parts of this course. Defining Failures Before we can proceed with writing the Use Cases, we have to define the Failures first, since they will be one part of the Either return type. Failures will be used across multiple app features and layers, so let's create them in the core folder under a new error subfolder. Failures go into the error folder There will be one base abstract Failure class from which any concrete failure will be derived, much like it is with regular exceptions and the base Exception class. failures.dart import 'package:equatable/equatable.dart'; abstract class Failure extends Equatable { // If the subclasses have some properties, they'll get passed to this constructor // so that Equatable can perform value comparison. Failure([List properties = const <dynamic>[]]) : super(properties); } This is enough for now, we will define some concrete Failures, like ServerFailure, in the next parts of this course. Repository Contract As you hopefully remember from the last part, and as is siginified on the diagram above, a Repository, from which the UseCase gets its data, belongs both to the domain and data layer. To be more precise, its definition (a.k.a. contract) is in domain, while the implementation is in data. This allows for a total independence of the domain layer, but there is another benefit which we haven't talked about yet - testability. That's right! Testability and separation of concerns go together extremely well. Oh, the beauty of good architecture... Writing a contract of the Repository, which in the case of Dart is an abstract class, will allow us to write tests (TDD style) for the UseCases without having an actual Repository implementation. So, how will the contract look like? It will have two methods - one for getting concrete trivia, another for getting random trivia and the return type of these methods is Future<Either<Failure, NumberTrivia>>, ensuring that error handling will go like a breeze! number_trivia_repository.dart import 'package:dartz/dartz.dart'; import '../../../../core/error/failure.dart'; import '../entities/number_trivia.dart'; abstract class NumberTriviaRepository { Future<Either<Failure, NumberTrivia>> getConcreteNumberTrivia(int number); Future<Either<Failure, NumberTrivia>> getRandomNumberTrivia(); } GetConcreteNumberTrivia Although this part is getting quite long and information-packed already, I don't want to leave you hanging. We're finally going to write some tests while implementing the GetConcreteNumberTrivia use case. In the next part, we will add the GetRandomNumberTrivia use case, so definitely stay tuned for that! As is the case with TDD, we are going to write the test before writing the production code. This ensures that we won't add a bunch of things that we "ain't gonna need" and we'll also get the confidence that our code isn't going to fall apart like dominoes. Writing the Test In Dart apps, tests go into the test folder and it's a custom to make the test folders map the lib folders. Let's create all the root ones and also a folder called "usecases" under "domain". The test we're about to write goes into the usecases folder Create a new file under the "usecases" test folder called get_concrete_number_trivia_test.dart and while we're at it, also a get_concrete_number_trivia.dart under "usecases" lib folder. Let's set up the test first. We know that the Use Case should get its data from the NumberTriviaRepository. We'll mock it, since we only have an abstract class for it and also because mocking allows us to check, among other things, if a method has been called. {} To operate with this NumberTriviaRepository instance, the GetConcreteNumberTrivia use case will get it passed in through a constructor. Tests in Dart have a handy method called setUp which runs before every individual test. This is where we will instantiate the objects. NOTE that the code we're writing will be full of errors - we don't even have a GetConcreteNumberTrivia class yet.); }); } Although we haven't really written any tests yet, now it's a good time to start writing the production code. We want to make a skeleton for the GetConcreteNumberTrivia class, so that the setup code above will be error-free. get_concrete_number_trivia.dart import '../repositories/number_trivia_repository.dart'; class GetConcreteNumberTrivia { final NumberTriviaRepository repository; GetConcreteNumberTrivia(this.repository); } Now comes the time to write the actual test. Since the nature of our Number Trivia App is simple, there won't be much logic in the Use Case, actually, no real logic at all. It will just get data from the Repository. Therefore, the first and only test will ensure that the Repository is actually called and that the data simply passes unchanged throught the Use Case. get_concrete_number_trivia_test.dart import 'package:clean_architecture_tdd_prep/features/number_trivia/domain/entities/number_trivia.dart'; import 'package:clean_architecture_tdd_prep/features/number_trivia/domain/repositories/number_trivia_repository.dart'; import 'package:clean_architecture_tdd_prep/features/number_trivia/domain/usecases/get_concrete_number_trivia.dart'; import 'package:dartz/dartz); }); final tNumber = 1; final tNumberTrivia = NumberTrivia(number: 1, text: 'test'); test( 'should get trivia for the number from the repository', () async { // "On the fly" implementation of the Repository using the Mockito package. // When getConcreteNumberTrivia is called with any argument, always answer with // the Right "side" of Either containing a test NumberTrivia object. when(mockNumberTriviaRepository.getConcreteNumberTrivia(any)) .thenAnswer((_) async => Right(tNumberTrivia)); // The "act" phase of the test. Call the not-yet-existent method. final result = await usecase.execute(number: tNumber); // UseCase should simply return whatever was returned from the Repository expect(result, Right(tNumberTrivia)); // Verify that the method has been called on the Repository verify(mockNumberTriviaRepository.getConcreteNumberTrivia(tNumber)); // Only the above method should be called and nothing more. verifyNoMoreInteractions(mockNumberTriviaRepository); }, ); } When you think about it, the test above reads like documentation, even without all of my comments. Running the test now is pointless, since there are even compilation errors, so we can jump into implementation. All we need to add to the GetConcreteNumberTrivia use case is the following function which will do everything prescribed by the test. get_concrete_number_trivia.dart import 'package:dartz/dartz.dart'; import 'package:meta/meta.dart'; import '../../../../core/error/failure.dart'; import '../entities/number_trivia.dart'; import '../repositories/number_trivia_repository.dart'; class GetConcreteNumberTrivia { final NumberTriviaRepository repository; GetConcreteNumberTrivia(this.repository); Future<Either<Failure, NumberTrivia>> execute({ int number, }) async { return await repository.getConcreteNumberTrivia(number); } } When you now run the test (if you don't know how, refer to your IDE documentation), it's going to pass! And with that we've just written the first Use Case of the Number Trivia App using TDD. In the next part, we're going to refactor the above code, create a UseCase base class to make the app easily extendable and add a GetRandomNumberTrivia use case. Subscribe to the mailing list below to get notified about new tutorials and much more from the world of Flutter! Excellent understanding Matej.. Just one concern with regards to using dartz package vs RxDart. Can RxDart be used instead of Dartz or is there any particular reason we are going with Dartz. We are going to be using RxDart indirectly through the flutter_bloc package. Dartz is here only for the Either type so that we can have a clean “error flow” without catching exceptions everywhere. Hi. thanks for this very informative series and what you are doing for the community. I think the equatable package has changed and now the properties are taken via a getter instead of through the constructor. I’m using Equatable 0.6.1 Thank you and yes, that’s correct! the getter in the Failure-class is causing my test to fail… it says expected Right<dynamic, … but actual Right<Failure, … What can I do to fix this? changing Right(tNumberTrivia)); to Right(tNumberTrivia); fixed it for me! Hi @Matej, dart-import Extension showing this error… And googling can’t fix my problem. Current file is not on project root or not on lib folder? File must be on $root/lib. Your current file path is: ‘d:Flutternumber_exclibfeaturesnumber_triviadomainrepositoriesnumber_trivia_repository.dart’ and the lib folder according to the pubspec.yaml file is ‘d:Flutternumber_exc/lib’. Are you trying to format imports in a test file? That’s not going to work. If you try to fix the import in the test folder, it gives an error. So first go to the lib folder and then try to fix the import. Thanks a lot Matej for taking your time to teach others. This series are great stuff. Thanks for teaching us. If possible please provide buttons to move to the previous and next article at the end of the page. Why doesn’t the entity have the id? For example: Suppose the data source is a sqflite database, how could I get a user’s data by id if the entity doesn’t take the user id into the widget? Great work! I love your tutorials! Could you explain how you would implement additional features without code duplication and how you handle communication for shared data? Hi Matej! First of all, thank you so much for such a great job! The tutorial is amazing and project results in a really clean and organized architecture. I’m implementing this with a simple UseCase of a SignIn. In my case, I’ve got a UserRepository with the following method (User is the entity): Future<Either> signIn(String email, String password); In the other hand my UseCase has its method execute returning the same: Future<Either>. However, when running my test the very first time, I’m getting error: Expected: Right: Actual: <Instance of 'Future<Either>’> Why is that? My code on the test is exactly the same as yours, but instead of a tNumber, I have a tEmail and tPassword. Could you please shed some light on this? Thank you so much! Ok. For some reason the blog comments are getting parts of the comments as HTML tags I guess. The type of the Future is Either Failure or User. The error I’m getting is: Expected: Right[dynamic, User]:[Right(User)] Actual: [Instance of ‘Future[Either[Failure, User]]’] (I’ve replaced “lower than” and “greater than” with box brackets) Forget it… I’m an idiot.. I forgot await keyword before usecase.execute(), that’s why I was getting a Future instead of the actual Right(User) Thank you so much Matej for this serie I used to get in touch with Flutter! Now a few months after than you publication, Equatable is now at version 1.0.2, and it seems that the syntax used for the super() in the Failure abstract class is no longer supported. I’ve been struggling to fix that issue, but I’m not so experienced with Dart nor Equatable, and I couldn’t manage to adapt the code there… Any idea maybe ? you need to remove the super part as we don’t rely on the superclass constructor and you have to implement props so the new class will be something like this: ///////////////////////////////////////////////////////////////////////////////////////// import ‘package:equatable/equatable.dart’; import ‘package:meta/meta.dart’; class NumberTrivia extends Equatable { final String text; final int number; NumberTrivia({ @required this.text, @required this.number, }); @override List get props => [text, number]; } Really great stuff, thanks for your great work, extremely useful indeed. Hi, First of all, excellent series! Just loved it. Excited for your DDD series. My doubt is, that when I run these tests on Docker, the docker is unable to find the fixture files. Any idea why? (Although it runs the tests even though they are also right under the test directory.) Uh, no idea why that happens 🙁 I have the same issue using json-files with Travis Ci and Github Actions. So I had to do a new class that returns objects of it instead. Works good but would rather find a better solution. Sound like it might be the same issue, have you found any solution? Dhruvam please how to test on Docker, I need it now on my project so I have’nt make it. (I’m French so apologize for my english) Hi Matt, I’m new to TDD Why did you overide the getConcreteNumberTrivia functionality, using when(..).thenAnswer() ? this will return the same result even when the implementation of the mock is not complete or return unexpected results, this always gives green indicator. I know I’m wrong, but unable to see how? thank No worries, progressing in the series, I’m now more familiar with how mocking works ^_^
https://resocoder.com/2019/08/29/flutter-tdd-clean-architecture-course-2-entities-use-cases/
CC-MAIN-2020-24
refinedweb
2,807
57.06
class I2S – Inter-IC Sound bus protocol¶ I2S is a synchronous serial protocol used to connect digital audio devices. At the physical level, a bus consists of 3 lines: SCK, WS, SD. The I2S class supports controller operation. Peripheral operation is not supported. The I2S class is currently available as a Technical Preview. During the preview period, feedback from users is encouraged. Based on this feedback, the I2S class API and implementation may be changed. I2S objects can be created and initialized using: from machine import I2S from machine import Pin # ESP32 sck_pin = Pin(14) # Serial clock output ws_pin = Pin(13) # Word clock output sd_pin = Pin(12) # Serial data output or # PyBoards sck_pin = Pin("Y6") # Serial clock output ws_pin = Pin("Y5") # Word clock output sd_pin = Pin("Y8") # Serial data output audio_out = I2S(2, sck=sck_pin, ws=ws_pin, sd=sd_pin, mode=I2S.TX, bits=16, format=I2S.MONO, rate=44100, ibuf=20000) audio_in = I2S(2, sck=sck_pin, ws=ws_pin, sd=sd_pin, mode=I2S.RX, bits=32, format=I2S.STEREO, rate=22050, ibuf=20000) - 3 modes of operation are supported: blocking non-blocking uasyncio blocking: num_written = audio_out.write(buf) # blocks until buf emptied num_read = audio_in.readinto(buf) # blocks until buf filled non-blocking: audio_out.irq(i2s_callback) # i2s_callback is called when buf is emptied num_written = audio_out.write(buf) # returns immediately audio_in.irq(i2s_callback) # i2s_callback is called when buf is filled num_read = audio_in.readinto(buf) # returns immediately uasyncio: swriter = uasyncio.StreamWriter(audio_out) swriter.write(buf) await swriter.drain() sreader = uasyncio.StreamReader(audio_in) num_read = await sreader.readinto(buf) Some codec devices like the WM8960 or SGTL5000 require separate initialization before they can operate with the I2S class. For these, separate drivers are supplied, which also offer methods for controlling volume, audio processing and other things. For these drivers see: Constructor¶ - class machine.I2S(id, *, sck, ws, sd, mck=None, mode, bits, format, rate, ibuf)¶ Construct an I2S object of the given id: ididentifies a particular I2S bus; it is board and port specific Keyword-only parameters that are supported on all ports: sckis a pin object for the serial clock line wsis a pin object for the word select line sdis a pin object for the serial data line mckis a pin object for the master clock line; master clock frequency is sampling rate * 256 modespecifies receive or transmit bitsspecifies sample size (bits), 16 or 32 formatspecifies channel format, STEREO or MONO ratespecifies audio sampling rate (Hz); this is the frequency of the wssignal ibufspecifies internal buffer length (bytes) For all ports, DMA runs continuously in the background and allows user applications to perform other operations while sample data is transfered between the internal buffer and the I2S peripheral unit. Increasing the size of the internal buffer has the potential to increase the time that user applications can perform non-I2S operations before underflow (e.g. writemethod) or overflow (e.g. readintomethod). Methods¶ - I2S.readinto(buf)¶ Read audio samples into the buffer specified by buf. bufmust support the buffer protocol, such as bytearray or array. “buf” byte ordering is little-endian. For Stereo format, left channel sample precedes right channel sample. For Mono format, the left channel sample data is used. Returns number of bytes read - I2S.write(buf)¶ Write audio samples contained in buf. bufmust support the buffer protocol, such as bytearray or array. “buf” byte ordering is little-endian. For Stereo format, left channel sample precedes right channel sample. For Mono format, the sample data is written to both the right and left channels. Returns number of bytes written - I2S.irq(handler)¶ Set a callback. handleris called when bufis emptied ( writemethod) or becomes full ( readintomethod). Setting a callback changes the writeand readintomethods to non-blocking operation. handleris called in the context of the MicroPython scheduler. - static I2S.shift(*, buf, bits, shift)¶ bitwise shift of all samples contained in buf. bitsspecifies sample size in bits. shiftspecifies the number of bits to shift each sample. Positive for left shift, negative for right shift. Typically used for volume control. Each bit shift changes sample volume by 6dB.
https://docs.micropython.org/en/latest/library/machine.I2S.html
CC-MAIN-2022-21
refinedweb
671
50.53
22 March 2006 12:04 [Source: ICIS news] LONDON (ICIS news)--Industrial new orders rose 11.1% in the European Union (EU) in January compared with the same month last year, with increases in all sectors including chemicals, official statistics agency Eurostat said on Wednesday. ?xml:namespace> Orders for chemicals rose 7.0% in the EU compared with January 2005. In the euro zone, overall industrial new orders rose 9.7%, with chemicals orders up 7.1%, Eurostat reported. Compared with December, industrial new orders declined 3.8% in the EU and were 5.9% lower in the euro zone. Chemicals orders rose 0.2% in the EU in January compared with the previous month, although in the euro zone they fell 0.3%. Industrial new
http://www.icis.com/Articles/2006/03/22/1050871/eu-industrial-orders-rise-11-in-jan-chems-up-7.html
CC-MAIN-2013-48
refinedweb
126
61.73
Win a copy of Barcodes with iOS this week in the iOS forum or Core Java for the Impatient in the Java 8 forum! Paul Clapham wrote:I'm pretty sure you can't set environment variables inside a process in any language at all. But as far as I can see there isn't any point in setting the PATH environment variable in your Java code, since the process where your application is running is never going to use it anyway. Or is it? Paul Clapham wrote:Those installers will be setting the system-level environment variables, not the copies of the variables which are provided to the process. I assume that to change the former, you would have to change some registry entries or call a low-level API or something like that. The latter can't be changed by the process itself. And as I said, there's no need for your Java class to change its PATH environment variable because it can't possibly use it anyway. It might possibly create a new process which needs a modified version of the PATH variable, but the Process class allows you to do that in a straightforward way in any case. If your specific requirement was to change some other environment variable, then it wasn't very helpful to use PATH as your example, I don't think. So perhaps you could explain your actual requirement and why you think that changing an environment variable is the way to implement that requirement? Paul Clapham wrote:That depends on how you're going to be running ANT and ffmpeg. My suggestion would be to set the environment variables as the first steps in the batch script which does those things. Rob Spoor wrote:FYI, both ProcessBuilder and Runtime.exec allow you to pass additional environment variables. The problem with Runtime.exec is that you need to provide either null or all of the existing environment variables as well. If you only provide the new ones, most of the others will be dropped. ProcessBuilder is better for that. For example: import java.io.*; import java.util.*; public class Test { public static void main(String[] args) throws Exception { ProcessBuilder pb = new ProcessBuilder("CMD.exe", "/C", "SET"); // SET prints out the environment variables pb.redirectErrorStream(true); Map<String,String> env = pb.environment(); String path = env.get("Path") + ";C:\\naved\\bin"; env.put("Path", path); Process process = pb.start(); BufferedReader in = new BufferedReader(new InputStreamReader(process.getInputStream())); String line; while ((line = in.readLine()) != null) { System.out.println(line); } } } naved momin wrote:but if i do this then any time i invoke ant or ffmpeg commands i have to rewrite this steps or it will set the environment variable till my java process killed ?
http://www.coderanch.com/t/584969/java/java/set-environment-variable
CC-MAIN-2015-11
refinedweb
460
65.32
Guys, I have got a weird problem. Installed Kali Linux 2020.1 arm image on my Rasberry PI 4 and got my wifi working fine. Both the internal Wifi broadcom onboard driver and my Alfa AWUS036ACH Adapter (USB) with RTL8812AU driver worked flawless using the aircrack-ng tools. Now here comes the strange thing. After a week or so, I have the following error below "Ndiswrapper doesn't support monitor mode" when I put my wifi cards in monitor mode and start to use airdump-ng to capture my wifi traffic. I have searched the web for possible solutions because I know that aircrack-ng tools do not support Ndiswrapper and it will never be supported. But the strange thing is that i did not install any ndiswrapper drivers in the first place , so I am really puzzled why I have suddenly this error. And the other strange thing is that both my wifi adapters are having the same problem. Just wondering if somebody ran into a similar problem. My kali linux version is: All suggestions are welcome. 3 Responses Just an update on the problem... Performed an apt update && apt dist-upgrade and it started working again after launching kismet. Still a bit fishy why aircrack-ng started to complain about the ndiswrapper and it might come back again. Checked the code snippet below in aircrack-ng/lib/osdep/linux.c checking for ndiswrapper and I would like to understand what is actually tested to determine that the driver is using ndiswrapper. My suspicion is that this test is not full proof on the ARM distribution. // Check if the driver is ndiswrapper */ static int is_ndiswrapper(const char iface, const char path) { int n, pid; if (!path || !iface || strlen(iface) >= IFNAMSIZ) { return 0; } if ((pid = fork()) == 0) { close(0); close(1); close(2); IGNORE_NZ(chdir("/")); execl(path, "iwpriv", iface, "ndis_reset", NULL); exit(1); } waitpid(pid, &n, 0); return ((WIFEXITED(n) && WEXITSTATUS(n) == 0)); } That's pretty weird, I haven't ever had those problems before. Sorry I can't help solve them but you might want to google some of those errors someone on stack-overflow has probably had a similar problem before. Hi HOID, It is indeed a strange problem. I am getting it back to work if I first use kismet to monitor my wifi interface. Quitting kismet leaves the wifi interface in monitor mode and then the aircrack-ng tools like airodump-ng are working fine for me. But if I start fresh after reboot with the traditional commands # airmon-ng check kill # airmon-ng start wlan1 # airodump-ng --band abg wlan1 then airodump-ng throws the warning "Ndiswrapper doesn't support monitor mode" and exits. Search the Internet, but I did not found the golden bullet yet. Anyhow, I will figure it out but for now I got a workaround working with kismet... Will keep you all posted when I found the real solution to the problem... Share Your Thoughts
https://null-byte.wonderhowto.com/forum/ndiswrapper-not-supported-with-aircrack-ng-tools-0250103/
CC-MAIN-2020-45
refinedweb
494
63.29
I have a JSON in the following format (from couchDB view) {"rows":[ {"key":["2015-04-01","524",""],"value":1}, {"key":["2015-04-01","524",""],"value":2}, {"key":["2015-04-01","524",""],"value":1} ]} I need to create a "service" to get this data from couchDB and insert it on SQL Server (for generating reports..) in a efficient way. My first bet was to bulk insert this json into SQL Server, like this: Bulk Insert from Generic List into SQL Server with minimum lines of code The problem is, how can I map this JSON into a c# class? Ultil now this is what I did: public class Row { public List<string> key { get; set; } public int value { get; set; } } public class RootObject { public List<Row> rows { get; set; } } var example = Newtonsoft.Json.JsonConvert.DeserializeObject<RootObject>(jsontext); This gives me a list of "Rows". Each row has a Key and each key is an array containing a Date, Url and a number. I could loop through the "rows" and create the objects on my own but this doens't sound very performant to me. Also, the JSON will be big, something like 5MB more or less. The structure that I want is something like this: public class Click { public DateTime Date { get; set; } public string Code { get; set; } public string Url { get; set; } public int Count { get; set; } } How can I extract the "key" array and map it into separated properties. This way, I wouldn't need a for loop. Any ideas?
http://www.howtobuildsoftware.com/index.php/how-do/bsX/c-json-couchdb-deserializing-json-into-c-class
CC-MAIN-2017-43
refinedweb
251
65.66
Revision history for the Perl binding of libcurl, WWW::Curl. 4.17 Fri Feb 21 2014: - Balint Szilakszi <szbalint at cpan.org> - Fixing build process for old libcurl versions without CURLOPT_RESOLVE. - License is now MIT only. 4.16 Thu Feb 20 2014: - Balint Szilakszi <szbalint at cpan.org> - Support for CURLOPT_RESOLVE (an slist option) [Theo Schlossnagle] - Fixing t/19multi.t test failures when using a threaded resolver for libcurl. - Improved constant parsing when using ISO-compliant CPP. [tsibley] 4.15 Sun Nov 28 2010: - Balint Szilakszi <szbalint at cpan.org> - Refactored constant handling and added thorough testing for it. - Fixed CURLOPT_PRIVATE, it is now a string and can be set/get accordingly. 4.14 Sun Oct 24 2010: - Balint Szilakszi <szbalint at cpan.org> - Scalar references can now be used to receive body/header data [gfx]. - Speed optimizations for threaded perl. [gfx, szbalint]. - Added a more generic libcurl constant detection. - Added the pushopt method for appending strings to array options. - Documentation improvements. 4.13 Wed Sep 01 2010: - Balint Szilakszi <szbalint at cpan.org> - Fixed WWW::Curl::Form (again, formadd and formaddfile working now). - Made constant constant handling more robust and added tests [Fuji, Goro]. - Modernized *.pm and AUTOLOAD now throws an error on unknown method calls [Fuji, Goro]. - Fixed code depending on CURLINFO_SLIST to be optional [Fuji, Goro].. 4.11 Fri Dec 18 2009: - Balint Szilakszi <szbalint at cpan.org> - Fixed t/19multi.t for libcurl versions compiled with asyncronous dns resolution. 4.10 Fri Dec 18 2009: - Balint Szilakszi <szbalint at cpan.org> - Added support for CURLINFO_SLIST in getinfo (patch by claes). - Merging documentation fixes upstream from the FreeBSD port (thanks Peter). - Added support for curl_multi_fdset. 4.09 Thu Jul 09 2009: - Balint Szilakszi <szbalint at cpan.org> - Fixing broken version check. 4.08 Tue Jul 07 2009: - Balint Szilakszi <szbalint at cpan.org> - Fixed a memory leak in setopt. - Added a check to Makefile.PL for the minimum libcurl version. - Mentioned WWW::Curl hosting on github. - Upgraded bundled Module::Install to 0.91. 4.07 Sun May 31 2009: - Balint Szilakszi <szbalint at cpan.org> - Fixed >32bit integer option passing to libcurl on 32bit systems. (Thanks to Peter Heuchert for the report and fix suggestion!) - The CURL_CONFIG environment variable can now be used to specify which curl-config to use (contributed by claes). - Fixed segfault when a string option with setopt was set to undef (contributed by claes). - Fixed incomplete cleanup routine at destruction time (contributed by claes). - Readded Easy.pm and Share.pm stubs so that they are indexed by CPAN, thus avoiding complications with outdated versions appearing. 4.06 Sun Apr 05 2009: - Balint Szilakszi <szbalint at cpan.org> - 2.00 Tue Apr 22 2003: - Cris Bailiff <c.bailiff+curl at devsecure.com> - New top level package name of WWW::Curl in preparation for entry to CPAN - Rename "Curl::easy" to "WWW::Curl::easy" - Add backwards compatability namespace module for existing scripts - Implement initial curl_easy_duphandle support - Started on curl_easy_form support (WWW:Curl::form) - NOT FUNCTIONAL YET - Fixup use of env vars in t/07ftp-upload.t (jellyfish at pisem.net) - Adjust IP addresses for t/08ssl.t tests due to moved https servers 1.35 Sun Sep 22 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Fixed progress function callback prototype [ curl-Bugs-612432 ], reflecting the fix made in curl-7.9.5. Tested in t/05progress.t to now return sensible values! 1.34 Wed Aug 7 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Fix off-by-one error in setting up of curl_slists from perl arrays, which was causing the last item of slists to be dropped. Added regression test case. 1.33 Mon Aug 5 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Fix serious bug in read callback support (used for POST and upload requests), introduced in 1.30, which uploaded random data (due to a reversed src/dest in a memory copy). 1.32 Thu Aug 1 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Minor Makefile.PL fixes to build cleanly with curl 7.8 as found on redhat 7.2. 1.31 Tue Jul 16 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Generate better switch() statement syntax in C code, to fix build issues on some systems with strict compilers. Reported by Ignasi Roca. 1.30 Mon Jul 15 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Testing release after complete code overhaul. Now supports cleaner object interface, supports multiple handles per process, uses PerlIO for portable I/O (which should be perl 5.8 ready) and maybe even supports ithreads. Should be fully backwards compatible, but please read the man page for change details and report any issues. - Fixed warning caused by slist functions accessing past the end of the perl array. - Fixed leak caused by consuming slist arguments without freeing. - Updates test scripts to OO style, cleaned up output. - Deprecated USE_INTERNAL_VARS. 1.21 Thu Jul 11 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Minor fixes to assist windows builds from Shawn Poulson - Allow passing curl include location on the command line when running perl Makefile.PL 1.20 Sat Feb 16 2002: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Use standard perl module numbering syntax (valid decimal) - Skipped 1.10 in case anyone confuses it with 1.1.0 - Made every build a rebuild and removed 'pre-built' files - no point worrying about not finding curl.h - if we can't find it, we can't compile anyway. Obviates bug in 1.1.9 preventing rebuilds. - Add support for redefining CURLOPT_STDERR (file handle globs only!) 1.1.9 Sat Dec 8 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Enhance Makefile.PL to re-build easy.pm and 'constants' xs function from local installed curl.h. CURLOPT_ and CURLINFO_ Constants up-to-date for libcurl-7.9.2, but can be re-built for almost any libcurl version by removing easy.pm and curlopt-constants.c and re-running 'perl Makefile.PL' - Use curl-config to find include and library compile options - Updated test scripts to work better under 'make test' (You need to set the environment variable 'CURL_TEST_URL' though!) - Added test script to display transfer times using new time options - Merge changes in Curl_easy 1.1.2.1 by Georg Horn 1.1.8 Thu Sep 20 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Re-generate CURLOPT_ constants from curl.h and enhance makefile to allow this to be repeated in future or for older versions of libcurl. Constants up-to-date for libcurl-7.9(pre) - Split tests into t/*.t to simplify each case - Add test cases for new SSL switches. This needs ca-bundle.crt (from mod_ssl) for verifying test cases. 1.1.7 Thu Sep 13 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Documentation Update only - Explicitly state that Curl_easy is released under the MIT-X/MPL dual licence. No code changes. 1.1.6 Mon Sep 10 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Fix segfault due to changes in header callback behaviour since curl-7.8.1-pre3 1.1.5 Fri Apr 20 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Add latest CURLOPT_ and CURLINFO_ constants to the constants list 1.1.4 Fri Apr 20 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Fix case where curl_slists such as 'HTTPHEADERS' need to be re-set over persistant requests 1.1.3 Wed Apr 18 2001: - Cris Bailiff <c.bailiff+curl at devsecure.com> - Change/shorten module function names: Curl::easy::curl_easy_setopt becomes Curl::easy::setopt etc. This requires minor changes to existing scripts.... - Added callback function support to pass arbitrary SV * (including FILE globs) from perl through libcurl to the perl callback. - Make callbacks still work with existing scripts which use STDIO - Initial support for libcurl 7.7.2 HEADERFUNCTION callback feature - Minor API cleanups/changes in the callback function signatures - Added Curl::easy::version function to return curl version string - Callback documentation added in easy.pm - More tests in test.pl 1.1.2 Mon Apr 16 2001: - Georg Horn <horn at koblenz-net.de> - Added support for callback functions. This is for the curl_easy_setopt() options WRITEFUNCTION, READFUNCTION, PROGRESSFUNCTION and PASSWDFUNCTION. Still missing, but not really neccessary: Passing a FILE * pointer, that is passed in from libcurl, on to the perl callback function. - Various cleanups, fixes and enhancements to easy.xs and test.pl. 1.1.1 Thu Apr 12 2001: - Made more options of curl_easy_setopt() work: Options that require a list of curl_slist structs to be passed in, like CURLOPT_HTTPHEADER, are now working by passing a perl array containing the list elements. As always, look at the test script test.pl for an example. 1.1.0 Wed Apr 11 2001: - tested against libcurl 7.7 - Added new function Curl::easy::internal_setopt(). By calling Curl::easy::internal_setopt(Curl::easy::USE_INTERNAL_VARS, 1); the headers and content of the fetched page are no longer stored into files (or written to stdout) but are stored into internal Variables $Curl::easy::headers and $Curl::easy::content. 1.0.2 Tue Oct 10 2000: - runs with libcurl 7.4 - modified curl_easy_getinfo(). It now calls curl_getinfo() that has been added to libcurl in version 7.4. 1.0.1 Tue Oct 10 2000: - Added some missing features of curl_easy_setopt(): - CURLOPT_ERRORBUFFER now works by passing the name of a perl variable that shall be crated and the errormessage (if any) be stored to. - Passing filehandles (Options FILE, INFILE and WRITEHEADER) now works. Have a look at test.pl to see how it works... - Added a new function, curl_easy_getinfo(), that for now always returns the number of bytes that where written to disk during the last download. If the curl_easy_getinfo() function is included in libcurl, (as promised by Daniel ;-)) i will turn this into just a call to this function. 1.0 Thu Oct 5 2000: - first released version - runs with libcurl 7.3 - some features of curl_easy_setopt() are still missing: - passing function pointers doesn't work (options WRITEFUNCTION, READFUNCTION and PROGRESSFUNCTION). - passing FILE * pointers doesn't work (options FILE, INFILE and WRITEHEADER). - passing linked lists doesn't work (options HTTPHEADER and HTTPPOST). - setting the buffer where to store error messages in doesn't work (option ERRORBUFFER).
https://metacpan.org/changes/distribution/WWW-Curl
CC-MAIN-2017-26
refinedweb
1,725
70.6
Consider the following code. I am starting with a task that does nothing, and then using ContinueWith() to start 10 calls to a method that increments a counter. When I run this program, it prints "0", indicating that the increment() method hasn't been called at all. I was expecting it to be called 10 times, since that's how many times I called ContinueWith(). If I uncomment the "Thread.Sleep(20)" line, then it prints "10" as expected. This happens in either release or debug mode. My system is a core 2 quad with hyperthreading (8 logical cores) running Windows 7 x64. I assume I have some kind of fundamental misunderstanding about how Task.ContinueWith() works.... using System; using System.Threading; using System.Threading.Tasks; namespace ConsoleApplication4 { class Program { static void Main() { using (var task = Task.Factory.StartNew(()=>{})) { for (int i = 0; i < 10; ++i) { task.ContinueWith(_=> increment()); // Thread.Sleep(20); // Uncomment to print 10 instead of 0. } task.Wait(); } // This prints 0 UNLESS you uncomment the sleep above. Console.WriteLine(counter); } static void increment() { Interlocked.Increment(ref counter); } private static int counter; } } Can anyone shed any light on what's going on here? -------------Problems Reply------------ The reason is simple: You wait on the task that is already finished. What you really want is to wait for the ten tasks you created in the loop: var tasks = new List<Task>(); for (int i = 0; i < 10; ++i) { tasks.Add(task.ContinueWith(_=> increment())); } Task.WaitAll(tasks.ToArray()); Sure. You're printing out the value when the initial task has completed - you're not waiting for the continuation to occur. In other words, when the initial task completes, 11 things happen: - 10 new tasks start executing, each incrementing the counter - You print out the counter You could effectively chain all these together if you want: task = task.ContinueWith(_=> increment()); Then when the original task finishes, one incrementer will fire. When that finishes, the next incrementer will fire. When that finishes [...]. And when the final incrementer has finished, your Wait call will return. If you change the main body as follows: using (var task = Task.Factory.StartNew(() => { })) { var t = task; for (int i = 0; i < 10; ++i) { t = t.ContinueWith(_ => increment()); // Thread.Sleep(20); // Uncomment to print 10 instead of 0. } t.Wait(); } You will get 10 without sleeping. The main change is t = t.ContinueWith(...) and t is necessary because you're not allowed to change the using() variable.
http://www.dskims.com/task-continuewith-not-working-how-i-expected/
CC-MAIN-2018-30
refinedweb
412
68.87
21 December 2011 11:22 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The average operating rate has been 40–45% of full capacity since the beginning of December, while it was above 50% in November, according to data from ICIS China. Demand from the construction sector in northern As a consequence, EPS sellers have seen reduced sales, said one producer in “We are suffering losses these days because of high costs,” one major domestic EPS producer added. He said the styrene monomer (SM) spot price is about yuan (CNY) 9,900/tonne, and the price of general grade EPS is CNY10,500–10,600/tonne, a difference of about CNY600–700/tonne, but the normal production cost of EPS is at least CNY800/tonne. Both difficult sales and high costs have put Chinese EPS producers in a dilemma, industry sources said. They can’t reduce prices further because of high costs, and also face difficulty in raising prices to cover losses because of extremely poor demand. And demand will be weaker in the near term because of the upcoming Lunar New Year Holiday, market players said.
http://www.icis.com/Articles/2011/12/21/9518339/chinas-eps-plants-keep-operating-rates-low-in-off-season.html
CC-MAIN-2015-14
refinedweb
186
54.97
The objective of this post is to explain how to install and run a Flask server on the LinkIt Smart 7688 Duo. Introduction Flask is a web micro framework for Python which allows us to create and deploy simple web applications very easily. In this tutorial, we will install Flask and create a very simple hello world application, as we did in a previous post, which introduces Flask. For this tutorial, we assume that the LinkIt Smart is already configured, connected to a WiFi network and is accessible using WinSCP and Putty. You can check all the tutorials needed to configure these tools for the LinkIt Smart in the related posts section. So, to install Flask on the LinkIt Smart 7688 Duo, just open Putty, connect to the LinkIt Smart and type the following command: pip install flask The installation may take a while since it will install all the required dependencies. The code The code needed to run the server is very simple and is similar to the one specified in this previous post. To make the editing of the code easier, we can use WinSCP to interact with the LinkIt Smart. So, I’m assuming the use of this tool to upload the Python file with the code for the Flask server. You can check a detailed guide here on how to use WinSCP with the LinkIt Smart. Open WinSCP and create a file called testFlask.py. You can create a custom folder to develop your applications or use an existing one. In my case, as seen in figure 1, I created the file on the /IoT/examples folder, which is a default folder that comes with the Operating System installation. Figure 1 – Creating the Flask Python script with WinSCP. Now, double click the created file to write the code for the server.__) Now, we will define a route. A route is basically a decorator that allows to specify an URL associated with a Python function. When a request is done to that URL, the corresponding function is executed. For our example, we will specify an URL called “/hello” that will trigger a function that returns a greeting message, as shown bellow. @app.route('/hello') def helloWorldHandler(): return 'Hello World from Flask on LinkIt Smart!' Finally, we run our application with the run method. app.run(host='0.0.0.0', port= 8090) from flask import Flask app = Flask(__name__) @app.route('/hello') def helloWorldHandler(): return 'Hello World from Flask on LinkIt Smart!' app.run(host='0.0.0.0', port= 8090) Testing the code To test the code, we need to go back to Putty and navigate to the folder where the Python file was created. First of all, we need to know the IP address of the LinkIt Smart 7688 Duo on the WiFi network. To find it, just type the command bellow. You can read more about the ifconfig command here. ifconfig You should get something similar to figure 3. The local IP of the LinkIt Smart in the WiFi network should be the one in the apcli0 interface, as highlighted in the figure. Figure 3 – Using ifconfig command on LinkIt Smart. After finding the IP of the device, from the same folder, just type the following command, to run the Flask server application: python testFlask.py You should get something similar to figure 4. Please note that the command may take a while to run. Figure 4 – Running the Flask server on the LinkIt Smart. Now, open a web browser and type, where IPaddress is the IP address of the LinkIt Smart found previously with the ifconfig command. You should get the greeting message defined, as shown in figure 5. Figure 5 – Testing Flask server on the LinkIt Smart 7688 Duo. In alternative, if your computer supports mDNS resolution, you don’t need to use the IP address of the LinkIt Smart. Instead, as shown in figure 6, you can just type the following URL on a web browser: mylinkit.local:8090/hello. The name will be automatically resolved to a local IP using mDNS. Figure 6 – Contacting the Flask server using mDNS. We can check on Putty that the HTTP requests are being received and the corresponding information is being printed to the command line, as shown in figure 7. Figure 7 – Received HTTP requests. To quit the server, just press Ctrl+C on Putty. Related Posts - Flask: Hello World - LinkIt Smart Duo: Connection to WiFi Network - LinkIt Smart Duo: Transferring files with SCP - Linkit Smart Duo: Getting started - Linkit Smart Duo: Python support Pingback: Flask: Parsing JSON data | techtutorialsx
https://techtutorialsx.com/2016/12/27/linkit-smart-duo-running-a-flask-server/
CC-MAIN-2017-26
refinedweb
768
71.65
A/B testing using Django Whenever we roll out an improvement on our platform, we at HackerEarth love to conduct A/B tests on the improvement to understand which iteration helps our users more in using the platform in a better way. Since the available third party libraries did not quite meet our needs, we wrote our own A/B testing framework in Django. In this post we will share a few insights as to how we accomplished this. The basics A lot of products, especially on web, use a method called A/B testing or split testing to quantify how well a new page or layout performs as compared to the old one. The crux of the method is to show layout ‘A’ to a certain set or bucket of users and layout ‘B’ to another set of users. The next step is to track user actions leading to certain milestones, which would provide critical data regarding the ‘effectiveness’ of both the pages or layouts. Before we began writing code for the framework, we made a list of all the things that we wanted the framework to do - - Route users to multiple views (with different templates) - Route users to a single view with different templates - Make the views/templates stick for users - A/B test visitors who do not have an account on HackerEarth (anonymous users) - Sticky views/templates for anonymous users as well - Support for A/A/B or A/B/C/D…./n/ testing (just for the heck of it!) - Analytics We went out to grab some pizza and beer, and when we got back we came up with this wire-frame - A/B for Views A/B for Templates Getting the logic right To begin with, we had to categorize our users into buckets. So all our users were assigned a bucket number ranging from 1 to 120. This numbering is not strict and the range can be arbitrary or as per your needs. Next, we defined two constants - the first one specifies which view a user is routed to, and the second one specifies the fallback or primary view. The tuples in the first constant are the bucket numbers assigned to users. The primary view in the second constant will be used when we do not want to A/B test on anonymous users. AB_TEST = { tuple(xrange(1,61)): 'example_app.views.view_a', tuple(xrange(61,121)): 'example_app.views.view_b', } AB_TEST_PRIMARY = 'example_app.views.view_a' Next we wrote two decorators which we could wrap around views - one for handling views and the other for handling templates. In the first scenario, the decorator would take a dictionary of views i.e. the first constant that we defined, a primary view i.e. the second constant, and a boolean value which specifies if anonymous users should be A/B tested as well. Here’s what the decorator essentially does for logged in users - - Get the user’s bucket number - Check which view is assigned to that bucket number - Return the corresponding view The flow is a bit different in case of anonymous users. If we do not want to perform A/B testing on anonymous users, then we just return the primary or fallback view that we had defined earlier. However, if we want to include anonymous users in the A/B tests, we need a couple of extra things to begin with - - Set a unique cookie for the user which is independent of the session - A simple and fast key-value pair storage e.g. Redis Once we have these things in place, here’s what we need to do - - Get the user’s unique cookie - Check if a key exists in redis for that cookie value - If a key is found, get the value of the key and return it - If no key is found, choose a view randomly from the view dictionary - Set a key in redis corresponding to the user with the chosen view as value - Return the chosen view Now, the A/B will work perfectly for anonymous users as well. Once an anonymous user gets routed to one of the views, that view will stick for him or her. Let’s dive into code An example for the view decorator is given below - """ Decorator to A/B test different views. Args: primary_view: Fallback view. anon_sticky: Determines whether A/B testing should be performed on anonymous users as well. view_dict: A dictionary of views(as string) with buckets as keys. """ def ab_views( primary_view=None, anon_sticky=False, view_dict={}): def decorator(f): @wraps(f) def _ab_views(request, *args, **kwargs): # if you want to do something with the dict returned # by the view, you can do it here. # ctx = f(request, *args, **kwargs) view = None try: if user_is_logged_in(): view = _get_view(request, f, view_dict, primary_view) else: redis = initialize_redis_obj() view = _get_view_anonymous(request, redis, f, view_dict, primary_view, anon_sticky) except: view = primary_view view = str_to_func(view) return view(request, *args, **kwargs) def _get_view(request, f, view_dict, primary_view): bucket = get_user_bucket(request) view = get_view_for_bucket(bucket) return view def _get_view_anonymous(request, redis, f, view_dict, primary_view, anon_sticky): view = None if anon_sticky: cookie = get_cookie_from_request(request) if cookie: view = get_value_from_redis(cookie) else: view = random.choice(view_dict.values()) set_cookie_value_in_redis(cookie) else: view = primary_view return view return _ab_views return decorator The noteworthy piece of code here is the function str_to_func(). This returns a view object from a view path (string). def str_to_func (func_string): func = None func_string_splitted = func_string.split('.') module_name = '.'.join(func_string_splitted[:-1]) function_name = func_string_splitted[-1] module = import_module(module_name) if module and function_name: func = getattr(module, function_name) return func We can write another decorator for A/B testing multiple templates using the same view in a similar way. Instead of passing a view dictionary, pass a template dictionary and return a template. Putting things together Now, let’s assume that we have already written the ‘A’ and ‘B’ views which are to be A/B tested. Let’s call them ‘view_a’ and ‘view_b’. To get the entire thing working, we will write a new view. Let’s call this view as ‘view_ab’. We will wrap this view with one of the decorators we wrote above and create a new url to point to this new view. You may refer to the code snippet below - @ab_views( primary_view=AB_TEST_PRIMARY, anon_sticky=True, view_dict=AB_TEST, ) def view_ab(request): ctx = {} return ctx Just for the sake of convenience we require that this new view returns a dictionary. Finally, we need to integrate analytics into this framework so that we have quantifiable data regarding the performance or effectiveness of both the views or layouts. We decided to use mixpanel at the JavaScript end to track user behaviour on these pages. You can also use any analytics or event tracking tool out there for this purpose. This is just one of the ways you can do A/B testing using Django. You can always take this basic framework and improve it or add new features. P.S. : If you want to experiment with an A/A/B or A/B/C testing, all you need to do is change the first constant that we defined i.e. AB_TEST Feel free to comment below or ping us at [email protected] if you have any suggestions! Posted by Arindam Mani Das. Follow me @arindammanidas
http://engineering.hackerearth.com/2016/01/29/ab-testing-using-django/
CC-MAIN-2018-09
refinedweb
1,205
58.52
Can not connect to MongoLab... I am trying to use this module and I am unable to connect. Also, I am not sure how to view the DEBUG messages in cmonary.c. I have tried with the 'with' statement as well as through the interpreter. import monary m = monary.Monary("ds######.mongolab.com", 33828) print m._connection None I also want to authenticate, so this is what I was hoping to use (or something like it). with monary.Monary(db_host, db_port) as m: success = m.authenticate(db_name, db_user, db_pass) if success: arrays = m.query( db_name, "my_collection", spec, fields, types ) I have no problem connecting with pymongo btw. This works: db = pymongo.Connection(db_host, db_port)[db_name] db.authenticate(db_user, db_pass) Thanks, Will I forgot to sign in before creating the issue. Sorry... The formatting of the python code is ugly in the above. Here it is better... This works with pymongo... The C driver upgrade from pull request #7 should allow authenticating with a MongoDB URI. The latest Monary code has no more "authenticate()" method, we now include auth credentials in the URI, like: Monary("mongodb://user:password@host/database_name/") I'm going to close this issue, feel free to re-open if you still experience an issue.
https://bitbucket.org/djcbeach/monary/issues/2/can-not-connect-to-mongolab
CC-MAIN-2019-18
refinedweb
207
61.12
Bug #6935 TestEtc#test_getgrgid on OSX Description =begin I saw similar issue as #6831. Is GID also not unique on OSX? [ 69/110] TestEtc#test_getgrgid = 0.02 s 1) Failure: test_getgrgid(TestEtc) [/Users/hiroshi/src/ruby/test/etc/test_etc.rb:84]: <#> expected but was <#>. I confirmed this patch fixes the failure. diff --git a/test/etc/test_etc.rb b/test/etc/test_etc.rb index 5bc8db4..c105122 100644 --- a/test/etc/test_etc.rb +++ b/test/etc/test_etc.rb @@ -76,13 +76,18 @@ class TestEtc < Test::Unit::TestCase end def test_getgrgid - groups = {} - Etc.group do |s| - groups[s.gid] ||= s - # group database is not unique on GID, and which entry will be - # returned by getgrgid() is not specified. - groups = Hash.new {[]} - # on MacOSX, same entries are returned from /etc/group and Open - # Directory. - Etc.group {|s| groups[s.gid] |= [s]} - groups.each_pair do |gid, s| - assert_include(s, Etc.getgrgid(gid)) end - groups.each_value do |s| - assert_equal(s, Etc.getgrgid(s.gid)) - assert_equal(s, Etc.getgrgid) if Process.egid == s.gid - s = groups[Process.egid] - unless s.empty? - assert_include(s, Etc.getgrgid) end end =end Associated revisions History #1 [ruby-core:47314] Updated by Nobuyoshi Nakada over 3 years ago No unixen systems guarantees that GID is unique. Please commit it. #2 Updated by Anonymous over 3 years ago - Status changed from Open to Closed - % Done changed from 0 to 100 This issue was solved with changeset r36833. Hiroshi, thank you for reporting this issue. Your contribution to Ruby is greatly appreciated. May Ruby be with you. test_etc.rb: fix for non unique GID Also available in: Atom PDF
https://bugs.ruby-lang.org/issues/6935
CC-MAIN-2016-18
refinedweb
266
72.83
The Find Results window is used to display results of searching for usages . It displays results of the latest search as well as results of previous searches in different tabs. Refresh Helps rerun usage search, which is useful after modifying your code. Expand All/Collapse All Expands/collapses all nodes in the current tab. Previous/Next Navigate to the previous/next result in the result panel and scrolls through the source code accordingly. Show Preview Hides or shows the preview pane in the position specified using the drop-down list (in the bottom of the window or to the right of the usages list). Export Opens the Export Data dialog box where you can: Merge Occurrences on the Same Line If this button is selected, occurrences that belong to the same line are merged. Show All Usages When this button is selected, all usages are displayed. Show Only Read Usages When this button is selected, only references to read usages are displayed. Available for the usages of fields or local variables. Show Only Write Usages When this button is selected, only references to write usages are displayed. Available for usages of fields or local variables. Filter Usages With this drop-down list, you can select a specific type of usages to display. Several options are provided: Group by Select a mode of grouping usages from the drop-down list. Possible options help display results as a tree grouped by type, member, namespace, folder etc., or as a flat list (option None). Closes the current tab. Procedures
http://www.jetbrains.com/resharper/webhelp60/Reference__Windows__Find_Results_Window.html
CC-MAIN-2015-32
refinedweb
256
65.12
@IvanGV GH = Git Hub Best posts made by turigeza - - RE: [solve] Open/close children's dialog from parent => Avoid mutating a prop directly since the value will be... @Shoooryuken This is a common Vue error. You should never modify the props. Instead create a computed property which returns options and on update emit an event to parent. Something on the line of this: <template> <q-card> <q-dialog square full-width <q-card-section <div class="text-h6">{{ title }}</div> <q-space /> <q-btn </q-card-section> <q-card-section....</q-card-section> </q-card> </template> <script> export default { props: { options: { type: Boolean, default: false } }, computed: { $_options: { get: function () { return this.options; }, set: function (val) { this.$emit('update-options', val); } } }, methods: { activate: function (el) { this.$emit('update-options', false) this.filter = el }, close () { this.$emit('update-options', false) } } }... - RE: How can I insert my brand logo with a spinner in Loading plugin ? @IvanGV btw there is a way of doing it if you are very keen : ) You have to create your own custom spinner component. There is a vague example here: - RE: Search missing in docs of v1.0?. - RE: YARN or NPM? Choose yarn. Right now! I was using yarn both globally and locally. Then two days ago I was following the upgrade guide to quasar 1. And I could not get passed the line module.exports = { presets: [ '@quasar/babel-preset-app' ] } that is until switched to npm for global use. Why that is I have no idea : ) but now I have npm for global use and yarn for local / per project use. Worth noting their suggestion: 6. We recommend yarn whenever possible because of its speed and efficient use. However, when using globals, we still recommend using npm, especially if you use nvm (Node Version Manager). I would love not to have to deal with both. - RE: Custom Component & Mixins & avoid repeating props> ``` - RE: Drag And Drop. - RE: [v1] .q-field--with-bottom being added to field @ssuess I think the rules :rules="['date']"will cause this to be added. This is where the error messages go. Which are position: fixed because of the animation I think. Might be interesting to read See also - RE: Quasar DistDir do not be deleted in build. @leon I don’t know but if you need a file or two in the dist directory you can configure webpack to copy it there. In my case I needed the file .realsync. extendWebpack (cfg) { cfg.module.rules.push({ enforce: 'pre', test: /\.(js|vue)$/, loader: 'eslint-loader', exclude: /node_modules/, options: { fix: true } }); if (cfg.output) { const CopyWebpackPlugin = require('copy-webpack-plugin'); cfg.plugins.push( new CopyWebpackPlugin([{ from: '.realsync', to: cfg.output.path } ]) ); } } - RE: [v1] Quasar v1.0.0-beta.15 released! . - RE: QTable hide pagination controls . - RE: [How To] Building components with Quasar @a47ae said in [How To] Building components with Quasar: Most of Quasars components are also distributed as single file components, you can check out their source here. Link is broken … no major just thought I mention. - RE: Axios Headers @nqaba In my case this file is under /src/plugins/axios.js import { LocalStorage } from 'quasar'; // make sure you edit quasar.conf.js import axios from 'axios'; export default ({ app, router, Vue }) => { axios.defaults.baseURL = ''; // Add a response interceptor axios.interceptors.response.use(function (response) { // Do something with response data // Loading.hide(); return response; }, function (error) { // Do something with response error return Promise.reject(error); }); axios.interceptors.request.use(function (config) { // default options // Loading.show(); return config; }); axios.defaults.method = 'POST'; axios.defaults.transformRequest = [function (data, headers) { // Do whatever you want to transform the data return data; }]; // axios.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded'; axios.defaults.headers.post['Content-Type'] = 'application/json'; // axios.defaults.headers.common['Authorization'] = AUTH_TOKEN; Vue.prototype.$axios = axios; // window.$axios = axios; super super super global : ) }; in quasar.conf.js // Quasar plugins plugins: [ 'Notify', 'SessionStorage', 'LocalStorage', /** this line **/ 'Dialog', - RE: Global Event Bus vs Vuex The chances are you will need Vuex to share application data and state and at the same time it is far less likely you need a Global Event Bus. The data in vuex is reactive. Most of the time you have to act on data changes and not events. From the docs: If you are writing components, and subcomponents like a table, table row, table column, table header … you should use $emit and listen to the events in the parent components. If you look at the quasar ui source code you will find lots of examples for this. And I don’t think there is a Global Event Bus in Quasar source code but I could be wrong. However where I find it’s a bit tedious is when you have Parent>Child>SubChild>SubSubChild components events do not bubble up like native events. So if you $emit(‘someevent’) on SubSubChild and you want to listen to it in Parent then you have to pass it up in SubChild and Child. I ended up using this gist here for this: - RE: Are there some awesome websites for beginners to learn? The best resource IMO is still the docs here. This is how I learnt it. There are tons of examples with source code. Go straight to v1 beta no point learning the old stuff the difference is big between the two. Good luck. - RE: computed refresh too often @amoss Watch as many properties as you need to. Calculate your display number which isn’t watched or computed with a function. - - RE: q-drawer hide/show event object is undefined @kasparesky I think that those events only get passed to the listeners if you pass them in on the first place when you use the functions on the drawer. I might be wrong though … just a guess. It kind of make sense. I mean if you set the model of the drawer what sort of event are you expecting to receive ? I don’t think there is way to know what event caused the model change. So it seems - Reacting to tab changes within tab panel. I have some components inside q-tab-panel and I would like those components to react to tab changes (load their data). I thought I would pass down the tab model as a prop and watch that inside the component but for some reason this isn’t working. I can not get my head around it why. It may just be I fundamentally misunderstood something in Vue. Do you know why ? Would you be so kind to explain or just point me in the right direction. How would you do this if this is not the way? Basically any suggestion would be welcome. : ) Here is the pen: Thank you all, - RE: [Solved] How to set focus on input element dynamically when leaving from the value help dialog ).
https://forum.quasar-framework.org/user/turigeza/best
CC-MAIN-2020-34
refinedweb
1,141
67.76
fputws - put a wide-character string on a stream #include <stdio.h> #include <wchar.h> int fputws(const wchar_t *ws, FILE *stream); The fputws() function writes a character string corresponding to the (null-terminated) wide-character string pointed to by ws to the stream pointed to by stream. No character corresponding to the terminating null wide-character code is written. The st_ctime and st_mtime fields of the file will be marked for update between the successful execution of fputws() and the next successful completion of a call to fflush() or fclose() on the same stream or a call to exit() or abort(). Upon successful completion, fputws() returns a non-negative number. Otherwise it returns -1, sets an error indicator for the stream and errno is set to indicate the error. Refer to fputwc(). None. The fputws() function does not append a newline character. None. fopen(), <stdio.h>, <wchar.h>. Derived from the MSE working draft.
http://pubs.opengroup.org/onlinepubs/7990989775/xsh/fputws.html
CC-MAIN-2014-42
refinedweb
155
65.22
Jonathan Dell'OvaPro Student 6,283 Points Don't understand why there I get "TypeError: 'str' object cannot be interpreted as an integer" in my code from Yatzy import dice class Hand(list): def __init__(self, size=0, die_class=None, *args, **kwargs): if not die_class: raise ValueError("You must provide a die class!") super().__init__() for _ in range(size): self.append(die_class()) def __len__(self): str_num = str(super().__len__()) return "There are {} die in this hand".format(str_num) hand = Hand(size=5, die_class=dice.D6) print(hand) print(len(hand)) When I try to run this code it gives me this error: TypeError: 'str' object cannot be interpreted as an integer I've tried analyzing the problem with is instance() for example but I've not come to any conclusions where the problem lies. I know that print() can take a string as an argument. I also know that if I check the instance of super()__len__() I get back True when I check for an int. If I then do str(super()__len__()) and check what type of instance it is, I get back True for a string and False for an int. When I then return it to be used in print() I get this weird error. What is happening here? This is just something extra I wanted to add to keep practicing with overwriting inbuilt methods but I would really like to learn from this. 1 Answer Steven Parker179,610 Points The override of " __len__" is returning a string message, but the system expects it to return a numeric value. Did you perhaps intend to override " __str__" instead using this code? Steven Parker179,610 Points That's right, because "print" implicitly performs string conversion so you get the same output either way. Jonathan Dell'OvaPro Student 6,283 Points Jonathan Dell'OvaPro Student 6,283 Points Yes, this was the issue. I didn't realize that the system was expecting a numeric value that had to be filled. Overriding __str__with the exact same syntax for the body made it work when using print(hand)and also print(str(hand))which obviously give the same output. Thanks!
https://teamtreehouse.com/community/dont-understand-why-there-i-get-typeerror-str-object-cannot-be-interpreted-as-an-integer-in-my-code
CC-MAIN-2020-05
refinedweb
360
70.43
Pass parameters to React event handler without wrapping the function rating } return (<button onClick={() => rate(1)}>Rate 1</button>); Or maybe bind the required value to create a new function that has the argument(s) bound. const rate = (val) => { // Send rating }return (<button onClick={rate.bind(null, 1)}>Rate 1</button>); We could also create a factory function that will return a new function that calls rate with the correct value. const rate = (val) => { // Send rating }const rateFactory = (val) => () => rate(val);return (<button onClick={rateFactory(1)}>Rate 1</button>); The problem with the above options is that we are creating new functions each time. This is probably fine with one or two elements, but if we are attaching click events to a large number of elements we need a better solution. We can take advantage of data-* attributes as medium for passing the data to the handler function. They can be accessed via the React event argument passed to the function. const rate = () => { const { value, category } = event.target.dataset; // Send rating }return (<button data-value={1} data-category={'sandwich'} onClick={rate}>Rate 1</button>); Caveat Values assign to dataset are always serialized to strings, so to pass a JSON blob you would have to `JSON.stringify()` the value. Likewise with numbers, in the handler function you will need to convert the string to a number. Bonus data-* attributes can be accessed by your css
https://milkjar.medium.com/pass-parameters-to-react-event-handler-without-wrapping-the-function-ee76499271b6?source=post_internal_links---------0----------------------------
CC-MAIN-2021-21
refinedweb
234
53.81
when you show, how you did the one, maybe somebody can help you how to adapt the 2. ... This is the code of the single. Yes I do nextion. But I was weak on the arduino side. This guy uses two ds18b20s. Lets look at the Nextion side once you have both sensors reading. This is what I get when I do it on the sheet // // Start oneWireSearch.ino // uint8_t pin7[][8] = { { 0x28, 0xEE, 0xE8, 0x76, 0x1F, 0x16, 0x02, 0x8E }, { 0x28, 0xEE, 0xAB, 0x82, 0x1F, 0x16, 0x02, 0x56 }, }; // nr devices found: 2 // // End oneWireSearch.ino // On the other hand, I get this output. I arranged my nextion file this way. 2.4 inches Try embedding these into your ino. When #include "Nextion.h" is used, it includes Nex*****.h #include "Nextion.h" // #include "NexText.h" // #include "NexGauge.h" NexText temp8E = NexText(0,1,"tem"); NexText temp56 = NexText(0,1,"tem1"); NexGauge ptr8E = NexGauge(0,5,"z0"); NexGauge ptr56 = NexGauge(0,6,"z1"); if(addr[7]==0x8E) { celsius = (float)raw / 16.0; number = ((float)celsius/60)*180+30; ptr8E.setValue(number); memset(buffer, 0, sizeof(buffer)); itoa(celsius, buffer, 10); temp8E.setText(buffer); } if(addr[7]==0x56) { celsius = (float)raw / 16.0; number = ((float)celsius/60)*180+30; ptr56.setValue(number); memset(buffer, 0, sizeof(buffer)); itoa(celsius, buffer, 10); temp56.setText(buffer); } Insert the if statements just after your temperature reading where your data line is printed out. This will use the last byte of the ROM identifier to know which text/gauge pair to use in the Nextion HMI I placed it but I get an error. Add a closing brace to the function around line 105. Easy mistake, I used to make it often ;) memset(buffer, 0, sizeof(buffer)); itoa(celsius, buffer, 10); temp56.setText(buffer); } }<---------This one is missing. void setup(void) Thank you. :) fixed. I could send 2 data to the screen. But one shows 87 degrees and the other shows 88 degrees. There is a mistake in the measurement, but where. I did this as a link.; else if (cfg == 0x20) raw = raw << 2; else if (cfg == 0x40) raw = raw << 1; /* default is 12 bit resolution, 750 ms conversion time */ } The measurement error has improved when you remove these codes. Thank you very much for your help :) Erhan Silkin my English is bad google translate How can I show the temperature range of 2 DS18B20 on the nextion screen. I've done one, but I do not know how to make 2 of them.
http://support.iteadstudio.com/support/discussions/topics/11000009663
CC-MAIN-2018-39
refinedweb
419
78.04
gsm_explode, gsm_implode — GSM 06.10 supplementary functions for testing #include "gsm.h" void gsm_explode(g, frame, xframe) gsm g; gsm_frame frame; gsm_signal xframe[ 76 ]; void gsm_implode(g, xframe, frame) gsm g; gsm_signal xframe[ 76 ]; gsm_frame frame; Gsm is an implementation of the final draft GSM 06.10 standard for full-rate speech transcoding. Test data for implementations of this particular document can be bought and used to verify an implementation. The encoded test data uses a format different from what one would use to transmit frames with the least number of bits. Gsm_explode() and gsm_implode() convert between the internal, small, 33-byte format and the 76-word format used by the test data. gsm_explode() returns -1 if the passed frame is invalid, else 0. Please direct bug reports to [email protected] and [email protected] berlin.de. gsm(3)
http://huge-man-linux.net/man3/gsm_explode.html
CC-MAIN-2017-13
refinedweb
142
67.15
Easily Write Custom Gesture Recognizers for Your Tablet PC Applications Scott Swigart Swigart Consulting, LLC November 2005 Applies to: Microsoft Windows XP Tablet PC Edition Platform SDK 1.7 Summary: Learn how to easily write a custom gesture recognizer with the Simple Gesture Recognition library. Readers should be familiar with the Microsoft Windows XP Tablet PC Edition Platform SDK 1.7. (15 printed pages) Contents Overview Siger - At a Glance Siger Project SigerUnitTests Project InkTest Project Building Custom Recognizers Vector Strings and Regular Expressions The Gesture, the Whole Gesture, and Nothing But the Gesture Comparing Statistics Open and Shut Plugging It In Velocity Matters Flip-Flop Supported Languages Gesture Usability Conclusion Additional Resources Biography Overview Most applications are designed for interaction through a mouse or keyboard. However, when designing an application for Tablet PC, it's critical that you keep pen input and pen interaction at the forefront of your thinking. Though the pen provides direct interaction with the screen, traditional user interface metaphors, such as toolbar buttons and scroll bars, are typically small and sometimes difficult to target with the pen. For this reason, Tablet PC supports gestures. With gestures, specific pen motions can initiate commands. For example, a quick up or down flick of the pen may cause a document to scroll in the indicated direction. A check-mark gesture may select the nearest item, rather than forcing the user to accurately point at and tap in a tiny check box. The Tablet PC SDK ships with the ability to recognize a number of pre-defined application gestures. However, you may want to develop custom gestures that are specific to your application. For example, you might want a question mark gesture to display online Help, or a bracket gesture to group selected application items together. Figure 1. Possible custom gestures I wanted to construct a library that made it extremely simple to add custom gestures to an application. Ideally, a new gesture would only require a few lines of code, and not require the developer to use any complex algorithms or do any image processing. The result is Siger—the Simple Gesture Recognition library. As you will see, with this library you can use simple regular expressions as the basis of your gesture recognition. The Siger solution is available as an open-source project from SourceForge.net. The solution is under the BSD license, so you are free to use and distribute it even in closed-source, commercial applications. The first part of this article will orient you to the Siger project and show you what it includes. The second part of this article will walk you through building custom gesture recognizers for two gestures, the question mark and right bracket, equipping you with the information you need to build your own custom gesture recognizer using Siger. Siger - At a Glance The latest Siger source can be downloaded as a .Zip file from SourceForge.net. When you extract the .Zip file, you will find a root solution file called Siger.sln. When you open the solution file, you will see three projects, SiGeR, SigerUnitTests, and InkTest. Figure 2. Siger solution structure Siger Project The main recognition engine is in the Siger project. This project contains the classes needed to break down an ink stroke, perform analysis, and apply various recognizers to it. To ensure the robustness of the Siger engine, I built sample recognizers that mimic the functionality of some of the Tablet PC built-in recognizers. Specifically, Siger can recognize the check mark, circle, curlicue, double-circle, double-curlicue, square, star, and triangle gestures. In addition, Siger includes a few gestures that are not part of the Tablet PC SDK: the rectangle, question mark, and right-bracket gestures. SigerUnitTests Project The unit testing project runs pre-saved ink strokes through the recognizer engine, checking two important pieces of functionality. First, it ensures that the recognizer correctly detects all shapes of a particular type (for example, the triangle recognizer returns "true" for all the triangle shapes). Second, and just as important, the unit tests ensure that the recognizer returns no false positives. For example, it ensures that the star recognizer doesn't also return "true" for a triangle gesture. In the development of this project, the unit testing was critically important as more and more recognizers were created and the code was refactored. Unit testing made it instantly apparent when a recognizer was not correctly detecting shapes and returned false positives for new shapes. Unit testing also made it apparent when refactoring broke previously working code. Note The unit tests are currently built for use with NUnit, a unit testing framework for all .NET languages. If you want to run the unit tests, you will need to download and install NUnit from. However, NUnit is not needed to build recognizers using the Siger engine or to use the InkTest project. in addition, with the release of Visual Studio 2005 Team System, the Siger unit tests will likely be converted to use the Team System unit testing functionality. InkTest Project When building custom gestures, it's useful to have a test bench where you can test gestures and determine whether they are correctly recognized. The InkTest project is such a test bench. Figure 3. Simple Gesture Recognition Test Bench With InkTest, you can do a number of things. First, when you test a gesture in the top-left pane, the application will attempt to recognize it using the existing Siger recognizers. For comparison, you can make gestures in the lower-left pane and the application will attempt to recognize them using the Tablet PC built-in recognizers. When a gesture is made, statistical information about the gesture appears in the properties pane at the bottom right corner. When creating a recognizer for a new gesture, this statistical information is useful in determining your recognition strategy (in other words, figuring out what makes this gesture different from other gestures). This statistical information will be covered in detail later in this article. In addition, with InkTest, you can load and save gestures from the File menu. This is useful when you've made a gesture that should be recognized, but for some reason is not recognized. You can work on your code and just keep re-loading the gesture until your recognizer is working correctly. You can also use saved gestures as part of the unit testing suite. Building Custom Recognizers Now that you know what functionality Siger makes available to you, you can examine two custom recognizers and see how they use specific gesture characteristics to aid in accurately recognizing each gesture. Vector Strings and Regular Expressions Assume that you want to create an application gesture that will display online Help when the user makes a question mark motion. This is not one of the gestures natively recognized by the Tablet PC, so you must build a custom recognizer. You will see that this is quite simple to do. Writing a recognizer for the actual question mark shape bitmap would be relatively complex. Generally, this involves sending the pixels into a neural network that must be trained to recognize them. This process makes the development of custom gestures overly complex, so Siger takes different approach, by generating a vector string. When a user makes a question mark stroke or gesture, the pen moves along a certain well-known path: Figure 4. Question mark pen path The pen starts off moving generally up (maybe up and to the right, maybe up and to the left, but generally up), then the pen moves over to the right, then down, and then to the left, and finally down again. To get at this directional information, Siger converts the points of the stroke into a series of vectors. Because a regular expression will eventually be used to match the stroke, the vectors are just encoded as a simple string. The following listing is an example of the question mark gesture vector string: Listing 1. Vector string Table 1. Key to vector string In this string, you can see that the first vector in the stroke indicates left-up, followed by a series of vectors indicating up, then right (including right-up, right, and right-down), then down, then left-down, and then down. While recognition of the pixel image would be difficult, recognition of this pattern of strings is easy. To generate the vector string, create an instance of the Siger.StrokeInfo class, and pass a Tablet PC Stroke object to the constructor. StrokeInfo decomposes the ink stroke into a vector string. The StrokeInfo class also exposes an IsMatch function that you can use to determine if the stroke matches a given regular expression pattern. The code to match a question mark is as follows: Listing 2. Code to match a question mark The Vectors class simplifies the process of building the regular expressions. Generally, you are only concerned about matching eight possible pen directions: right, up, left, down, right-up, right-down, left-up, and left-down. Vectors.Ups will match anything that is generally up, so it will match up, right-up, and left-up. Using the Vectors class makes it quick and easy to build the regular expression and makes your code readable. However, if the Vectors class limits you in any way, you can create any arbitrarily complex regular expression and pass it directly to the IsMatch function. The Gesture, the Whole Gesture, and Nothing But the Gesture The previous code will work, but it's not entirely robust. It will match a question mark pattern anywhere in the stroke, so even the following stroke would be matched as a question mark gesture: Figure 5. Gesture that shouldn't match, but does Therefore, it is important to match the entire vector string, not just a portion of it. This, however, introduces another problem. If you look closely at the question mark in Figure 4, you notice that at the bottom of the question mark, the line jags suddenly to the left. It is common for the pen to make little tick marks like this when the pen touches down, or when the pen is lifted up. These ticks must be ignored to correctly recognize the stroke. To facilitate this, a couple of regular expression fragments are available from the Vectors class that match the beginning and the end of the stroke, and trim off any tick marks. With them, the following code does a much better job of matching a question mark. Listing 3. Better question mark matching Notice the inclusion of Vectors.StartTick and Vectors.EndTick, which match the beginning and end of the vector string and trim off the ticks if present. Comparing Statistics In some cases, it's difficult to accurately recognize a gesture using just a regular expression. Consider a square and a circle. For both, the pen moves generally right, down, left, and up. However, when a circle is parsed into the vector string, the vector string will contain many more diagonal vectors (such as right-down, left-up, and so on) than a square would. This is where statistical information, mentioned previously, can improve recognition. The StrokeInfo class exposes additional properties that let you quickly examine these statistical characteristics of the stroke: Table 2. Properties exposed by the StrokeInfo class While a circle and square may both be matched by the same regular expression, a square should have a high value for the Straight property. For a circle, on the other hand, both the Straight and Diagonal properties should be roughly fifty percent. Open and Shut Circles and squares are closed shapes, while question marks and brackets are open shapes. It is easy to determine whether a stroke is closed or open, and this characteristic is another key to achieving accurate gesture recognition. The StartEndProximity property provides the distance between the stroke start point and the stroke end point. If it falls below a certain threshold, which is defined by the CLOSED_PROXIMITY constant, then the shape is considered closed. Listing 4. Ensuring that the stroke is an open shape If StartEndProximity is greater than the constant CLOSED_PROXIMITY, then the shape is considered open. Notice that the code checks for a closed shape before performing the regular expression match. This is intentional, because checking for the closed shape is fast compared to the regular expression match. Also note that "AndAlso" is used to make this a short circuit Boolean evaluation (in C#, this would not be needed, as all Boolean logic is short circuit). In general, do quick comparisons first and use short circuit logic so that the recognizer spends as little time as possible when rejecting a stroke. Plugging It In To determine just which gesture the user made, you will often want to test the stroke with a number of recognizers. Siger contains a number of classes to make this simple. The first step is to create a Recognizer class that inherits the CustomRecognizer base class. Listing 5. Creating a custom recognizer class Public Class Question Inherits CustomGesture Public Sub New() MyClass.New(Nothing) End Sub Public Sub New(ByVal strokeInfo As StrokeInfo) MyBase.New(strokeInfo) Name = "Question" End Sub Protected Overrides Function Recognize() As Boolean Return _ StrokeInfo.StrokeStatistics.StartEndProximity > CLOSED_PROXIMITY _ AndAlso StrokeInfo.IsMatch(Vectors.StartTick & Vectors.Ups & _ Vectors.Rights & Vectors.Downs & Vectors.Lefts & _ Vectors.Downs & Vectors.EndTick) End Function End Class You can see that not much is involved in creating a custom recognizer class. It simply contains two constructors, one which contains no arguments, and another which can be initialized with a StrokeInfo class. You also implement a Recognize function where you put your custom recognition logic. Write this function to return "true" if it recognizes the stroke. To use multiple recognizers against a stroke, create an instance of the SigerRecognizer and load it up with each recognizer, as shown in the following code example: Listing 6. Initializing the SigerRecognizer for specific shapes At the point when the user makes a stroke, simply pass it to the SigerRecognizer and see which custom recognizer indicates a match. For the InkTest application, the matches are shown in a text box. Listing 7. Looking for a match Velocity Matters Now that you've examined the completed recognizer for the question mark gesture, you'll see that building a recognizer for a bracket is slightly more complex. Consider the gestures in Figure 6. Starting from the top, both gestures move generally to the right, then down, and then left. However, one is clearly a bracket, and the other is an arc. Figure 6. Bracket and arc You can use a regular expression to start to recognize the bracket, but you must look at additional information to differentiate it from other shapes. As previously discussed, the StrokeInfo class exposes Straight and Diagonal properties which are beneficial here. Less than ten percent of the vectors in the bracket would be diagonal, but in the arc more than ten percent of the vectors would be diagonal. There's one other non-visible aspect of the stroke that can aid in recognition. To draw or gesture a corner, the user must slow the pen motion. If you attempt to draw or gesture a corner without letting the pen slow down, you end up with an arc. Figure 7 shows graphs of the pen velocity for the bracket and arc: Figure 7. Pen velocity graphs for the bracket and arc respectively When making the bracket gesture, the user moves the pen from the starting point and starts accelerating. However, when it is time to turn the first corner, the pen motion slows dramatically. It speeds away from the first corner, and then screeches almost to a stop as it makes the second corner. It speeds up again, and then slows as the user completes the bracket. The pen velocity for the arc looks very different. The user continues to accelerate the pen motion all the way around the arc, and then slows just before the end. Through the StrokeInfo.StopPoints property, you can determine how many times the pen slowed down. Counting the beginning and the end of the stroke and the two corners in the middle, the bracket should have four stop points. Note The Tablet PC SDK will also let you identify the "cusps" of a stroke. A cusp is the "point on the stroke where the direction of writing changes in a discontinuous fashion." However, I haven't found cusps to be as predictive as the velocity signature, but it's possible that a combination of the two, or a variation on cusps, would yield even higher accuracy. Flip-Flop A user will most likely make the bracket gesture starting from the top left. However, as shown in Figure 8, a bracket gesture can also be made starting at the bottom right. Figure 8. Both are valid brackets You wouldn't want to write custom matching code for every direction a shape could conceivably be drawn, so the base CustomGesture class can automatically rotate and flip the stroke while looking for a match. At this point, the analysis of the bracket has been done, and the recognizer can be coded. The recognition logic is just one (somewhat long) line of code: Listing 8. RightBracket recognizer Protected Overrides Function Recognize() As Boolean Return _ StrokeInfo.StrokeStatistics.StartEndProximity > CLOSED_PROXIMITY _ AndAlso StrokeInfo.StrokeStatistics.StopPoints = 4 _ AndAlso StrokeInfo.StrokeStatistics.Square > 0.9 _ AndAlso StrokeInfo.IsMatch(Vectors.StartTick & Vectors.Rights & _ Vectors.Downs & Vectors.Lefts & Vectors.EndTick, _ 0, False, True) End Function The IsMatch function looks for a stroke that moves right, down, and left. If the stroke doesn't match, the additional arguments to the IsMatch function indicate that the Y-coordinates should be flipped, and the match tried again. This handles the two ways in which the user could make the bracket gesture. The bracket is not a closed shape, and the StartEndProximity property checks this. If the stroke is a bracket, it should have four stop points. Also, the bracket should have a low percentage of diagonal lines. As you can see in Figure 9, this code correctly recognizes the right bracket. Figure 9. Right bracket is recognized Supported Languages There's one more topic worth discussing. The Siger engine is written in Microsoft Visual Basic .NET, but you are free to write recognizers in any .NET language you want. Simply add a reference to the Siger project or assembly, and create a class that inherits CustomRecognizer. Here's the right-bracket recognizer implemented in C#. Listing 9. RightBracket recognizer in C# public class CSRightBracket : CustomGesture { public CSRightBracket() : this(null) {} public CSRightBracket(StrokeInfo si) : base(si) { Name = "CS Right Bracket"; } protected override bool Recognize() { return ( StrokeInfo.StrokeStatistics.StartEndProximity > CLOSED_PROXIMITY && StrokeInfo.StrokeStatistics.StopPoints == 4 && StrokeInfo.StrokeStatistics.Square > 0.9 && StrokeInfo.IsMatch(Vectors.StartTick + Vectors.Rights + Vectors.Downs + Vectors.Lefts + Vectors.EndTick, 0, false, true)); } } Gesture Usability Having a library for recognition is but one aspect of integrating gestures into your application. If you are going to implement custom gestures, you must consider how you'll make them discoverable to your users. There's little point in implementing gestures if your users never know they exist. For the user, it can take a little practice to make accurate gestures. Having the gesture visible may make it easier for the user to realize how to accurately draw your custom shape. Finally, applications that respond to gestures also likely allow ink input. How will your application differentiate between handwriting (that is, ink input) or gesturing? Some applications require the user to hold down the barrel button while gesturing. Others may provide a specific area for gestures. Others may even require that the user explicitly enter a "gesture mode." For more thoughts on gesture usability, see Using Gestures in Tablet PC Applications by Mark Hopkins. Conclusion In this article, you have seen how you can use Siger for custom gesture recognition. The basis for recognition is a simple regular expression. Siger provides additional statistics about the ink stroke, which can improve the recognition accuracy. In most cases, you need only write a few lines of code to recognize a gesture. Again, recognizing gestures is only part of the problem. You need to ensure that they are incorporated in a usable and discoverable way. Feel free to download Siger, use it in any way you want, and freely distribute it as part of your Tablet PC applications. You can also post suggestions or bugs to the SourceForge project page. Additional Resources The following links were mentioned in this article. - You can find the Siger project and download page at. - The article, "Using Gestures in Tablet PC Applications" by Mark Hopkins is available from the MSDN Library. - Information about BSD and GPL open source licensing is available from Wikipedia.org. - NUnit makes a unit testing framework available at NUnit.org. - Find information about Visual Studio 2005 Team System testing tools in the article "Visual Studio 2005 Team System: Enabling Better Software Through Better Testing." Biography Scott Swigart, owner of Swigart Consulting LLC, spends his time helping organizations get the most out of today's technology, while preparing to leverage tomorrow's. In addition to consulting and training, Scott Swigart has authored many articles and books about .NET development. Feel free to contact Scott with any questions or comments at [email protected].
http://msdn.microsoft.com/en-us/library/aa480673(d=printer).aspx
CC-MAIN-2014-42
refinedweb
3,564
54.02
EYKOTITO A TOL. VII.-No. 34. PHILADELPHIA, FRIDAY, FEBRTJAKY 8, 1867. DOUBLE SHEET-THREE CENTS. CIFT CONCERT SWINDLES. Tne North American Kelly Ills Depr tmra from tha City Prospects for the Tlefcet-Holders Adventures of a Gift Concert Man In Memphis. From the Chicago Republican, 6th is(. The end of a gigantic, and because gigantic, populnr swindle, bns been attained. A targe number of too-contldlrig ones hare allowed themselves to become the victims of an indivi dual who, through the medium of certain Jour nals, advertised ills rascality, and by the palna ble falsehoods, published bis estimate of human crednllty. That he has been correct In the eitii mate there can be no doubt, for, after obtaining money under falne protons, be has fled. Mr. A. A. Kelly, manager of the North American OLft Ooueert, has gone to the city of New York, .nd the "North American Prlzo Conocrt" is indefinitely postponed.- It is probable that thla "eoneerl" which was to be Riven has continued aince the Inception of the swindle, and has been Blmply a concerfrof action betweeu A. A, Kelly and some unknown individuals nnder the title f Co., each performer of the troupe playing upon "a harp of a thousand strings." Hie "headquarters" of this lottery. No. 105 Randolph street, has been converted into a boot and shoe store, where, by permlsMiou of the proprietors, some fnsoinating yon n if man, or charming young woman, is still to preside at a def k for the sole of tickets. It is possible that we may hereafter be called upon to Rive to this unknown firm a gratuitous notice. Another tlesk has been allowed to the agents of Kelly, at Mos. El and 86 La balle street. 11 may not he an inappropriate curiosity on the part of some ticket-holders that prompts the inquiry as to how many persons constitute the "in" ni f nv & CO. This generous firm offered the magnificent sum ef $500,000 in prizes, and admission to a "grand concert," to live hundred thousand In dividuals who would give to them the sum of H each the entire expense of advertising, clerk hire, etc., to be paid out or the pockets of the firm. Of course, none but fools could be Induced .to believe such palpable lies, but even fools Should be protected from the cunning of rogues. The Republican denounced this swindle as ttoou as our notice was called to it by a corres pondent. The fellow Kelly became frightened .when we suggested an Investigation of his affairs by the police, and speedily made ar rangements to leave the city, but he need not bave made such haste; the police are too busy in looking after degraded women to soent the track of a first-class and prosperous knave. The attention of the police was called to Kelly about ten days ago, and that unnecessarily alarmed individual only loll the city on Monday night. Hot his agents arestill here. What will be done -with them? ADVENTURES OF A OIFT EXTETtmiSE MAN. The Memphis Avalanche contains the follow ing in reference to Wiggins, of Wiggins, Brad ford Co., who figured in Chicago a year ago in connection with a arlft swindle: Just at this juncture In our affairs, it appears :tbat Memphis is looked upon by the grand army of swindlers (of the republic) as the centre of attraction the safety spot ol the country Jor criminal refugees. We have an instance of it. When the Crosby Opera House was in the full tide of success, some two months ago. an ingenious and enterprising Yankee, suiting nnder the not uncommon appellation of Wig pins, started a grand gift enterprise scheme. That the Crosby Opera House might not be In jured by his competition, and to distinguish .' the two enterprises, he concluded he would terra his darling pet the Crosby Gift Enterprise, and located it in unicago. After making all the necessary arrangements, ench as printing tickets, securing offices and agencies, he "set tho ball in motion." Like a far-seeing man, he advertised his scheme far and near, and about three weeks after opening, sixteen clerks were employed where the ener- fetlo Wiggins at first sufficed to do the work, n that short space of time Wiggins' receipts amounted to over one hnndred and fifty thou sand dollars, and, strange to say. he was allowed to work all this time, and accumulate this im mense sum without detection. Finally the whole affair leaked out, and with it the in domitable Wiggins took his departure lor parts unknown, leaving his clerks without ill pi i i3iy The next thing we hear of Wiggins ho is in our midst here in Memphis. After he lelt Chicago, it appears he selected Indiana as his travelling around. He proceeded leisurely through tho (State undisturbed, reached Kvaus vllle, boarded a packet, and landed in Mem phis. He secured rooms at the Worsham House, otrolled through the city, viewed Its improve ments, deigned to comment upon Its future, and even spoke of Investing in various under takings. As instinctively as a cat sits by a rat hole, patiently watching the coming of its in tended victim. Just so did Stonebraker aud ' Jiarnes await and closely watch the movements of their victim poor Wiggins. It should bo known that Stonebraker and Barnes were both M. P. detectives, With these detectives aogging his footsteps, poor Wiggins was in a bad fix Indeed, it seems that llorton, of "diamond" notoriety, gave the cue of the matter to Stone- iialrAi. liorton was from Chlcano, and knew Wig gins; professed to be his friend, and in this wuy threw suspicious Wiggins off bis guard. He introduced Stonebraker to WiKgins, and Wig gins and Stonebraker "Broiled," and dined and supped and slept together. Wiggins bore the expenses. In a few days eternal friendship was mutually sworn. Matters weut on in this way lor soma time, until one evening Wiggins, who had beeu drinking pretty freely, and Hlonebraker and Barries, retired together to Wiggins' room in the Worsham House. After conversing on several subjects, btonebraker tamed to Wiggins, and, with a knowing look, said: "Wiggins, old" boy, you made a good thing out of that Chicago trick." Wiggins looked straight, and Inquired of Btonebraker what he meant. "Oh!" replied Stonebraker, "you know there's no use in disguising it we know all about it." Again Mr. Wiggins put on a look of astonishment, and asserted that he could not comprehend his meaning. Stonebraker could not, or at least had no de Hire to retrain from expressing more fully his meaning and told Wigglus more about his (Wiggins') affair in Chicago than Wiggins knew himself, accompanying his remarks by asking Wiggins to walk with him to the stution bouse. Wiggins was thunderstruck. Wiggins , -was dumbfounded. Wiggins waited. Stone braker and Barnes laid hold of Wiggins, I when lie with a truly SUKKbuve iuiuu, u.eu u mora ' could be no compromise. Over this proposition there was some discussion. Wiegins finally agreed to disgorge liberally. And lie did, to the tune of twenty tUousund dollars, in bills of the denomination of one thousand dollars each. uoti miprt ilmt thev received a much lamer M JVA"V - J Bum.? Of course, Wiggins was released, stone braker and Barnes then left I he city, and have xiot since been heard from. Wiggins then left the city departed for Texas. After Wiggins left the city, the matter leaked out and reached the ears of Sheriff Winters, lie ferreted out the matter, ascertained the whereabouts of Wiegius. and telegraphed to the Chief of Police of Chicago, Inquiring as to what course he should purbue in the matter, Having that he would send Wiggins back to rhicauo. if desired. He received a reply in return, stating that they did not want him "previous to fliis, however, Sheriff Winters learned that Wiggins had deposited lu the i?iV iS.(JI,.hI Bank of this city about 29.0O0. This sum he instantly attached, by serving the 4riiDPon Mr. Havis. President of the bank. It appears that in depositing it Wiggins gave Instructions that the entue umuuiiii luvoou i Kpven-thlrties. We understand, also, that Turtles in thisciiy who invested in Wiggins' rar"5?.i ...hJa mve also attached the money. Orleans, rlty who represented Himself a. the agent of WlKKins; and who threatened to make certain exposures among higu oiuciaia u' .u uv tJne. Thus the matter rests tor the present. A Hum Anchor. The largest anchor in the '.i.otMn feet six inches: tread ot arms, Mvenfoet four inches. The anchor has been proved, and found tg stand the strain, of one bondrcd tons. nmniiii) for thefireat Eastern I WWa.UM".r.r.T n. wpiht. U what 1 bene at woivernampiuu, - " -: ," i j feitfor my fiVht tons, exclusive of the.stock; length of the I ' 'llcltude. in V - f,.t oi inrthpa. lenertll Oi WOOd I u v,n.,a THE NAMELESS CRIME. Extraordinary Statement of Joel I.lds ley, the Clergyman who Beat his Son neath-Tvro aa a Half Hoars' Whipping The Defendant Admit "He vu Hot Angry." Etc. From the Rochenler Union and Advertitrr, Feb 6. That I should make this statement, I feel Is due both to myself and to the public. I have long waited for this opportunity, but lu view of the legal part of the matter, and the injunc tions of my counsel, my month taas been closed, and I have suffered lu silence. Had I been called before the coroner's jury to my own statement of the case, it would have saved me a great deal ofpaln, and the public much false impression. But this was not the case, aud the few words-1 spoke in respect to the matter were spoken to Dr. Chamberlain in a room alone, with' no one present but him self and me, aud the communication was but a verbal one. My little boy lost his mother when about one year and a half Old, a lady whom I loved intensely, and who loved me with all the tenderness an devotion of a woman's nature. After the death of my wife, tny affections were drawn more closely around my child, audi loved him most tenderly. He was a noblo, manly, and beautiful child, vtry affectionate in his disposition and bright ln his intellect. His father and friends looked upon him with pride and hope, aud I should not have been smithed to have had him abseat from mo lor a week. My little boy had a won derfully firm will, enough for an adult, but, there wasnothlugmaliclous in it. 1 do not speak of thisosanylhing ogalust him, but, on theolluir band, 1 consider il to have been a God-giveu talent of the highest importance hud it, boen rightly trained. On tho night of his dcutli, his mother had taken him to another room to put him to bed. My little boy hnd been trained by his own mother, durlne her life, to bu put to bed by himself alone, as on account of her feeble health she was not able to do as mothers ordinarily would, and he would do it just as cheerfully and happily as thouch put to bl in any other way. In this inslauce my child re fused to obey my wife, and she was striving to secure obedlenco. I supposed a word from me would be sufficient, as he very seldom refused to obey me. But it was not. We questioned whether it was best to try and force obedience P whether It were not best to drop the matter and consider about it. I llnally decluud, on. my own responsibility, to punish him; not that I cured so much for the particular thing he was to do, but I felt It was important to secure tho habit of - obedience. I anticipated no serious conflict at nil. The instrument J used was a niece of A shinele taken from some old shingles used for kindlings, thirty years old, or move, and tho wood I should think was hemlock, cor taiuly not hard wood. My impression Is that it might have been an inch and ouo-q lavtcr wide, possibly, but I cannot state with precise accuracy the width of .it. It very likely was less, as was stated in court. It was not a thick . shingle; nothing like a club about It. It occurred to be that obedi ence would be secured more readily by it than by using the hand, if it were necessary to use corporeal punishment at all. I recollect distinctly of feeling whon I commenced that I would lather the blows would fall upon myseir than upon my boy. As I continued to puulsli him, stopping at very frequent intervals aud talking to him, I felt convinced ttiat he knew what I -wished him to do, as his reply to my question, "Why will you not do it, Johnny? l'a is sorry Johnny will not mind," was, "1 do not wish to. I wish to do something else," stopping crying at the same time. I had undertaken to secure obedience, and 1 felt that I must accom plish it. It will be said that I erred ln com mencing at that hour, when my child was tirod by the pluv of the day. I did err in so doing, but, as 1 have said, I had no idea of any serious conflict at all. If I had hud I should not have commenced. As I went on I recollect that my anxiety be came very Intense that my boy should yield. Had the point been yielded I should have felt as though a mouutain had been removed from my breast. As to whether he understood what was required of him. probably most will think that he did not. I at the time believed that he did. Under this great anxiety, with my mind fixed on the necessity of my boy's yielding for his own future good, and expecting at every moment that he would yield. I was not aware of the extent to which 1 had gone, or of the effects 1 was producing upon my child. I suppose my detective eyesight had some thing to do in the case, by preventing my seeing the actual effects 1 was producing; but the main cause, I believe, was groat anxiety, and mv mind fixed upon its being neces sary for the good of my child that he should yield. I wbb not angry or in a passion in all this. I wish to state this fully and without re serve, that, if I know anything of the action of my mind, I was not angry. 1 never could have punished my child for an hour, or half an hour, or anything like it, in anger. Were it sol should have lelt very guilty, aud to have done so would have been monstrous. But never for one hour or one moment have I been conscious of any Criminal intent in this case. No true man or woman will wish me to belle my convictions in this matter, lio not misunderstand me here I have been greatly misrepresented upon this point. I consider the act to have been wrong, and very wrong; but as to a criminal Inteut there was none ln ny form. 1 greatly mis judged, and the fearful consequences, in the loss of my child, have come upou me with a crushing weight. What! might have said to the Coroueras to the length of time. I do not remember, but I think I have usually stated the time as from two to two and a half hours; and I should think it was ciuile us probable that I was not more than two hours, as that it was two and a hair. The Coroner testifies that I punished the child until 1 saw signs of weakness. Tills Is a mis take; I do not wish to provoke a controversy with the Coroner, nor am I accusing hiiuof intentional misrepresentation; but it is a mis take. 1 made a statement to Dr. Crawford on the same day, and he will testify to the truth of what I buy, though It could not be brought into court. I sloppod punishment because I felt that it wus useless to go further. I took my little boy and laid him upou the settee aud covered him up. At the time I ceased the pun ishment, at the time I luid him upon the settee, aud for some time afterwards, 1 saw no slgus that the child had oeen injured, xie uau iaiK.eu durimr this time, and appeared natural. Theu there was a change, aud soon afterwards the child died. Then it was I bezau to realize my fearful loss, and the terrible paug the circum stances of the case gave to it; then it was I felt id not loun after to my father, that I could have given my own Uiefor my darling hnv A word as to why the limbs and arms were so extensively discolored. It arose from tills, thai I aimed not to repeat the blows in tho same place. I carefully avoided the vital parts of the body. There were no blows at all upon the body of my child. I am not aware that 1 struck the hend at all with the instrument used, nor with anything else. I think the slight marks must have been produced accidentally, as they might have been. Perhaps I ought to say something in view of the conviction and sentence passed upon me. It may be inferred from what! have said, be lieving as I do that a former sickness had to do with this matter, a sickness the (severity aud particulars of whicu couia ie but imperfectly presented in courts, being for years under the most poworiui vouim buu.uuicuuus uuiiy, sickness from which my physical constitution never hua recovered and never can. 11 would Im Kt.rnnfA indeed, if my mind hud the same vigor which it hud before. My own Judgment Is. tliat hud it not been for this sickness and subsequent norvousprostratlon, more or less connected,! being so for months after I came home near four years ago, that l could not look into a newspaper, that this thing could never have occurreu. uui, is mi uie con vletion: If I did not know that I wusnot anitrv If I felt that from a hard, unfeeling temoer towards my child, I recklessly hazarded his life or health; if I felt that I was determined to con mir lilm at all hazards, come life or death, f would suffer silently, tmdfeel that I deserved to aiiflVr! but If I was actuated bv the highest mo tives m an v man s nature, in imm poijui miun veu to oe outy; n it was me very iove raid wnicn causea tuui. intense lelt after 1 saw Wie result that I iclven my own life for my child. then you may Judge how I felt in view of the sentence. The Judee says the majesty of the law must be tiustained. 1 had always supposed that human criminal law uiuct Ve founded I upon the law of Go', and derived from it alone Its force. Does i -od hold a man responsible for anything rnoro than wilful wrong-doing in the present or ln the past, or for a negloct to do right arising from a culpa ble indifference to the right or a preference for thewrone? Is crime to bo determined by the consequences of an act or by the intent of the heart? Can yon make crime without you can provesomounlawfuland criminal intent in some form? Can you make d ime of an error ln judgment, though it be a very serious one? But the Judge says tin majesty of the law must be sustained. 1 cannot S'o the point. Mich un application. Instead of sustaining the majesty of the law, in my opin'onsaps its very foundation. I feel that my home has been desolated by the death oi a dearly loved child. The consequences of my own act herncome upon me with crushing weight in th ut loss which many could never have lived through. 1 frankly confess my great error in thh'. My family has been broken np. My property 1ms been sweptaway by the neces sary expenses of my trial, and I greatly fear that the life of my wife is at least Jeopardized ln her present feeble and critical state of health. If you ask me what 1 think of the sentence aud conviction, ln view of all this, I must say that I leel it to be unjust and unworthy the code of an enlightened Christian nation. In saying this I do not at all impugn the lutcgrlty of the Judge or jury who sat upon my trial. While my repu tation is as precious to me as to any man, and I am us keenly alive to the Bood or ill opinion of my lcllow-men as any m.iu, yet I feel that chnrcicter is inllnltely more Important than reputation; and this has been my only support, my hope in Col in all my trouble, that truth could sooner or later bear sway In the minds of my fellow-men. .Ioki. Likosi.ky. T H E SO U T H. Rewards for Assassinations In Ten nessee 45000 Offered for Killing Colo nel "VV. 11. Stokes. The Nashville Press of the 4th has the fol lowing relative to a conspiracy which has been entered into , by certaio Kcbel citizen of Can non, White, and Warren counties to procure the afRcBination of a number of leading and influ ential Union men, all late otlicers of the Fede ral army, residing in the Third Congressional District. . The following, sa,y9 the Pre, are the names of' the persons marked out lov death, and the sums offered lor making way with tiiera: For Colonel W. B. tokes, $."0(M: lor Colonel Black burn, $4000; for Colonel Pleuoure. $4DH0; lor Captain Vanatla, $2000; for Captain Hatha way, $2000. K Rebel citizen livms near Alexumlria, in De Kalb county, has offered to give $1000 for the assa-sinfltion ot Captain Vanatta. It is under stood that three notorious cutthroats, who were once members of Champ Ferguson's tram? of bushwhackers, have been employed to do the work. Several Union citizens, tearful of their lives, have tied to Alexandria lor safety, where a number of discharged Federal soldiers, formerly belonging to Colonel Stokes' cavalry, rpsuieu. , Philadelphia Trade Keport. Fkmiay, February 8. The Flour Market con tinues excenslvoly dull, there being no demand, except from the home consumers, who operate with great caution, only purchasing enough to supply their immediate wants. Sales of a few bundl ed barrels, including superfine, at 888'75; extras at S910'50: Northwestern extra famllylat $11($12'S0; Pennsylvania and Ohio do. do. at$ll75 13-50; and fancy brands at iU'SO&W'HO, accord ing toquallty. Kye Flour is selling- la a small way at 7. Nothing doing in Corn Meal. There is a steady demand for good aud prime . Wheat, but in consequence of the limited receipts nnd stocks the transactions' are small. 8nlesof iJOO bush. Pennsylvania red at $3-15, and. 2(iU0 bush, white on private terms. Itye Is quiet. We quote at Sl-ilTK'i IMS. Corn Is ln good demand at yesterday's figures: sales of 8tH)0 bush, new yellow at il-lfcjU'Kj. for Pennsylvania, and 90c. for Southern; a small lot of white sold at 97c. Oats are not much inquired after; sales at 575Sc. Prices of liarley aud Malt are nominal. Nothing doing in Whisky, and prices are nominal. Count Philip Sepur, whose account of the disastrous Russian campaign of lbl2 charmed so menv readers nearlvnftv years aso. has nor. at the advanced age oi eignty-seven years, just completed a history of Napoleon I and his con temporaries. Active in a very different way ana in an nummer way is an oiu iany ui aniens. Maine, who is now in ueMioist year, naa, in tnis- stason, spun ana twisted a iar!?e quantity oi cotton yarn. The same State can boast ol an other old lady, apred eignty-seven years, wno has spun this season so far four hundred skeins of yarn, averaging from eight to ten skeins a day. . There have been important discoveries ot gold and silver in Carleton county, Minnesota. Recent usas of specimens show that the veins are rich enough toanoid large returns to those who will work them. It is anticipated that there will be a irreat rush ot capitalists ana other-) to these mines in the spring. The mines are one hundred aud twenty-five miles above bt. Paul. Bv a vote of tho Wept. Viminia Legislature, it has'been decided that Moreantown. in Monon ealia county, is to be the uw capital of that State. Heretofore the seat of goverunieot has been at Wheeling, in the extreme northwest comer of West Virginia. Morgantown is situated on the Monongahela river. The initiatory steps have been taken in Ten nessee to erect amagnincent monument over the remains oi the lamented General Patrick Cleburne. Large sums have already been sub scribed, and it is confidently believed that $25,000 will be raised. Southern Paper. H. F. Janes, who founded the city of Jaues Yllle; Wisconsin, in 18:!C, soon afterwards emi grated to the Pacific coast, and now writes to the Janesville Gazette, saying that, although he is sixty-three years of aae, he has never yet seen a railroad or a teiegrapn. Mr. Abraham Flavill, of Newark, ' N. J., havine prophesied that the end of the world would be last Thursday, his fellow-sectarians are after him. Most of them think the event will take place within the first six months. M. Nicoliades, a Orcek, has published ln Purls a work which undertakes to prove that all the editors have been wrong in the "venue'' of some of the most important scenes in the "IlJiad." The Mayor of Lynchburg, Va., haviu? been caught riding oa the sidewalk of that town, was reportea to tue mayor, no msi rn.-u.iu me evv dence, and lined himself one dollar. Carme, the French billiard player, has been in New Orleans two years, and can only speak two English words, "Scratch" ana "cocttaus." The Duke ot Hamilton has broken up his model farm in Scotland, and sold the stock. Cause, tmpecunioslty. Major H. T. Duncan, of Kentucky, has sold bis large Bourbon county estate to an English company. Mr. Riddle, ot Boauoke county, Ya., made 4000 gallons of excellent wine from his vineyard last year. Thomas S. Lang, of North Vassolbourg, Me., has refused $40,000 for his famous horse, Gene ral Knox. 1 Jules Favre is a candidate for membership in the French Institute, made vacant by the death of Cousin. ' J. C. Breckinridge received one vote In the Kentucky Legislature lor United States Senator. Francis II (Bomblno), or King of Naples, has finally determined to withdraw his repre sentatives from the Papal Court. Earl Rufcecll Is at Cannes, on a visit to Lord Srvughaui. . . i THIRD EDITION EUROPE. BY THE CABLES. Important Address from Napoleon. England and Spain Have a Word. The Confederate Bondholders Again. Xiuancinl nnd Commercial IVcavh oi To-tltiy. Etc., Etc.. Kte.( Etc.. Etc.f Etc England. London, February 7 Evening. Lord Stanley stated in Parliament that the British Govern ment has protested against the delays and illegal proceedings of Spain In the case of the British eliip Tornado. London, February 7 Nion. The holders of Confederate bonda in this country have united in a petition to back their claims. movements of Steamers. Liverpool, February 7 Evening. The steom- ship Queen, which leit New York on the 20tii ult., arrived here this afternoon. France. Paris, February 7 Evening. Prince Napo leon will be Director of the Paris Exposition. The Emperor Napoleon, in his address on the assembling of the Corps Legislatif, will announce the final disposition and close of the Eastern and Mexican questions. Commercial and Financial Intelligence, London, February 7 Eveuing. Consols for money advanced i, and closed at 00 15-16. American securities closed as follows: U. S. 5- 'J0s, 72 11-16; Erie Railroad shares, 40h Illinois Central, 80. Frankfort, February 7 Evening U. S. 6- 20g closed this evening at 76J. Paris, February 7 Evening. TJ. S. 5-20s advanced j. London, February 8 Noon. Consols, 90$ for money. United States 5'20s, 72 13-16; Illinois aoj; urie itauroad, 40. Liverpool, February 7 Evening1. Cotton closed dull, with sales to-day of T000 bales. Middling Uplands declined tol4114.Jd. Liverpool. February 8 Noon. The Brokers' Circular reports the total sales of Cotton for the week ending last eveninz at 43,000 bales. The market has a declining tendency, and has declined fully 4d. during the week. Tbe market to-day is unchanged, with pros pective sales ot about 1000 bales. Middling Uplands, Ajd. Tho markets for Breadstuffs and Provisions are steady, and without quotable change. Liverpool, February 7 Evening. American tallow is salable at 4445s. Spirits of Turpen pine quiet but firm at 37s. 6d. FROM WASHINGTON THIS AFTERNOON. SPECIAL despatches to evening telegraph. Washington, February 8. Tax on Distilled Liquors. There is considerable agitation amongst liquor dealers on the subject of the proposed reduction of the tax on dlstillod spirits "from two dollars to one dollar per gallon, which was recommended by Hon. David A. Wells, Spe cial Commisioner of , tho Revenue, ln his report to Congress, through the Secretary of the Trea sury, at the opening of this session. Remon strances are pouring in upon tho House Com- MlllaA 4 U'ftl.O orwl Hfnnnn I I .1 f spirits, against tbe reduction. It Is represented by them thut over thirty millions of gallons of spirits are held in stock by dealers ot all kinds, on which the loss will amount to as many mil lions of dollars. They allege, moreover, that this measure will throw Immense profits Into the hands of parties who will secure controlling powers in me mancet from the reduction; tho very parties who, after securing control of the bulk of marketable whisky ln 1805, infiuencad congress 10 increase tne tax Irom one dollar and a half . to two dollars per gallon, and theu pocketed the immense gain thus derived. Salt Against the Secretary of War. Hon. Henry fitanbery. Attorney-General. and W. V. Kendall, Ksq.,as counsel for Secre tary Utuuton, In the suit pending against him before the .Supreme Court of this District, have riled amended pleas. It will be recollected that illiam T. srulthson, formerly in the baukiug business lu this city, was arrested during the war by the military authorities, and was tried by military commission and found guilty of giving aid and information to the enemy, and sentenced to the Albany Penitentiary, but was subsequently pardoned. For his arrest and im prisonment lie now claims damages to the amount of 850,000. The defense haa previously put in a plea of not guilty, and having- obtained leave, yester day filed amended pleas. The first special plea is that the plaintiff ought not to have or main tain his action, because before and after the time of the supposed trespass, viz., from April IS, 1801, to Juno 1, 1S83, sundry Rebels, insur gents and traitors against the Government and authority oi the United States, were levy ing aud making war on tho Government ol the United Htates, and to the end of sub verting the Government thereof, maintained large armies to besiege, capture, and destroy tills city, one of the lortiUed cities of the United States, in which was situated the head quarters of the armies of the United States; aud that he, the said ismithson, at the head quarters of the army, while the armies were be sieging the city, did act us a spy contrary to the Articles of War, and ave the enemies of the Government information under the fictitious name of Charles 11. Cables, and encouraged them to assault the garrison of this city, and Imparted the information that Johnson (mean ing Andrew Johnson, of Tennessee) was here entering into au arrangement for the employ, mentofmen to go to Tennessee and Kentucky to burn bridges, destroy maohlne shops, etc.; that at the time said alleged assault and impri sonment was made the defendant was (Secre tary of War, and that at the time he believed thst he (Smlthsou) was engaged as aforesaid. The second special pleats that he ought not to maintain his action as to breaking into aud entering his (plaintiff's) dwelling, etc., because under tue uiu section oi uie act of July 17, is2 authority Is given to the President of the United Htates to seize and appropriate to the useoltlie armies of the United States certalu property, among others that of persons giving aid tothu enemy, etc. That, acting under the order of the President of the United (States, he (the defendant) did, on the SOlh of June, lti. cause plaintiffs property to be seized and ap plied as quarters for olHcers and a home for soldiers' wives and children. The pica further states that the District Attorney libelled the said property on the 2d of January, IStil, uuder the Contiscatlou act, and a decree was made to sell the property in the Supreme Court of the District of Columbia, but on the 22d day of Oo toher following the decree wns vacated because of the production of a, decree of trust by J. L. lidwards and Charles Wilson, trustees. The third plea is that under the act of March 3. 1803. the production of any order of the Presi dent of tbe United States shall be a defense in any suit or action as to seizures made during the war, for Imprisonment, etc.; and that the "supposed trespass he (defoudant) committed as Secretary of War was by the orderand under the authority of the President of the United States and during the Itebeiliou." and he there fore prays juaeuioat, etc, HEAVY SAFE IIOBDERY. Escape of the Thieves. PPSC1AL DESPATCH TO THB EVKUBQ TttJMJAPH.j Harhisburo, February 8. The fire-proof safe of the Duncannon Iron Works Company, at Dnncannon, Pa., was opened by burglars last night, and robbed of over ten thousand dollars in greenbacks. This mouey had been drawn from the bank to pay off the employes of the Company to-day. The burglars made their escape, but the authorities are on their trail. The safe wns a large 'LUlie safe, inside of a fire-proof vault, with three feet walls, and was bored with a drill. The contents stolen are valued at over thirteen thousand dollars, in small greenbacks, and the following North Pennsylvania 10 per cent, coupon bonds: No. 12:), 11000; Nos. Mi, ,09, and 710, each I'jOO. , LEGAL INTELLIGENCE. Court of Qnarter Sessions Judge Lud low. Mary A. White was acquitted of a charue of false pretenses. The allegation of the Com monwealth was that in 1801 an advertisement appeared ln the Ledger for the sale of .one hun dred acres of land at Si per acre; tbe laud was described as rich valley land, situate in Centie county, within a few moments' walk of. a rail road. The name signed to the advertisement was Mary A. White, No. ViA North Twelfth Street. Mrs. Mary Work wns induced by this ndvertlsemeut to buy the land, paying down the money and getting a deed for the property. Shortly after the purchase she went to Centre country to look at the laud, but when she arrived there she found no such land as was described by tbe advertisement, and her land in Mifllin county on tho top of a mountain, and owned by another party. The Common wealth claimed tht Mrs. Work had been Induced by that advertisement to part with her money, and ln return she receivod neither the land she bargained for nor any other. The defense was that the advertisement re ferred to bv the prosecution described exactly another piece of land that Mrs. White bad for sale at the time. That Mrs. Work did not fmrchnse this land, but another tract that lay n Centre county, before the division of that county, and afterwards fell in Mllflln county. The Jury, in rendering a verdict of not guilty, placed the costs upon the defendant. Chauncey Johnson was charged with entering a bank with intent to steal. Un last Tuenday Johnson went into the Philadelphia liauk, and was observed stauding at the wire guard uea the desk of the paying toller. After remaining there a few moments he turned around and walked hastily towards the front door. The Teller noticed a one dollar bill between the wires, as ir it had been pulled through by some one on the outside of the guards, aud suspect ing the defendant, hnd him arrested before be had reached the street. Johnson was taken into the Cashier's room, when he said that he was a stranger ln the city, and had come Into the bank out of mere curiosity, with no intent to do wrong. He told the Cashier that he was in bad health, and would be much worse if imprisoned; and that he had an aged mother and a sister ln Brooklyn; and for these reasons begged him not to send him to prison. The Cashier asked him what his business was. and he answered. "That of Rambling; I am a gambler." Cashier "Do von consider eamblluu a reDUtable bust ness? Johnson "All men gamble: I am. not able to gamble ln stocks, and therefore resort to earns." ... -The Cashier sent the defendant to an Alder man, who committed him to prison.. There was a theft of S100 ln tbe Bank that day. There were packages of f 100 each, put up in one and two dollar bills, and were held together by a hand, upon which was marked the amount of tne pacaage. wneu jonnson was arresieu, a torn band was found, with the mark $100. And outside was found a bunch of notes crumpled up, where they had been thrown away. On trial. District Court Judge Sharswood. Stinger vs. Lenler. An action to test tbe right of a water-course, and to recover damages sustained by plaintiff iu having his land frequently over run with water thrown back by a dam built by defendant. On trial. Court of Common Plea Judge Ludlow. Dilmore vs. Dilmore. iielore reported. On trial Supremo Court Chief Justice Woodward and Justices Thompson, Strong, and Agnew, Crouse vs. Crowd. Argued. FINANCE AND COMMERCE. Office of the Evening Teleoraph, Friday, February 8, 1867. f The Stock Market opened very dull this morn ing, and prices were weak and unsettled. Gov ernment bonds were firmlv held. 105 wns bid for old 6-20s; 108 lor 6a of 1881; 105 for 7'30s; and 100 j for 10-40s. City loans were unchanged; the new issue sold at 108 j. Railroad shares were Inactive. Reading sold at 62, no chanse; Camden and Am boy at 13H, no change; Pennsylvania Railroad at 57J, no change; and Mint-hill at 6(J, no change. 33 was bid for Little Schuylkill ;61i for Noma town; 35 for North Pennsylvania; 68i for Le hieh Valley; 30 for Elmtra common: 40 for pre ferred do. ; 29J for Catawiusa preferred ; 30 for Philadelphia and Erie; and 40 for Northern Central. City Pasenger Railroad shares were un changed. Spruce and Pine sold at 31. 05 was bid lor Tenth and Eleventh; 20 tor Thirteenth and Fifteenth; 48$ for Chesnut and Walnut; 72 for West Philadelphia; 14 for Ilestonville; and 12 for Ridge Avenue. Bank Bhareswere firmlrheld ot lull prices. North America sold at 234, and Mechanics' at 33. 140 was bid for First National; 153 for Philadelphia; 136 for Farmers' and Mechanics'; 66 for Commercial; 68 for Penn Township; 55J for Girard; 67 for City; 41 for Consolidation; and 58$ for Commonwealth. In Canal shares there was very little move ment. Lehieh Navigation sold at 61J, no change; 22 was bid tor Schuylkill Navigation common, 32j for preferred do.; 119i for Morris Canal preferred; 12 lor Susquehanna Canal: 541 for Delaware Division; und 63 for Wyoming Valley Canal. Quotations of Gold 10J A. M., 137 J: 11 A. M.. 137: 12 M.. 1384: 1 P. M.. 1384, an advance of J on the closing price last eventug. rillLADELFIM STOCK EXCHANGE SALES TO DAY Keported by Deuaveo fe Bro., No. 40 8. Third street BEFORE BOARDS. 200 sh Fulton Coal..-.. M lw sli Ct A cr 4 FIRST BOAKD. $111X1 Citv6s.New...is..l0U', iltlOO New Jersey 6a....HW'4 iiootf Union Cl Bs........ ai . :ipou do Is. K 2(HI0CA Am 's,'3... hkhi PaR iidm tie V! lluuo N Peunaos JO 1 ah Hk o! N A 2; 1 all Mech Bit 87 sh Cum & A m...ls-1-Jl !7hIi Penua KK...IS. 'a 2i ill Head R.. 200 ' do. loo do.... 4i0 Uo lui) do 2oo do Is. 0' ..sown. Hi .bainc. 01 .... btfO. 82 ,...! C. 62 6su Miueimi MS,' liriHli JeU V scr Is. soo uh Ucean Oil...:w. 2' D00 do 1b. 2;, 200 sll Sp fc l'iue....la.. 31 Messrs. WilUam Painter Co., banker, No. 3G South Third street, report the following rates of exchange to-day at 12 o'clock: U.S. 6s, 1881, coupon, 108i108 5 V. 8. t6-20a, coupon, 1862, Compounds, December, 1864, 14i14i. Messrs. De Haven & Brother, No. 40 South Third street, report the . following rates of ex change to-day at 1 P. M.: American gold, 1374 ri139 iSilver esands, 132; Compound Interest Notes, June. 1864, 174; do.. July, 1864, 16 J; do., August, 1864, 164; do., October.1864, 154; do., December, 1864, 144; do., May, 1865, 12; do., August, 1865, 11; do., September, 1865, 10J; do. October, 1865, 10. 1 Patting Out a Pire. Durimr the process of ex tinguishing tbe Are in the colliery of Clackman nan, near Stirling, England, In 1851, .about 8 000,000 cubic feet of carbonic acid gas were required to fill the mine, and a continuous stream of impure carbonio acid was kept up night and day for about three weeks. The mine extended over a surface of tweut v-six acres, and &ad been thirty jeers on fire. f?rl07i: do. new, iuuiBvu, i-v3, wupuu, ivi Si : U. 6. 7-308, 1st series, 105C!106i; 3ft oa spries. 105105J; 3d series. 105al05i: . jc r.. I III .il., IJlil. THE AFRICAN. His Fidelity and Heroism in tho Rebellion. A Lecture Delivered at National Hall Last Eveningrby William Wells Brown, with borne Account of the Lec turer, Etc. SPECIAL BEPORT TOR TUB XVKMIMO TELEORAPH. 1 The fifth lecture of the course before the Social, Civil, and Statistical Association of tbe Colored People of Philadelphia was delivered at National Ball, last evening, by William Well Brown, a colored orator of considerable culture, who has gained for himself a reputa tion as an author as well as a public speaker. Inasmuch as the story of bis lite presents a con vincing proof of the negro's capacity for cultare and refinement, we herewith present it to the public. Sketch of 'William W. Brown. Mr. Brown lea native of Lexington. Kentucky, and is said to be a Grandson, on his mother's fide, at the great, backwoodsman, Daniel Boone. His father was a member ot tne Wickliife family. and be can thus boast ot some ot the best and mo?t aristocrutic blood ot Kentucky, as well as Of that of the despised and down-trodden race witn wnicn ne is notasDameatoatunate. until the age of nine his life passed in slavish mono tony on the plantation of Dr. John Young, a gentleman ot consideraDie standing; in the com munity, and a most precise member of the Church.. At nine years of age, as he informs us in a sketch of bis hie prefixed to one of his works, he was one nitrbt "out to soak, and after that was Bcraped, scrubbed, washed, and dried;" and then, having exchanged his simple bag of a dress for a lull suit of tow linen, he was duly promoted from . "tbe quarters" to the "big house," to become tbe attendant of a nephew of nis masier, men oui iwo years oia. lhree years aiterwarcu the entire family re moved to Central Missouri, where Dr. Toung, wno naa lost most ot nts property oy misman agement, resumed the practice of medicine. hile residing here be was on one occasion severely flogsed because a visitor mistook hlrr lor a legitimate son of his muster. Dr. Ifoung subseqaentJy removed to St. Louis. and put several of his slaves out to Work. Young William was among the number, his lot being: cast as an attendant upon the work men in the office of the St. Louis Times, which was edited by Elijah P. Loveiov. tho brother of the late Owen Lovejoy, who was assassinated Dy a frantic pro-slavery moo ln Missouri some years after. William was Buttered to remain here but a short time, and whs then consigned to the tender mercies of a slave-trader by the name of Walker, whom he describes as a "heart li'ss, cruel, ungodly mac, who neither loved his Maker nor feared Satan." He remained one year with this wretch, beholding scenes which, Independently of his own condition of servi tude, thoroughly disgusted htm with the pecu liar institution. lie was then hired out as an under-steward on the st?araer Patriot, running between St. Louis and New Orleans. Here he led quite an exciting lile, vailegated at one time Dy an explosion, in whicn nineteen lives were lost. While ln ,uis service, youn William, picked up a great deal of information concern ing the iree states ard Canadas, from the edu cated travellers with whom he came in contact, and by his intercourse with whom he became thoroughly possessed with a yearning for free dom. In company with his mother he made an attempt at escape, but without suc cess; and then his ownership was trans ferred, first to a merchant tailor, and subsequently to a Captain Price, of the steamer Chester. He then made another at tempt' to escape. Leaving the steamer on which he had been at work, he started north ward, travelling by night and hiding away by day, not daring to appal to any one for food or shelter. After many days and nights ot ex- ?iosure in winter, he was at last forced to apply or assistance to an old Quaker gentleman by the name of Wells Brown. By this kind-hearted man be was sheltered for a fortnight, and when he left liis house he took with him his family name, the nrst he had ever possessed. He spent the remainder of the winter In Cleveland, and in the following spring obtained a situation on board a Lake Erie steamer. While thus employed he assisted many fugitives from slavery in their escape to Canada, in one year alone participating in no less than sixty of these hazardous adventures. The winters at this time were passed ln Buffalo. Meantime Mr. Brown had been picking np what he could In the way of an education,' and in the autumn of 1843 he commenced his career as a lecturer, by invitation of the Western New 1 ork Anti-Slavery Society. In this occupation ue u us conunuea untu tne present time, ln 1849 he was elected a delegate to the Peace Con gress at Paris, and visited" the Old World nnder the auspices of several prominent English abo litionists. On the adjournment of the Peaco Congress, he returned to Great Britain, and during the live years that he remained abroad, h wrote and published three books, lectured in every considerable town in the three king doms, and visited the Continent four times. Since his return to this country Mr. Brown has been engaged ln lecturing, princi pally in New England and the Western States, ln giving dramatic readings, and in. writing books and dramas. Of the latter two created something of a Bensation, being entitled "The Escape," published in 1858; and "The Backbone Drama," a burlesque on the Rev. Dr. Adams' "South-Side View of Slavery," first published ln 1859. r Mr. Brown is undoubtedly the most volumi nous writer the colored race has thus far pro duced in this country, and two of nis works have met with a very flattering success. "Sketches of Places and People Abroad," which was published ln Boston by J. P. Jewett & Co., in 1865, has passed through five editions of a thousand copies each, Of his next important work "The Black Man, his Antecedents, his Genius, and his Achievements;" Boston, Robert F. Wallcut, 1863 at least eighteen thousand copies have been sold. Messrs. Lee & Shepard, of Boston, have just published a fresh volume by Mr. Brown, bearing the following title: "The Negro in the American Rebellion: his neroinm and his Fidelity," which is a valuable addition to the current history of the great civil war. Mr. Brown's Lecture Last Bvenlng, was listened to with profound attention, and with frequent outbursts of applause, by a large audience in which the two races were about equally represented. The speaker was intro duced by Mr. William Still, Chairman of the Lecture Committee, and proceeded to address the audiencd as follows: At the breaking out of the Rebellion there were four millions of slaves in the South, and four hundred thousand free colored persons in the Northern States. The soolal and political condition of the latter at that time was any thing but encouraging. They were disfran chised in fourteen out of the nineteen free States, and a withering and a blighting prejij dice existed against them in every State in tlJ Union. They had no rights that white were bound to respect. . . The defiant and determined attitude of e South, at me close of toiEucluwuis rixW xml | txt
http://chroniclingamerica.loc.gov/lccn/sn83025925/1867-02-08/ed-1/seq-1/ocr/
CC-MAIN-2014-52
refinedweb
8,412
71.95
Business::EDI - Top level class for generating U.N. EDI interchange objects and subobjects. use Business::EDI; my $edi = Business::EDI-new('d09b'); # set the EDI spec version my $rtc = $edi->codelist('ResponseTypeCode', $json) or die "Unrecognized code!"; printf "EDI response type: %s - %s (%s)\n", $rtc->code, $rtc->label, $rtc->value; my $msg = Business::EDI::Message->new($ordrsp) or die "Failed Message constructor"; foreach ($msg->xpath('line_detail/all_LIN') { ($_->part(7143) || '') eq 'EN' or next; print $_->part(7140)->value, "\n"; # print all the 13-digit (EN) ISBNs } The focus of functionality is to provide object based access to EDI messages and subelements. At present, the EDI input processed by Business::EDI objects is JSON from the edi4r ruby library, and there is no EDI output beyond the perl objects themselves. When you use Business::EDI; the following package namespaces are also loaded: Business::EDI::Segment_group Business::EDI::Message That's why the example message constructor in SYNOPSIS would succeed without having done use Business::EDI::Message; Everything depends on the spec. That means you have to have declared a spec version before you can create or parse a given chunk of data. The exception is a whole EDI message, because each message declares its spec version internally. EDI has a hierachical specification defining data. From top to bottom, it includes: This module handles messages and everything below, but not (yet) communications. Much more documentation needed here... Constructor Get/set accessor for the value of the field. The string code designating this node's type. The code is what is what the spec uses to refer to the object's definition. For example, a composite "C504", segment "RFF", data element "7140", etc. Don't be confused when dealing with CodeList objects. Calling code() gets you the 4-character code of the CodeList field, NOT what that CodeList is currently set to. For that use value(). English description of the element. This method returns strings that can be fed to part() like: foreach ($x->part_keys) { something($x->part($_)) } This is similar to doing: foreach (keys %x) { something($x{$_}) } In this way an object can be exhaustively, recursively parsed without further knowledge of it. Returns subelement(s) of the object. The key can reference any subobject allowed by the spec. If the subobject is repeatable, then prepending "all_" to the key will return an array of all such subobjects. This is the safest and most comprehensive approach. Using part($key) without "all_" to retrieve when there is only one $key subobject will succeed. Using part($key) without "all_" to retrieve when there are multiple $key subobjects will FAIL. Since that difference is only dependent on data, you should always use "all_" when dealing with a repeatable field (or xpath, see below). Examples: my $qty = $detail->part('QTY'); # FAILURE PRONE! my @qtys = $detail->part('all_QTY'); # OK! $path can traverse multiple depths in representation via one call. For example: $message->xpath('all_SG26/all_QTY/6063') is like this function foo(): sub foo { my @x; for my $sg ($message->part->('all_SG26') { for ($sg->part('all_QTY') { push @x, $->part('6063'); } } return @x; } The xpath version is much nicer! However this is nowhere near as fully featured as W3C xpath for XML. This is more like a multple-depth part(). Examples: my @obj_1154 = $message->xpath('line_detail/SG31/RFF/C506/1154'); Returns value(s) instead of object(s). Examples: 'ORDRSP' eq $ordrsp->xpath_value('UNH/S009/0065') or die "Wrong Message Type!"; This code is experimental. EDI is a big spec with many revisions. At the lower levels, all data elements, codelists, composites and segments from the most recent spec (D09B) are present. Business::EDI::Spec edi4r - Joe Atzberger
http://search.cpan.org/~joeatz/Business-EDI-0.05/lib/Business/EDI.pm
CC-MAIN-2015-48
refinedweb
612
57.57
Page Two AGED MISER FOUND ' STARVED 10 DEATH f IN ip SHOP Carl Kroll, Cobbler Recluse, Be lieved To Possess For tune of $75,000 POLICE ARE STANDING GUARD OVER PREMISES Will Permit No One In While Coroner Searches For Wealth; He Finds Much Cash - Bslisvsd to bo tbs possessor of a fortune toUmated at between $60.0u0 and $76,000, contained in deeds, bortf&XM and cash, secreted in draw art, cheat* and various nooks and crannies of his shoe shop at No. 75t* Beventeenih-st. Carl Kroll, a recluse, V 0 years old, was found sitting erect tt hit cobbler stand, dead, Tuesday Horning. Coroner Burgees helloes that the need man starved himself 4> death, refusing to purchase food with his hoarded wealth. A. It Hunter, of NO. 661 FYmrteanth sve., one of the few in tuna te acquaint ances of the old man, jiiscoveaed the body when he made his regular morn ing call, to take care of KroU, bho had been in ill-health for some time, but always stoutly refused to oonsult a physician. Hunter found KroU sit ting erect, with hands folded on his lap, and did not know that he was dead until he attempted to aaouse him. There was evidences of heart dis ease, but the coroner also found in dications of malnutrition. Boater notified the police, and rumors that the aged man was a miser caused the police to place a guard about the premises. ▲bout $260 In cash was found by the coroner, each bill carefully folded up and tucked Into a crevice, drawer or some other receptacle. Kroll owned the property In which he lived, and also bad the deeds to four other pieces of property in the same block. The coroner declares that Kroll slept on a filthy old pallet, and had few of the bare necessities of life. His wile died, sevyal years ago, and though the old man is said to have a step-brother in Delray, and a brother in Youngstown, 0., he lived all alone, had few friends, and spent most of his time working over his little oobbler stand. Coroner Burgess will mases fur ther Investigation of the case, and the police guard 'around the premises will be maintained until the coroner makes a thorough search for hidden treasures. N. P, A/S ANNUAL MEETING IN CHICAGO JUNE 24-26 The next annual meeting of then's, tional Press association, formerly^the National Editorial association, will be held Ml Chicago, June 24 to 26 next, with headquarters at the Sherman house. The sessions of the association will be conducted In four departments, ss follows: First —Department of newspaper and Job printing, to Include the coat sys tem, under the direction of J. Clyde Os wald, publisher of the American Print er. Second —Department of the daily newspapers. In charge of T. B. Hall, of Jamestown. N. Y. Third—Department of the weakig newspaper, In charge of Ovid Ball, of Fulton. Mo. Fourth—Department of loarnaMstlc education, in charge of Walter Will iams, dean of the sohool of Journalism of the University of Missouri- It lb expected that a number es men Rrominent in newspaper work will de ver addressee during the sessions A printers’ supply exposition will be open during the three days she asso ciation Is In seeaton- Following the meeting a seven-days' trip prUl be taken through South Da kota during which all the scenic at tractions of that state. Including wind cave, said to be thq greatest and most woadarful cave In tne United States wjll be visited. From the time the special train enters South Dakota un til it leaves the state not a cent of ex pense will be incurred by anyone ex cept the Pullman fares, which wll be nominal. Under the reorganisation plan every member of every regularly organized press association in the United States is entitled to membership in the Na tional association upon payment of a membership fee of $2. Address W. F. ParrOt, secretary, Waterloo, lowa ADMITS ABUSING GIRL ON ANOTHER OCCASION Joseph Jacques, 55 years old. plead ed not guilty when arraigned. Tuesday afternoon, before Justice Jeffries on a charge of abusing Vlolst Stulz, 11- year-old girl, living st No. 125 Adams ave. east, who was attacked in her bedroom by a man, early Saturday morning. His examination was set for May t end bail was fixed at SSOO, two sureties Jaoques, who has 10 children, the youngest 13 years old. and who room ed in the Stuls home, was taken before Prosecutor Shepherd to make a state ment prior to going to court. He ad mitted being very fond of Violet Stulz and sold that he had bought her shoes and token her to the theater, last Fri day night He emphatically denied, 1 * however, that he was her assailant, but acknowledged subjecting her to In dignities on former occasions. He, said he was particularly fond of little girls, but had no time for women. Csrssir Frtfcw Draft. Coroner Rothacher la investigating the death of Mrs. Klale Tomm*.-w*kl, *7 yeera old. who died in her home. No. #T7 Mitchell-a ve.. at S o'rlock Tuesday morning, following the birth of a child, liar husband. Adam, aaya that two dor* tor* attended his wife at 4 o'clock In Lthe morning; that they gave her chloro form and left her under the Influence of the drug at T o'clock; that he called on or>n of them a little later, when lie saw ghe waa sinking, and that ha re fused to respond, saying he wax rest |ng The child also dle<f NKU YORK VO\KV. t NEW YORK. April 30 -Money on rail- * per cent. Time money: 3VW i'ji pgr cent for six months. Bar stl- London. 19 J-l« pence; New York (IV Demand sterling: |4 16 4 MH vx «;•- Tte ~ Child Gets Death Injuries When He Ignites His Dress M tth Alatche s PONTIAC, Mich., April 30.—Arthur Molow, aged two and a half years, la dead ss the result of burns sustained wheu he Isnlicd his dress a hi'** playing with matches. Kiiday. He was a son of Mr. aud Mrs. William Melow, of hsrmmgum. and securad the matches wheu his mother stepped to a neighbors home lor a few moments. His body was a mass of blisters Policeman Must Pay $250 For Arresting Woman Without Warrant in Her Home Judgment for $250 against John Wlshnewskl. a policeman, was re turned by Justice Command. Tuesday, iu a suit for damages brought uy Celina Csvkowski. t The plaintiff charged false arrest, olaimiug that the officer entered her home and compelled her to accompany him to the police sutiou, where she was closeted with three detective* and questioned for hours until she be came hysterical. This action of the policeman was taken, she said, upon the complaint of a hoarder, who uiet the officer on the street and told hhn •he had stolen $lO. 4W Justice Command held that the officer had no right to enter the woman s home and arrest her without a warrant, when he saw no crime committed. After the woman's arrest tt was shown that she did not steal the money. State Politic* Congressman McMorran. who is vis- Iting his home In Port Huron this week, has announced that he will uot be a candidate for re-election and that he will not endorse any of the men who have declared themselves candi dates for the Republican congres sional nomination in the Seventh dis trict. t&lward Frensdorf, of Hudson, chair man of the Woodrow Wilson league, said in an interview at Kalamazoo, that there la uo argument against the instruction of delegates to a state convention providing there la any merit In the primary. "When the primary is used," he •aid, "it means all manner of manipu lation of delegates is at an end; the delegates selected must represent the majority and are bound to vote as directed by the primary.” The Elkton Review recently pub lished a strong endorsement of State Treasurer Sleeper for the Republican nomination for governor and it was reprinted bv the Sanilac County 1 ar mer. In It Mr. Sleeper is credited with bringing the state treasury out of "the slough of disreputable . and disgusting disgrace and placing it where it belongs” and it is predicted that he would do the same for the en tire state administration if placed at its head. Representative John Kalmbach is reported to be a candidate for the Re publican nomination for prosecutor of Washtenaw county against Prosecutor E. K. Iceland, who is a candidate for renomlnatlon. Referring to the candidacy of State Senator Carl Mapes for the Republi can nomination for congressman in the Fifth district, the Lansing State Journal compliments Senator Mapes on his having been an excellent floor leader for Progressive legislation And It adda that there is great need in congress for men who are not only able and honest but willing to give industrious attention to serious and practical legislation.” Speaking from the convention city with the echoes of the riotous Repub lican state convention still reverberat ing, The Bay City Tribune declares "it Is useless to disguise the fact that the breach between the Roosevelt and Taft wings of the Republican party haa widened to the pplnt threatens the defeat of the party. ’ Circuit Judge Nelson W. Sharpe, of the 34th circuit, who is being boomed In the Tenth congressional district for appointment as successor to U. ff. District Judge Angell, at Detroit, has presided over his circuit for 20 yeais Many times he has had no opposition at the elections. His circuit is the largest in the state comprising the counties of Arenac. Crawford. Glad win. Ogemaw, Otsego and Roscom-< sum. As Massachusetts goes, so will the Republics® national convention go. ac cording to the opinion of Congress man H. O. Young, of the Twelfth dis trict "If Taft wins in Massachusetts be will win the nomination; other wise he will not,” said Mr. Young to the Calumet News. Woodrow Wilson leaders In Wayne county are beginning to take comfort out of their recent losing fight to cap-, ture the Democratic county conven tion by analyzing the campaign made by the Democratic county organiza tion against them. While the Wilson men showed up at the convention claiming only about 50 delegates, the Harmon-Clarlt opposition as represent ed by the county and city organiza tions had been working tooth and nail two and a half weeks to Insure an uninstructed delegation. Why such an absolutely complete machine should work so hard against the new and incomplete Wilson organization Is the question that has aet the Wilson men thinking Where the voters have had a fair chance to express themselves they have shown a preponderance of two to one for Roosevelt. Where the bosses have managed the conventions they have returned a delegation prac tically unanimous for Taft. It Is not altogigher what Taft says he stands for, It Is not what Roosevelt declares he favors altogether, fn deciding It Is often necessary to see who Is back ing up the two candidates and judge what their statements mean by the friends they attract to their support. Omega Oil Rheumatism and Lumbago Usually one or two rubbings with this wonderful Oil will give relief. Xrul bools ioc. , Urge bottles *ou THE DETROIT TIMES: WEDNESDAY. MAT?!, 1B1*!? Neither Taft nor Roosevelt are bad men. Both mean to do the right thing. The question Is. will the country be better off to go ahead by the slow going methods of Taft, or to go ahead by the more rapid methods of Roose velt? — Hillsdale Daily. Senator Smith of Michigan favors the renominatiou of President Taft, but it is hard to understand why he should do so. He is and has been against him in every prominent feature of his administration. • • • Mr. Smith opposed President Taft as to reciprocity, tariff regulation by a commission, the arbitration treaties, aud Central Amerlcau control. He has not favored even one of Mr. Taft's pet projects but has opposed every one of them. Yet Mr. Smith favors Jeopardizing Republican success by renominating Mr. Taft. This incon sistency may doubtless be accounted for by Mr. Smith s acting upon the be lief, which some montits ago was gen eral, that Michigan was for Taft. He finds himself committed to the Taft candidacy but his state admittedly for Roosevelt by a prodigious majority. Depend upon It. Mr. Smith realizes the awkwardness and the danger of the situation and is ardently wishing he had taken his customary perch on the fence. —Allegan Gazette. EXTRA JUDGE WILL SIT FOR TWO MONTHS The sum of $1,300 was allowed by the ways and means committee of the hoard of county supervisors. Monday afternoon, to pay for the services of an extra circuit court Judge to assist the Wayne circuit bench during May and June. This is made necessary by the great increase in business. Judge Tucker, of Mt. Clemens, who has assisted the local Judges ou previous occasions, has been written to. and if he can make arrangements he will be asked to take the position. Sheriff Gaston's request for ten motor cycles with which to put a stop to joy riding and speeding by testers on county roads, was referred back to the county auditors for a recommends tion. W. E. Duperow has besh appointed general agent ot the passsnger de partment of the Grand Trunk Pacific railway and coast steamships and Grand Trunk Railway system, with office in Vancouver, B. C. Previously he has occupied various positions in !he passenger and transportation de partments of the Grand Trunk lines iu eastern Canada. , The 1 Ebbitt House WASHINGTON, D. C. No matter what you came to Washington for—business or pleasure—The Ebbitt is most centrally located to everywhere. Recently re modeled, refurnished and redecorated throughout thoroughly modern in every feature. Rooms, single or en suite, with or without bath. RATES! A atari** a Plaa—S3 fa fS par day. Eara»MS Plaa—«l.M ta 94 par day. » 0. F. SCHOTT, Proprietor. SWI COUPON- TOM Bm2 1 gftkj ffl SaHithta (W/ m ‘ HOW TO GET THIS BOOK Desiring to render a great educational service to Its readers, The Detroit Times has arranged with Mr. Haakln to handle, WITHOUT PROFIT TO ITSELF, the exclusive output ot his valuable book (or Detroit. Cut the above coupon from six consecutive Issues of The Detroit Times and pressnt tham with 50 cents to cover the bare cost of manufacture, freight and handling, ana a copy will he presented to you without additional cost. Bear In mind that this hook has been most carefully written, that every chapter In It is vouched for by an authority; that It Is Illustrated from photographs taken especially for It; that It Is written In large, clear type on fine book papor and bound In heavv cloth In an attractive, durable manner. A 9) VALUE FOR BOc. Act quickly if you want a copy. ~ Save six consecutive coupons and prssent thsm at The Detroit Times Office No 13-15 John R -St. EACH BOOK BY MAIL 15c EXTRA for POSTAGE Belle Isle Casino Open FOR THE .SEASON. With its Famous Fish *9 CT Dlftf A and Chicken Dinner C Jv •MIC Service on the First Floor. PAUL KOLBE, Proprietor. MAYOR THOMPSON SITS 111 RITE COOED HAVE SEEN KEPT UNDER 1911FIGORE Executive Says He Sees Many Places To Apply Knife But Keeps Silent WAS IGNORED LAST YEAR Says Council and Estimators Must Take Blame For Bumper Budget "It is a serious reflection on the city,” said Mayor Thompaou. Tuesday morning. In las first public utterance ou the work of the board of estimates iu allowing a budget which will mean a tax rate of more than S2O as against the present rate of $18.15. "1 do not wish to appeaj as knoc k lug the board or the council in this matter, but they will certainly have to take the responsibility (or a higher tax rate. "A year ago 1 sent a special mes sage to the board urging economy and a reduction in the tax rate. 1 went to great pains u> get details re garding city funds for the board, and I conferred with the estimators day after day. I labored In behalf of the workingman, who w - ants the benefit of a low tax rate., I would get the promise of a committee to cut out a certain big item one night, and the next night when I wasn't present the committee would reinstate the item. And so It went all through the Dro ceedings. Why, it was eveu said by tome around town that Inasmuch as I had declared myself for a low tax rate and was elected after pledging myself to obtain a low tax rate, the board would ‘fix’ me. It was said that i the board would see to it that 1 got no political advantage by obtaining a low tax rate for the city. "This year I considered the matter very seriously. The treatment and consideration accorded me in my fight for lower taxes last year were not very encouraging. 1 did not want any quarrel with the estimators so I kept out. But the people know where I stand on the subject. 1 have gone on iecord repeatedly and my work last year bears witness to what I have done to obtain a low tax rate. If the estimators see fit to allow appropria tions which mean an addition of $2 or more to the tax rate that is their look cut, and they will have to stand re sponsible for it before the citizens of Detroit. There can be no shifting of responsibility in this matter, no point, ing to the mayor and asking what he has done to obtain a reduction in the tax rate. 'The fact is that the more you give some city departments the more they can spend. When appropriations are once allowed that is the end of it; the money is spent and the following years larger appropriations are asked for. "I sincerely believe that the tax rate this year could not only have been kept down to $18.15. but actually |gii! T^tSfflp^Y For Particular* Coaault Ageata. SUNDAY EXCURSIONS NAY sth. 1912 lfosnmCmKß OK ION aad retura 9 .SS CAMO aad refura I.SS SAGINAW aad velars I.SS HAY CITY aad retura 1.80 Special trala laaxea Detroit. Third Street Mtatloa. 7tO» A. Woodward Axe.. TilS A. >l. AV\ AHHOH aad retaru..., f .MO Jl< K*o\ aad retura 1.00 GR AM) K Al'lDs aad retara IT.OO M pedal trala leaxe* Detroit 7ilS A. M. City Ticket Offlcc, X*. 1 Opera Hauae Block. Station, Foot of Third Street. Ta CIIDfIDC ociCAn steamship 10 LuHUrL TICKETS on .tha prln clpal steamship lines at tariff rates, sold at HIRACHFEI.D BROS.' TICKET OPTICS. T 1 GRIS. WOLD- ST. (Eatraaea on Larued-ef.) mad** lowsr than that, without da* creasing the efficiency of any us tha departments. Consider that there waa tu ItcthUi of rnora than $43,300 000 In the assessed valuation of the city ami with all that put on the property cwnera of the city they are asked to I ay $2 more a thousand than at pres ent. The record of the board of esti mates speaks for itself. Auy business tnau or citizen can figure out for him self what was done. “The increase will lie largely felt ty the small tazpayera. who can hard* |> afford more money for such pur poses. “I could take Item after Item In the budget and point out where reductions could have been made without injur ing the city or hampering the work of any department, but I do not wiyh to quarrel with the estimators at this time. They are a law unto themselves and their actions speak for them >* Ivea. TEN PERSONSSWEPT TO DEATH IN TORNADO GILBERT. U.: April 30—Ten per sons were killed and a score Injured tu a tornado sweeping over three par ishes of Louisiana early today. Scores of buildings were wrecked, resulting in heavy property damage. The dead Include Sidney Roaa. 10, killed at Gilbert. In Franklin parish parish G L Ross was crushed to death under a falilug building. Eight men working in the fields were killed. THE DAY’S SERVICE ON A BIG LINE On the early morning cars—those cars that take the last of the night hawks home—arc always to be found conductors and motormen riding as passengers. They are the day men of the company starting for the car houses to begin their day’s work, and they begin their labors in many cases two hours before the factory employes take off their coats. • • • • At 4130 a. m. the day operation of the Woodward line begins, when all cars available are rushed into the service. The procession of cars—practically all free from passengers start ing from the big car house in Highland Park, proceed to the foot of Woodward avenue or to the Michigan Central depot and, there turning, begin the outbound trip, taking on numbers of passengers at the Campus. From Grand River avenue north these cars continue picking up a steady stream of workers coming from the side streets and. in addition, a large number of transfer passengers at the intersections of the Crosstown lines at Forest and Warren avenues. This practically completes the loading, while the unloading commences at Amsterdam avenue, with the result that all the workers have been distributed among the several plants in time for work at 6:30 a. m. These workers disposed of, the cars return to the heart of the city, carrying those begin ning work in*the down-town section between 6:30 and 9:30 a. m.—factory hands, clerks, students and those of the professions. Returning to the car house, the “trippers” retire from service and from then on the patrons of the day cars are the shoppers, with a considerable noon and matinee increase above normal. All are in the city’s center when the evening rush hour opens up at 4145 p. m., lasting for slightly over an hour. * * * * ' When the evening rush hour demand for service comes it is not by any means confined to the northward movement as was the case a few years ago. There is an almost equally great southward movement from the large manufacturing plants far out the avenue so that it is necessary to have cars assembled at the north end in which return these factory workers back to the Crosstown intersections and to the center of the city for distribution out the radiating lines around the city hall. This crush is somewhat eased by means of special cars provided for the employes of the Cadillac and Ford motor car companies. • • • • The course of travel on the Woodward line as given above is typical of that prevailing on the other lines, except that the normal day travel on the Woodward line is more constant as well as greater than on any of the others. The traffic on the Jefferson line requires cars to pull out of the Jefferson car house as early as 4:98 a. m. and proceed clear to the westerly limits of the city on Grand River avenue for the purpose of transporting passengers to the extreme easterly limits of the city on Jefferson avenue. The Michigan-Gratiot-Mack line is subject to great irregularity of service owing to the fact that some tracks in the congested district are largely used by cars of other lines. The Crosstown line is used largely as a transfei line to and from 14 other lines. The Fort line service is made irregular on account of numerous j;rade crossings with steam road tracks It is the experience of the company that the great majority of patrons of the various lines time themselves so as to allow just a sufficient number of minutes to reach the scenes of their labor should there be no interruption of service. While this is a flattering commendation to the regularity of service given by the company, a sufficient margin for possible delay should be allowed. • • A rush for the last car for work and a rush for the first car after the day's work is done produce crowding and confusion. There would be much less of this crowding and confusion were there a greater variation in the hours in which manufacturing plants opened and closed. * * * # Next Tuesday in this paper and in this space will appear, “Mektng Schedules for Street Cers. Detroit United Railway WHAT WE WANT What everybody wants, what everybody ha* been brought up to expect, is equal justice tor all, administered without tear or favor. AND WHAT DO WE GET? If you have any doubts, just read “BIG BUSINESS AND THE BENCH” in Everybody’s Magazine for May. Follow Mr. Connolly as he piles up the evidence — cool—clear—and straight. Its cumulative effect is overwhelming. Without heat or malice he drives home the conclusion that, if present tendencies are left uncurbed, the final destruction of liberty r.self may follow. Read it now. Get EVERYBODY’S MAGAZINE 15 Cents on all Nswe-stands $1.60 a Year THE RIDGWAY COMPANY, PUBLISHERS. NEW YORK. P. 3.—And in spite of it alt* we are not sure that we believe in the recall of the Judiciary. * The greatest single article ever in Everybody’s SHME-UP 111 POSTOFFICE STAFF IS COMM SOON Reorganization, With Economy as the Motive, Will Result in Fewer Officials Postmaster General Hitchcocks cam paign for economy has struck Detroit in the form of rumors of a reorganisa tion of the local poaloffice staff to take effect in a mouth or tlx weeks Two superintendents. It is sgid, will take the place of five now bolding office, and there is much speculation as to the identity of the two men to be placed in charge. Rumors agree pret ty well upou J. C. Hudson, now in charge of the money order branch, as the uew superintendent of finance, aud tor the new auperlnteudency of mailt, both Oliver Burke, uow superintend ent of delivery, and C. C. Kellogg, sup erintendent of the registry branch have been mentioned. According to the rumors, K. U Fuller, present super intendent of malls, and C. P. Niles, superintendent of second-class matter, will probably become assistants to the new superintendent of malls. Postmaster Homer Warren declines tc discuss the proposed reorganization. He admitted that luspeetort bad been worklug here bui declared that their import had not been seut to Waahinu t, n Other federal officials declai>'i that the report of the Inspectors had not been completed. The inspectors began work here, Jau. g. and tt is said will be her** an other week or*two. There have beon nine *d them worklug In Detroit ut different times, II A. Htrohtu. a. A Macltwal* K IV Howe, M H. Case. K A Mackev, >' J Gould. J. A. Niles. H. liadsel. and H K Ballard. All are connected with the Chicago dlvlslou of the poeiolfloe department Reorganisation such as is reported to t>e proposed for Detroit has been effected tu s autubtr of other cities, netnbly tu Toledo, where, it Is said, the change did not meet popular favor, and in Grand Rapids and Cincinnati. MRS. CADIEHX WILL APPEAL DIVORCE CASE Mrs. Rmeltta Cadleuz, who wus sev -oi*gl weeks ago denied a dlvorc e by Judge Hosntsr from her husband, Ur, Henry Cadleux. one of the best-known physicians In the city, with an office la the fashionable Pasadena apart ments on Jefferson ave.. has filed her appeal bond, and will carry the case to the supreme court. Mrs. t udleux charged non-support und cruelty, al leging that the doctor lost his earn lugs betting on the horses. xml | txt
https://chroniclingamerica.loc.gov/lccn/sn83016689/1912-05-01/ed-1/seq-2/ocr/
CC-MAIN-2021-17
refinedweb
4,698
69.11
- display users nearest city - Limiting link insertion from MySQL - paging alternative - Who knows how: dynamic php script to rip CB product salespage to your website - Get G search query for dynamic tracking id using PHP - How to copy scripts? - Looking for a program that does... - Use Curl to import MySQL database? - cURL socks5 proxy checker class times out when handling more than 3 ip's - EzyPal Hacking - Random username and password - Twitter: remote tweets say "From xxxx" - Looking to display random Video but don't know how - How To Save Settings and Call That Value Again? - Can I use echo to place data in a certain spot - php redirect not working in FF - Import xml feed into php script. - Help me decode this - Where To Start ? - Need user SERP query redirect php script - Need a good php programmer - quick easy project: will pay - is there a php script that grabs megavideo url's and plays them in divx? - Why dosn't this php script work? - How to drop a cookie using php ? - A database with testimonials - Trouble with WordPress "wp_insert_post()" function - My code to create an article spinner in PHP EASY - Joomla Vulernanility through com_jumi - [PHP/HELP] Submit, save and go to. - comment on youtube vidoes - I dont have idea how to solve title - Need help with stupid PHP include issue please? - Anyone know how decode this? have a try ... - Find .htaccess in wordpress - Best auto Classified script? - $5 to anyone who can decrypt this WP theme - Help With Parse Error - Need Search Script Help Using PHP & MySQL - Does creating ghost traffic help with SEO - PHP Help Please: Popup When Visitor Exits Website? - Need help with Curl And proxy - Help: thumbs don't appear on tube script - Any ********** autoposters? - Using proxy with perl IEAutomation - Upload new footer question. - How to Scrape an RSS feed? - Cron job for Stuffers Anonymous Image Stuffer - Please Help me to decode this "eval" crap! - Movizdb.com /Rapidshare scripts - Execute DMR script - I Need A Bit of Help with a PERL Script - Gossip Girl Site Script - Curl: Object Moved Error..?? - Wildcards in php help - How can We upload a Video file and play a video file online - Need php help Remove images by size - online football/soccer/fantasy football/soccer scripts? - Quick Php Upload Question - Need a php programmer, will pay for services - Is there scraper script like this?? - Simple PHP Scripts for Affiliate Type Stuff - Help with a simple paypal affiliate script. - How To Force File Download Of A .php File? - How can I run PHP scripts from windows & set cronjobs for them? - Need to combine 2 PHP Scripts Included With Explanation - Help me Please: problem with random video in php script - Need help to integrate paypal to my script - Help Me Get This POST Data Script Working - How to remove "_" and change "1B" to "1 B" in a verb? - PHP help please - Help me Please: How to add adsense in php script - Can mod-Rewrite append a string in front of a URL and encode the entire thing? - PHP, to store record into Database but check for records first - Perl and using proxys for SMTP - Attn: Please modify plugin for bhw bloggers!!! - URL Rotator Script needed - Need some help with my site. - Change IP in PHP - Browser comptibilioty issuse - need a decent php programmer - quick project - help me with a script - Need help with Joomla main menu position**all messed up** - A Idea Is this Possible?? - How to increase the post and comment date in a MYSQL database - php vs python for bh apps - Database search - Anyway to cheat a PHP Poker game? - [PHP]FullRss - I need advice! - geoip script maxmind help - redirect URL in iFrame - Defining Custom WordPress Link Structure - How to do this - using curl to execute ajax - Any Ideas How To Link A Category And Slug Field Together? - Need PHP help - mod_rewrite help - Affiliate url rotator help - [GET] Script For Searching For Aged PR Domains - Passing data from URL to page - Need Someone To Crack - decrypting php - i need help with this BADLY - Craigslist Captcha not loading when using a bot - Referer Access Control Script - How to Display each record on a row of a table - Google Image Scraper script - Could anybody write a simple timer thing? - NEED help with mplayer in php - php: pingback / trackback? - How to Use $_GET in mysql_query() funcion? - breaking the iframe - How to Spread out My Clicks Evenly to a set of Links? - Uploadjockey type script - Maxadd.com or omgadds.com Clone Script? - need to hide my IP address + referer - get Capatcha back to user then submits it --PHP-- - installing a perl script on a web server - Gmail login use curl/php - Modding FirstRSS plugin for wordpress to display maximum # of items - Form submit multiple times - wordpress ping - Cool Script To Password Protect A Single PHP or HTML File - what is the best way to null wp plugin? - Anybody have fsrevolution code decoded? Im looking into the technology - I built a google scraper that returns the number of results for keywords - Can someone help me with this script? - PHP Form to Iframe - image rewrites with .htaccess - is it even possible? - CPA referral thoughts.... - Start your own Gmail, Yahoo mail, Rediff mail - PHP, iMacros and threading? - mysql = lame-o-vision - Captcha loop - Delayed Autoresponder PHP Script? - Replicated Websites - How to crack a captcha - Slow progress but progress nonetheless (Craigslist) - I need help pple - POST help - Need help on my script - Mod rewrite help needed - In need of more PHP help..again - Craigslist PHP/cURL poster - Simple visitor survey in PHP? - php MySQL Error help. - Replicated Website - PHP redirect - Easy Question???? Who Knows This Answer - Help with Decaptcher - Some php help over here please.... - scraping ask.com - Hide Script Protection in public classes - help with this php email attachment script! - phpbb signup submit to another signup page - Any PHP'ers out there? I have a little question. - *Need help with this php problem!* - Take a screenshot with curl - how do i run some php coding? - How to use cookie to rotate affiliate ads links? - HostGator + FWrite = Permission Errors? - Need sql Help - PHP Question - Location script - Scraping Ezinearticles - You Are Now Leaving Site Script - Creating Twitter accounts - Need PHP coder for few lines of code - redirect showing the referal? - two frames - one reloads when offer is completed? - Simple text link rotator? - Know of any Incentive/Credit Script - help in .htaccess - Check this out - Free PHP Security Videos - Urgent issue with godaddy vds, will pay for immediate help - Random include PHP code help? - cpa redirect and code - proxies in perl? - Default How To download multiple files simultaneously with php - Redirect traffic between three different pages? - PHP to reverse a string - Split sentence on half - External JavaScript via PHP to only activate if called from a WebPage..block direct access - PHP code once per IP Address - Help w/ Digg Login - n00b PHP Install Question - Simple PHP question - $_SESSION - Payment to get this to work - I need help with a simple PHP code - Resources to Learn PHP - anyone willing to help me out with a script I had made and the guy left on me? - Help With PHP clone page - Converting PERL to PHP? - ZZEE PHPExe v2.2.2 compiler - How to grab text from a website with cURL? - Landing Page script needed - Safe Querystring Encoding and Decoding?? - Copy Protection For PHP files? - Is It Possible To Insert Variable To Flash & Download The Updated File? - PHP and Using two Proxies chained - TinyURL Batch Creator Script? (+minor features needed) - US redirect script - [Codepost] - tor switch with php - [Codepost] - Scroogle Scraper Function - nulling scripts how do i? - baner rotation, here ya go - PTC site help - Movieposters are f*cked, any clue? - problem whith projectRAFS script (help) - Watermarking / writing text over an image in PHP? - Need Help, Getting mysql error =( - Coding a sequential number generator - How to add days to date in PHP? - Where can I find a script like this? - Coding a PHP Login System - CSS writer wanted - I have a php question - How To Make A MySQL Database - Need, PHP Dropdown from mysql Database - What geo-targeting script are they using? - Is It Possible To Use fwrite To Insert PHP Variables? - Need Some Quick Help With A Script - Need Help With The "Revenge Site" Script - Web UI That Has PHP Intellisense? - Need PHP Help > Basics - Where do i change the memory limit on this php script? - Need interlinking script for BANS in php - Need some help (PHP) - PHP and Ebay coding help required - mod rewrite and .htaccess - Need PHP Programmer To Setup Simple Work - PHP / cURL Page Scraping Script - Need Someone To Code For Me PHP Geo-Targetting - If Referring URL = X Then Redirect Here, If Y Then Redirect There??? - How To Inject Code Into PHP Files and ZIP ONLINE? - PHP Encoder - online unzip - Perl and youtube - Perl problem with Shadowmaker. - I am a newbiw and want to lean php - getting a website through a proxy in perl/php? - How do I download php page or view source code? - Onsite php redirect, then to an affiliate? What'd going on? - Want something coded? - I have a few hours free today - PHP Script: How to grab e-mails from code. - A good PHP encoder/encrypter - Custom Scrape Tool PHP - Index.php won't load as default page... - What does this script do? - Wordpress plugin - troubleshooting required - Preventing Proxy Site Access
http://www.blackhatworld.com/blackhat-seo/sitemap/f-127-p-3.html
crawl-003
refinedweb
1,557
72.76
Content-type: text/html lsearch, lfind - Perform a linear search and update Standard C Library (libc.so, libc.a) #include <search.h> void *lsearch( const void *key, void *base, size_t *nelp, size_t width, int (*compar) (const void *, const void *)); void *lfind( const void *key, const void *base, size_t *nelp, size_t width, int (*compar) (const void *, const void *)); Interfaces documented on this reference page conform to industry standards as follows: lsearch(), lfind(): XPG4, XPG4-UNIX Refer to the standards(5) reference page for more information about industry standards and associated tags. Points to an entry containing the key that specifies the entry to be searched for in the table. Points to the first entry in the table to be searched. Points to an integer that specifies the current number of entries in the table to be searched. This integer is incremented whenever an entry is added to the table. Specifies the size of each entry, in bytes. Points to the user-specified function to be used for comparing two table entries (strcmp(), for example). This function must return 0 (zero) when called with arguments that point to entries whose keys compare equal, and nonzero otherwise. The lsearch() function performs a linear search of a table. This function returns a pointer into a table indicating where a specified key is located in the table. When the key is not found in the table, the function adds the key to the end of the table. Free space must be available at the end of the table, or other program information may be corrupted. The lfind() function is similar to the lsearch() function, except that when a key is not found in a table, the lfind() function does not add an entry for the key to the table. In this case, lfind() returns a null pointer. [Digital] The lsearch() function is reentrant, but care should be taken to ensure that the function supplied as argument compar is also reentrant. The comparison function need not compare every byte; therefore, the table entries can contain arbitrary data in addition to the values undergoing comparison. If an entry in the table matches the key, both the lsearch() and lfind() functions return a pointer to the entry's location in the table. Otherwise, the lfind() function returns a null pointer, and the lsearch() function returns a pointer to the location of the newly added table entry. Functions: bsearch(3), hsearch(3), tsearch(3), qsort(3) Standards: standards(5) delim off
http://backdrift.org/man/tru64/man3/lfind.3.html
CC-MAIN-2017-22
refinedweb
412
60.65
CodePlexProject Hosting for Open Source Software tinymce has its own plugin infrastructure which add a number of features that users may find helpful. Adding these at present however would seem to require modifying the tinymce module, and it doesn't seem possible to create a new module which provides tinymce plugins. To meet this need, I've created an extensibility point in the tinymce module which allows modules to implement an interface to provide tinymce plugins. It uses the event bus to allow modules to register plugins against tinymce, and these are then populated by the tinymce module. Use in a module would work something like this (for the print plugin): namespace Orchard.TinyMce.Print.Plugins { using System.Collections.Generic; using Orchard.TinyMce.Plugins; public class PrintPlugin : ITinyMCEPlugin { public IEnumerable<ExternalPlugin> GetPlugins() { yield return new ExternalPlugin() { Name = "print", URL = "/modules/TinyMce.Print/Scripts/editor_plugin.js", Buttons2 = new string[] { "print" } }; } } } Nice work David. I altered the tiny mce js (core) file manually to add some plugins and noted that this would need looking at at some stage. Is there any way it could be extended without having to write/alter modules? Ie: pick up some sort of plugin description text file and load each plugin that's listed. It might be worth having both extensibility models so that individual modules can alter the functions of the editor as well as global changes. Greg Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://orchard.codeplex.com/discussions/357459
CC-MAIN-2016-50
refinedweb
268
56.45
Create Namespace Creates a new service namespace. Once created, this namespace’s resource manifest is immutable. This operation is idempotent. The namespace identifier should adhere to the following naming conventions: The name length is at least 6 and at most 50 characters. The name matches regex ^[a-zA-Z][a-zA-Z0-9-]*$ (the namespace name can contain only Letters, numbers, hyphens “-“). The name does not end with “-“, “-sb“ or “-mgmt“. The name is available via a call to provisioning i.e. the name must be unique across Azure to be successfully created. The name must start with a letter. A GUID is now allowed as the namespace name. Request Request Headers The following table describes required and optional request headers. Note that the request also requires a client certificate. This certificate must match the certificate you uploaded for that particular subscription. Request Body The namespace description. See Namespace Description. Only Region is required; the other fields are optional. Response The response includes an HTTP status code and a set of response headers. Response Codes Note If you create a namespace with a name containing special or encoded characters (for example, "test?Name=value&", which gets encoded to "test%3FName%3Dvalue%26"), a “(400) invalid request body” exception is generated. For information about status codes, see Status and Error Codes. Response Headers The response for this operation includes the following headers. The response may also include additional standard HTTP headers. All standard headers conform to the HTTP/1.1 protocol specification. Response Body The Namespace Description is returned. If some description properties were missing from the PUT request, these properties might contain default values.
https://docs.microsoft.com/en-us/rest/api/servicebus/create-namespace
CC-MAIN-2020-50
refinedweb
274
52.87
Repository Summary Packages README o3d3xx-ros o3d3xx-ros is a wrapper around libo3d3xx enabling the usage of IFM Efector O3D3xx ToF cameras from within ROS software systems. Software Compatibility Matrix Prerequisites Additionally, your compiler must support C++11. This package has been validated with: - g++ 4.8.2 on Ubuntu 14.04 LTS, ROS Indigo - g++ 5.3.x on Ubuntu 16.04 LTS, ROS Kinetic Building and Installing the Software NOTE: Since we are talking about ROS here, we assume you are on Ubuntu Linux. You should first ensure that you have installed ROS by following these instructions. The desktop-full installation is highly recommended. Next, you should be sure to install libo3d3xx. This ROS package assumes you have installed libo3d3xx via the supported debian installer. Step-by-step instructions for that process are located here. If you already have libo3d3xx installed you can skip this step. However, it is important to point out that the table above denotes a software compatibility matrix and you should be sure to link o3d3xx-ros against a compatible version of libo3d3xx. We now move on to building o3d3xx-ros. Building and installing o3d3xx-ros is accomplished by utilizing the ROS catkin tool. There are many tutorials and other pieces of advice available on-line advising how to most effectively utilize catkin. However, the basic idea is to provide a clean separation between your source code repository and your build and runtime environments. The instructions that now follow represent how we choose to use catkin to build and permanently install a ROS package from source. First, we need to decide where we want our software to ultimately be installed. For purposes of this document, we will assume that we will install our ROS packages at ~/ros. For convenience, we add the following to our ~/.bash_profile: if [ -f /opt/ros/indigo/setup.bash ]; then source /opt/ros/indigo/setup.bash fi cd ${HOME} export LPR_ROS=${HOME}/ros if [ -d ${LPR_ROS} ]; then for i in $(ls ${LPR_ROS}); do if [ -d ${LPR_ROS}/${i} ]; then if [ -f ${LPR_ROS}/${i}/setup.bash ]; then source ${LPR_ROS}/${i}/setup.bash --extend fi fi done fi Next, we need to get the code from github. We assume we keep all of our git repositories in ~/dev. $ cd ~/dev $ git clone We now have the code in ~/dev/o3d3xx-ros. Next, we want to create a catkin workspace that we can use to build and install that code from. It is the catkin philosophy that we do not do this directly in the source directory. $ cd ~/dev $ mkdir o3d3xx-catkin $ cd o3d3xx-catkin $ mkdir src $ cd src $ catkin_init_workspace $ ln -s ~/dev/o3d3xx-ros o3d3xx So, you should have a catkin workspace set up to build the o3d3xx-ros code that looks basically like: [ ~/dev/o3d3xx-catkin/src ] tpanzarella@jelly: $ pwd /home/tpanzarella/dev/o3d3xx-catkin/src [ ~/dev/o3d3xx-catkin/src ] tpanzarella@jelly: $ ls -l total 0 lrwxrwxrwx 1 tpanzarella tpanzarella 49 Dec 2 15:26 CMakeLists.txt -> /opt/ros/indigo/share/catkin/cmake/toplevel.cmake lrwxrwxrwx 1 tpanzarella tpanzarella 32 Dec 2 15:24 o3d3xx -> /home/tpanzarella/dev/o3d3xx-ros Now we are ready to build the code. $ cd ~/dev/o3d3xx-catkin $ catkin_make -DCATKIN_ENABLE_TESTING=ON $ catkin_make run_tests $ catkin_make -DCMAKE_INSTALL_PREFIX=${LPR_ROS}/o3d3xx install The ROS package should now be installed in ~/ros/o3d3xx. To test everything out you should open a fresh bash shell, and start up a ROS core: $ roscore Open another shell and start the primary camera node: $ roslaunch o3d3xx camera.launch ip:=192.168.10.69 NOTE: The IP address of your camera may differ. If you are using the factory default (192.168.0.69), you do not need to specify it on the above roslaunch line. Open another shell and start the rviz node to visualize the data coming from the camera: $ roslaunch o3d3xx rviz.launch At this point, you should see an rviz window that looks something like: Congratulations! You can now utilize o3d3xx-ros. Nodes /o3d3xx/camera This node provides a real-time feed to the camera data. This node is started from the primary camera.launch file: $ roslaunch o3d3xx camera.launch The naming of the camera can be customized via the ns (namespace) and nn (node name) command line arguments passed to the camera.launch file. For example, if you specify your roslaunch command as: $ roslaunch o3d3xx camera.launch ns:=robot nn:=front_camera The node will have the name /robot/front_camera in the ROS computation graph. Published Topics Advertised Services Parameters /o3d3xx/camera_tf This node is of type tf/static_transform_publisher. It establishes a frame_id for the camera in the global tf tree. This node is launched from the primary camera.launch file: $ roslaunch o3d3xx camera.launch When run as above, the tf publishing node would be named /o3d3xx/camera_tf and the camera coordinate frame would be o3d3xx/camera_link in the tf tree. You can customize this naming via the frame_id_base arg or implicitly through the ns (namespace) and nn (node name) command line arguments passed to the camera.launch file. For example, if you specify your roslaunch command as: $ roslaunch o3d3xx camera.launch ns:=robot nn:=front_camera The node name will be /robot/front_camera_tf and the camera frame will be robot/front_camera_link in the tf tree. /o3d3xx/camera/config_node This node is used as a proxy to simplify calling the /o3d3xx/camera/Config service offered by the /o3d3xx/camera node. It was noted above that there appears to be a limitation in the YAML parser of the ROS rosservice command line tool. Specifically, it seems that it is not capable of assigning a JSON string to a variable. This is the reason for this node. This is not a long-running node but rather works like a typical command-line tool would: you invoke it, it runs, and exits. The following command line will launch this node: $ roslaunch o3d3xx config.launch Parameters /rviz This package offers a launch script that wraps the execution of rviz so that the display will be conveniently configured for visualizing the /o3d3xx/camera data. To launch this node: $ roslaunch o3d3xx rviz.launch Running the command as above will color the point cloud with the data from the normalized amplitude image (i.e., the intensity). The rviz window should look something like (assuming you are coloring the point cloud with the intensity data): /o3d3xx/camera/XXX_throttler This package offers a launch script that wraps the topic_tools/throttler node so that it can throttle the core topics from the camera. Specifically, it will throttle /o3d3xx/camera/cloud to /o3d3xx/camera/cloud_throttle, /o3d3xx/camera/amplitude to /o3d3xx/camera/amplitude_throttle, /o3d3xx/camera/raw_amplitude_throttle, /o3d3xx/camera/depth to /o3d3xx/camera/depth_throttle, /o3d3xx/camera/confidence to /o3d3xx/camera/confidence_throttle. To launch this node: $ roslaunch o3d3xx throttled.launch By default, it will throttle the above named topics to 1 Hz. You can change the frequency with the hz command line argument. For example, to send data at 2 Hz: $ roslaunch o3d3xx throttled.launch hz:=2.0 Using this launch file to launch this set of nodes is strictly optional. We have found use for it in two ways. First, to slow down the publishing frequency of the topics when used in conjunction with the /o3d3xx/camera/file_writer node for collecting data (i.e., in those instances when we really do not need all the data but rather some subsampling of it). Second, if we are running the camera on a drone (for example) that has a slower radio link down to a ground control station running rviz where we want to see what the camera sees while the drone is in flight. Clearly there are other uses for this, YMMV. Configuring Camera Settings Configuring the camera is accomplished by passing a JSON string to the /o3d3xx/camera/config_node which will call the /o3d3xx/camera/Config service to mutate the camera settings. Using a JSON string to configure the camera has the following primary benefits: - Configuration is declarative. The camera configuration will reflect that which is described by the JSON file. - The JSON data is human-consumable and easily edited in a text editor. This makes it very convenient for headless embedded systems. - The JSON data is machine parsable, so configuring the camera on the fly via programmable logic is also possible. There are also a few downfalls to using JSON. Most notably the lack of comments and an enforceable schema. One could argue that the latter keeps things flexible. None-the-less, JSON is the format used by libo3d3xx and, by extension, this ROS package. An exemplary JSON file is shown here (this is the result of calling the /o3d3xx/camera/Dump service on a development system). When passing a JSON string (like the previously linked to file) to the /o3d3xx/camera/Config service (or to the /o3d3xx/camera/config_node) the following rules are used to configure the camera: - The Devicesection is processed and saved on the camera. - The Appssection is processed. For each app: - If the Indexkey is present, a current app at that Indexis looked up. If present, it is edited to reflect the data in the JSON file. If an app at that Indexis not present, a new app is created with the parameters from the JSON file. It is not guaranteed that the new app will have the specified Index. - If the Indexkey is not present, a new app is created with the parameters as specified in the JSON file. - The active application is set by consulting the desired index of the ActiveApplicationfrom the Devicesection of the JSON. If the specified Indexdoes not exist, the active application is not set. - The Netsection is processed. A reboot of the camera may be necessary after changing the camera's network parameters. Additionally, you will likely need to restart the /o3d3xx/cameranode pointing it to the new IP address (if that is what you changed). It should also be noted that any portion of the JSON tree can be specfied to configure only that part of the camera. The only rule to follow is that all keys should be fully qualified. For example, to simply set the active application, you can use a JSON snippet like this: { "o3d3xx": { "Device": { "ActiveApplication": "2" } } } The above snippet is provided as an example here. To apply this to your camera, you can: $ roslaunch o3d3xx config.launch infile:=/path/to/ex_set_active.json It was also noted above that the /o3d3xx/camera/config_node will read stdin by default, so you could also: $ echo '{"o3d3xx":{"Device":{"ActiveApplication":"2"}}}' | roslaunch o3d3xx config.launch Here is another example JSON file. This one will add a new application to the camera, using the default values for the high-dynamic range imager. We note that this application is added to the camera because no Index is specified for the application. If an Index were specfied, the application at the specified Index, if present, would be edited to reflect this configuration. In general, a simple way to configure camera settings without having to memorize the JSON syntax would be to simply dump the current camera settings to a file: $ rosservice call /o3d3xx/camera/Dump > /tmp/camera.json Then, open /tmp/camera.json with a text editor to create a declarative JSON configuration for your camera. You should be sure to delete the variable names from the rosservice output if you are following this example word-for-word. Additionally, you can delete any unnecessary keys if you would like, however it is not strictly necessary as the /o3d3xx/camera/Config service will leave unedited values unchanged on the camera. Once you have a configuration that you like, you can: $ roslaunch o3d3xx config.launch infile:=/tmp/camera.json You can check that your configuration is active by calling the /o3d3xx/camera/Dump service again. TODO Please see the Github Issues. LICENSE Please see the file called LICENSE. AUTHORS Tom Panzarella [email protected]
https://index.ros.org/r/o3d3xx/
CC-MAIN-2019-43
refinedweb
1,973
56.15
Python tool kit for multi-body dynamics. Project description PyDy, short for Python Dynamics, is a tool kit written in and accessed by the Python programming language that utilizes an array of scientific tools to study multibody dynamics. The goal is to have a modular framework which utilizes a variety of tools that can provide the user with their desired workflow, including: - Model construction - Equation of motion generation - Simulation - Visualization - Publication We started by building the SymPy mechanics package which provides an API for building models and generating the symbolic equations of motion for complex multibody systems and have more recently developed two packages, pydy-code-gen generally depends on these Python packages: SciPy Stack - SymPy >= 0.7.2 - NumPy >= 1.6.1 - SciPy >= 0.9.0 - matplotlib >= 0.99.0 - IPython >= 0.13.0 PyDy Stack - pydy-code-gen >= 0.1.0 - pydy-viz >= 0.1.0 It’s best to install the dependencies from the SciPy Stack using the instructions provided on the SciPy website. Once you have all of the SciPy Stack dependencies you can simply install the PyDy Stack with pip: $ pip install pydy Or download the source and run: $ python setup.py install For system wide installs you will need root permissions (perhaps prepend commands with sudo). Note that the PyDy package is currently a simple wrapper to pydy-code-gen and pydy-viz that provides a common namespace pydy. These packages will likely be merged into this package soon. Usage Simply import the modules and functions when in a Python interpreter: >>> from sympy import symbols >>> from sympy.physics import mechanics >>> from pydy import codegen, visualization Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pydy/0.1.0/
CC-MAIN-2019-51
refinedweb
294
55.54
Marc Portier wrote: > But taking it one step further: IMHO DomBuilder will use SAXParser down > below somewheree to actually do the parsing. So just setting the > parameter namespace-prefix in the cocoon.xconf will ensure all > saxparsers in the pool to (correctly) have the parameter set... so both > cases should behave the same then. (and no ugly patching is needed) Hmm, it's not nessarily the sax parser which is used. In this case, the dom builder gets the sax events directly from a pipeline. The pipeline might use the file generator (which uses the sax parser), but it also might use some other generator. So you can't rely that you get the namespace attributes in the dom builder. > > > Of course the logical remaining question is what about the side-effects > of this parameter-setting on the rest of cocoon (where SAXParser sounds > like a component under heavy use in every corner) Yes, I asked this myself as well :) I think there was a drawback, that's why the default is "false" and there was a big println in the original code when the feature was turned on. But I actually do not know what this was :( > > reading up on the meaning of the feature: > > > it seems to result in sax-events (startElement) carrying the > xmlns:pfx=uri declarations as attributes! This might sound as blasphemy > to the hardcore namespaced xml peeps around but it just ensures those > attributes to be available on the DOM elements that are input to the > form-binding-manager (who in turn needs those to instruct jxpath on what > namespaces are used on each level) > > Logically I do not think this will lead to errors, as transformers and > serializers are typically targetted at reacting on specific attribites, > rather then scan and process them all in a way that would lead to faulty > results, no? Hmm, I'm not sure here either. I know that over the past years I had to fix several transformers who did not expect an xmlns:something attribute. Ok, it's a bug in the transformer, but it shows the potential danger of turning this on by default. And as I explained above, I don't think that this will solve all problems as there are use cases where the sax parser is not used. In addition, I don't think that this is a bug in the sax parser or the dom builder. I guess that this is more a problem down in the cforms code which does expect a specific dom format. I fear we have to fix the problem there. > > Also, I would guess the resource/performance penalty of setting this > param to true isn't going to be too bad (taking the amount of > ns-declarations over useful data would be not too big in the typical > usage) but wouldn't know that for a fact. wdot? > > > Other options (then setting the param to true) would be > - to retrieve those xml-declarations from the DOM in some other way... > anyone? (can DOM nodes be asked to produce their available and/or > inherited ns-declarations?) Yes, I think this is one way to go. I don't know of an easy way. But I think you have to go up the tree manually for that... :( > - or rewrite the whole form-binding-builder process to use sax :-) :) I was never that happy that forms uses dom internally :=) But as its working well and is used throughout the framework, I guess you are joking :) > > (on the side: the fact that there exists an xpath 'namespace::' axis > leads me to believe that there should be some way of doing it on DOM as > well ? ) Good point. So we could check the xpath implementation :) Carsten
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200705.mbox/%[email protected]%3E
CC-MAIN-2018-30
refinedweb
624
65.05
Registry In's and Out's Using C# This is an article on using the Registry in C#. Having been looking around the web and on the MSDN libraries, I have determined that there is not a complete guide on using all the registry tools accessible using C#. So this article is going to cover all methods exposed to the user when developing in C#. Firstly i would like to say that I am very aware that the current consensus is that we should not be using the Registry to hold information about our programs anymore and that the ".NET" way is to store information in XML files. Well if that is you then you don't really need this article, however if you are accessing information from the registry then this is the source for you, please note that i have also included information from my previous article on "Advance Registry Access in C#". Ok so lets get down to business. When using the .NET framework in C# the user needs to add access to the Microsoft.Win32 namespace in order to access registry manipulation tools, this is done as shown in the below code: Once this reference has been added along with the other, it gives us access to the elements involved within registry control. There are two main classes associated with this namespace, RegistryKey and Registry. These classes are used together to allow us to do pretty much everything in the registry. Registry Class The Registry Class itself on its own doesn't do a lot for us this is because the Registry Class merely represents the seven top-level nodes within the registry for us to access and manipulate. The Registry class is powerful when used with the RegistryKey class. However if you open up Visual Studio.NET and add the namespace above and then just type, "Registry." you will see the list of the seven subnodes (Shown below). Now the we have identified what the Registry Class does and represents we can look at it's partner the RegistryKey Class. RegistryKeyClass The RegistryKey classis the most important part in terms of registry manipulation when using C#. TheRegistryKey class has many methods which the use can exploit to create and deletedata. Before we can do any of this we have to create an Instance of a RegistryKey, this is done using the code below: In the above code extract we have created a instance of RegistryKey called OurKey, we have initialized this key to be the HKEY_USERS subkey in the registry. Basically this mean that any method we use in this key will effect the direct subkeys of HKEY_USERS and their data items. OpenSubKey() OpenSubkey is a very important method when using the RegistryKey class, because it allows us to access/manipulate the subkey of the top key. This may sound a tad strange, basically if as above we have "OurKey" set to the HKEY_USERS key, using the OpenSubKey method we can set "OurKey" to a subkey of HKEY_USERS. The best we to demonstrate this is to show you some code (See below). In the above code sample we have created our instance of the RegistryKey class and given it the value of the HKEY_USERS key. The next step is that we have opened a subkey within HKEY_USERS called ".DEFAULT", we should also note that it doesn't have to be a top level subnode either, We could have opened ".DEFAULT\subkey". The second part to the method is defining if the key is opened and Read-only mode or Read/Write mode. If the value is true we can edit items within the key, and the key itself. DeleteSubKey()/ CreateSubKey() & DeleteSubKeyTree() All these three methods are to do with the management of Subkeys under the current selected key. These methods are pretty self explanatory and the below code shows there implementation. GetSubKeyNames() GetSubKeyNames is an important method because it allows us to get all the names of the secondary subkeys, For instance we could get all the names of the subkeys below "HKEY_USERS". The only draw back being that it only gets the immediate subkey names. However with a little recursion we can get them all, the first code snippet is of the basic use of GetSubKeyNames(). In the next snippet of code we see how we can use recursion to obtain all the names of the subkeys. GetValue()/ GetValuesNames() These methods are also very important when you want to extract information form the registry, when you used the other methods to get to the subkey that you want you would like to extract the information from them. The first methods, GetValue allows you obtain the value held in a registry value by specifying its name, For example in our "test" subkey used earlier, i have a string (REG_SZ) value named "Testvalue" with the value "This is a test". To extract the value i would use the below code. As you will have noticed when we get information out of the registry we are always converting it using the ToString() method, this is because values and subkeys within the registry are classed as object and must therefore be converted, either via the ToString() method or by Casting. As we have seen using the GetSubKeyNames() method we get the collection of names returned, this is also the case when we use the GetValueNames() methods. Using the code similar to the code used earlier we can obtain the names of the values, and also their values at the same time, as shown below. SetValue() The SetValue method does exactly what it says, it sets the value of a registry value, to use the function you supply the name of the value and what you want to set it to, if the value doesn't exist it creates it, simple as that, see the code below: Using the SubKeyCount and Valuecount properties You can use these two properties to obtain firstly of how many subkeys there directly below the specified key, and also the number of values under the specified key. to help understand we can look at the below code: Closing What we've worked on I should also mention that when you have finished working and using a section of registry it makes good error free practice to close the key you have been using, this is done via the Close() method. See code below. Unsupported Registry Functions The.NET Framework supports a lot and in C# the RegistryKey class is very powerfuland can be used well to manipulate the registry. Most of the Win32 APIs which we would have had to use previously have been built in however there are a few that are not, and this is where i refer to my Article "Advance Registry Work in C#". These are RegLoadKey() and RegUnloadKey(). These two calls allow you to load registry Hives into the registry and then manipulate them. As you will have seen in that article you need to declare these calls and you also need to gain relevant privileges to use them.API's Declared OpenProcessToken(int ProcessHandle, int DesiredAccess, ref int tokenhandle); [DllImport("kernel32.dll", CharSet=CharSet.Auto)] public static extern int GetCurrentProcess(); [DllImport("advapi32.dll", CharSet=CharSet.Auto)] public static extern Preivous×00000020; public const int TOKEN_QUERY = 0×00000008; public const int SE_PRIVILEGE_ENABLED = 0×00000002; public const string SE_RESTORE_NAME = "SeRestorePrivilege"; public const string SE_BACKUP_NAME = "SeBackupPrivilege"; public const uint HKEY_USERS = 0×80000003; public string shortname; bool unloaded = false; API's Used int token=0; int retval=0; TOKEN_PRIVILEGES TP = new TOKEN_PRIVILEGES(); TOKEN_PRIVILEGES TP2 = new TOKEN_PRIVILEGES(); LUID RestoreLuid = new LUID(); LUID BackupLuid = new LUID(); retval = OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, ref token); retval = LookupPrivilegeValue(null, SE_RESTORE_NAME, ref RestoreLuid); retval = LookupPrivilegeValue(null, SE_BACKUP_NAME, ref BackupLuid); TP.PrivilegeCount = 1; TP.Attributes = SE_PRIVILEGE_ENABLED; TP.Luid = RestoreLuid; TP2.PrivilegeCount = 1; TP2.Attributes = SE_PRIVILEGE_ENABLED; TP2.Luid = BackupLuid; retval = AdjustTokenPrivileges(token, 0, ref TP, 1024, 0, 0); retval = AdjustTokenPrivileges(token, 0, ref TP2, 1024, 0, 0); RegLoadKey(HKEY_USERS,"NTUSER.DAT",@"C:\NTUSER.DAT");) // Unloading a Hive "C:\NTUSER.DAT" RegUnLoadKey(HKEY_USERS,"NTUSER.DAT"); About the Author My name is Michael Bright, I'm a university student studying a Bsc in Computer Science at University College Chester. I have a love for all thing computers and have only worked wit h C# for about 3 months. I am currently training for my MCP in it. My other interests are C, VB, HTML, ASP and also Flash. I have interests in Developing Network app's and Network Security. My Web site is csharp.brightweb.co.uk where you can find examples of my Work, including my Defender Security applications.
http://www.csharphelp.com/2007/01/registry-ins-and-outs-using-c/
CC-MAIN-2014-35
refinedweb
1,438
59.33
Implementation Spotlight: Marketo We first heard about Marketo a long time ago, as they were a very early adopter of the Ext framework. The initial version of their marketing automation application was very nice even using pre-1.0 versions of Ext, and it has steadily improved over time. In addition to using the standard Ext components extensively, Marketo also employs their own custom UI theme to give the application a polished and distinctive look. They have also extended the base Ext platform with many sophisticated user interaction techniques that add power and expressiveness to the user experience. I recently spoke with Glen Lipka, Marketo’s Director of UX and Product Management, and Paul Abrams, lead UI engineer, about their application and their use of Ext. Tell us a little bit about Marketo and how you’re using Ext. Marketo is sophisticated yet easy marketing automation that helps B2B marketing professionals drive revenue and improve accountability. In short, we are a Marketing department’s one-stop, on-demand resource. We use Ext as the fundamental base for the client-side interface. Additionally, I use the numerous examples posted on extjs.com and the forums as my UI design toolkit. When faced with a feature or problem, I often look at different UI examples. The samples on Ext give me confidence that we could implement that approach quickly without reinventing the wheel. Did you evaluate other JavaScript frameworks when starting work on Marketo? Why did you choose Ext? We started using Ext very early, when it was called YUI-Ext. We were using YUI and Prototype and homegrown and others at the time. The mishmash wasn’t helping us, rather it was making maintenance difficult. When we started using YUI-Ext 0.33, there were lots of hurdles, but it still made things much better. We understood the vision and bought in. We skipped 1.0 and introduced 2.0 when it came out in Beta. By this I mean that we ran 0.33 and 2.0 simultaneously. The new efforts in 2.0 totally transformed our development process. The flexibility of the system and the general intuitive nature of how it was built made it an easy choice to standardize on. How does Ext fit into your overall technical architecture? We have a LAMPhp stack using the Symfony framework. Before Ext, we used a lot of one-off HTML. The CSS and HTML burden of trying to maintain duplicate code everywhere was a hassle. Now, we use Ext configurations and subclass components we want to extend. The potential of UI variations has dropped significantly and we can now reuse UI patterns much more effectively. How has the transition from .33 to 2.0 been going? We introduced 2.0 in the fall of 2007 and ran it with 0.33 and our legacy code. The new tree, grid and other components were much more in line with our needs. There were many useful enhancements in 2.0. We are finally ready to remove the legacy 0.33 code. We hoped the technical switch would involve upgrading components to their Ext 2.x equivalent and be relatively painless. It took six months on and off experimenting with different strategies before finding a new app structure and upgrade approach that worked, and then a couple of months of solid work rebuilding all the components derived from YUI and YUI-Ext 0.33 from scratch in Ext 2.x. Unfortunately, little was reusable. We have delayed a few components while we are designing a new skin, and later this year we will have a 100% Ext 2.x application. This has been a long time coming, but we are excited about it. The Marketo application is extremely feature-rich. How did you manage the UI code complexity? Just like Ext, we organize our code base into components and try to reuse as much code as possible. For example, our main Viewport lives in a folder/namespace and gets its own .js and .css files. We also have custom extensions for most of the major building blocks such as trees, combo boxes, grids, dialogs, etc. What features could we add to Ext to make building a rich application like Marketo easier in the future? There have been several places where building a custom extension involved overriding private methods, so more subclass hooks and smaller methods would help. Ext also needs a unified Ajax session manager similar to the role of the Ext window manager. A rich web application like Marketo needs more management over the user session, such as detecting a session timeout and showing a login prompt. Another example is managing groups of Ajax requests, so that making a request to the same group could abort ongoing requests. Building an Ext RIA would involve creating a Viewport and an Ajax session manager, and then building components on top of that foundation. We rolled our own session manager in the 0.33 days and need to hook in the new Ext 2.x components. Do you have any advice for developers just starting out with Ext for the first time? If you are building a large application, don’t try to use Ext to decorate an existing app via “unobtrusive JavaScript” or build an app half with conventional construction and half with Ext. Instead, embrace Ext as a framework and start from scratch with something like a Viewport, and build components on top of that foundation. There are 10 responses. Add yours. Phunky6 years ago The UI used on Marketo looks fantastic - really shows that you dont have to stick to the default ExtJS style and can look outside the box!! Very well done guys it looks great! Mark P6 years ago That is pretty hot. Was the initial screenshot built using only Ext (no html)? If so, can you guys share how you were able to achieve that custom look? Very nice. Adrian6 years ago Veey nice looks on that app. I agree with Mark, maybe you could post a tutorial on how to achieve a nice custom look just like this one. Technology Behind Marketo's Marketing Automation |6 years ago [...] lead UI engineer, discuss how their application made use of this technology to create its slick UI. Read more… Filed Under: AnalysisTagged: [...] TaunT6 years ago ?? ?????, ?? ????? ???? ??? ?????? ??? html moeoleg6 years ago That is pretty hot. Was the initial screenshot built using only Ext (no html)? If so, can you guys share how you were able to achieve that custom look? Very nice. _______________________________________________________________________________ celebrityadult.freehostia.com kabin5 years ago cabin producut….. cinsellik5 years ago I am grateful to you for this great content. Siki?5 years ago eyw admin cigerim benim Ben 10 Mania ?zle5 years ago Thanks. Best blog. Comments are Gravatar enabled. Your email address will not be shown.Commenting is not available in this channel entry.
http://www.sencha.com/blog/implementation-spotlight-marketo/
CC-MAIN-2014-41
refinedweb
1,147
67.45