Dataset columns: text (string), url (string), dump (string), source (string, 1 class), word_count (int64), flesch_reading_ease (float64)
I've been messing around with this for a bit and cannot seem to code it so that it will work as intended. I'm trying to prompt the user for input repeatedly, until the user enters a -1, when the loop is supposed to terminate. My problem lies in that the loop does not stop when -1 is entered. What am I doing wrong here?

import java.util.Scanner;

class Test{
    public static void main(String[] args){
        Scanner scan = new Scanner(System.in);
        String in = scan.nextLine();
        while(in != "-1"){
            System.out.println("You entered: " + in);
            in = scan.nextLine();
            if(in == "-1")
                System.out.println("Stopped");
                break;
        }
    }
}
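For reference, the usual fix is to compare String contents with equals() rather than == (which compares object references), and to drop the unconditional break. A minimal corrected sketch:

import java.util.Scanner;

class Test {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        String in = scan.nextLine();
        // equals() compares the string contents; == only compares references
        while (!in.equals("-1")) {
            System.out.println("You entered: " + in);
            in = scan.nextLine();
        }
        System.out.println("Stopped");
    }
}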
https://www.javaprogrammingforums.com/loops-control-statements/802-user-input-loop.html
CC-MAIN-2021-43
refinedweb
108
68.36
tplcpl Templates Compiler

npm install tplcpl

TplCpl - templates compiler

This is a command-line utility to compile JavaScript templates (currently Jade and Underscore) for use client-side. TplCpl compiles a whole directory of your project's templates into a single JS file and optionally minifies them with Uglify.js.

Using the compiler

Under Linux: Install TplCpl from npm:

sudo npm install -g tplcpl
tplcpl --help

Global (-g) installation is preferred for using the command-line tool.

Usage: tplcpl -t path/to/templates -o path/to/templates.js -c

You have to pass the -t and -o options - the templates directory and the output JS file, respectively. The -c option means "compress" - minify the output file with Uglify.js. During compilation the directory is traversed recursively. The template engine to compile with is chosen by a file's extension: Jade templates must have the .jade extension, Underscore templates the .us extension.

Under Windows: Download and run the tplcpl-setup.exe Windows installer. TplCpl is installed to C:\tplcpl.

C:\tplcpl\bin\tplcpl -t path\to\templates -o path\to\templates.js -c

Under Windows, please avoid spaces in paths.

Using compiled templates

After templates are compiled, they may be used as follows:

var html = Templating.tpl('my/template.jade', {foo:'bar'});
console.log(html);

The Templating namespace also provides a convenient way to add a tpl() helper to your JavaScript "class". Suppose we have an OOP module like this:

var MyModule = function () { };
MyModule.prototype.myMethod = function () { };
//etc...

Then we make:

Templating.enable(MyModule);

Now instances of MyModule will have a tpl() method:

var inst = new MyModule;
var h = inst.tpl('my/template.jade', {foo:'bar'});

Since tpl() is added with Templating.enable, template rendering functions are bound to the calling object. So in your Jade or Underscore templates you can use the this variable to refer to the object that calls tpl().

Underscore extension

There is a missing feature in Underscore templates - escaping. So it is added here. Now <%=variable%> is used to escape special characters in a variable. To show a variable without escaping you should use: <%-variable%>

Collaboration

I have written TplCpl for private use, so it is not a universal tool. Feel free to fork it or submit an issue if you want to add any missing features.
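As a usage sketch only, the compiler can be wired into an npm script so templates are rebuilt as part of a build step; the script name and the views/public paths below are hypothetical, while the -t, -o and -c flags are the ones documented above:

{
  "scripts": {
    "build-templates": "tplcpl -t views -o public/js/templates.js -c"
  }
}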
https://www.npmjs.org/package/tplcpl
CC-MAIN-2014-15
refinedweb
381
61.02
Simple, re-usable, stateless Django app for visualizing and browsing statistics, mainly based on your existing Django models.

Please note: Django MStats is in early development and the API is very likely to change.

MStats is a super simple, re-usable, stateless Django app for visualizing and browsing statistics, mainly based on existing Django models. My motivation for creating MStats is to have a dead simple way, with as little effort as possible, to get visualization of key metrics in different Django projects. The goal of Django MStats is not to be the ultimate metrics/statistics solution™. It will not support different backends for different metrics services and databases. MStats makes all queries in real time, and does not store any permanent data itself, even though Django's cache might be used. In other words, Django MStats is a reusable app for those who want to get basic statistics browsing with minimum effort. Since MStats is stateless, it can easily be tried out, and thrown away in favor of something more advanced, if a project grows out of it.

What does the M in MStats stand for? Model or Mini. Whichever you like best.

Requirements

Currently MStats depends on PostgreSQL, because it uses Postgres-specific SQL functions for retrieving stats.

Installation

Install from PyPI: pip install django-mstats

Add django_mstats to INSTALLED_APPS

Add a URL route to your urls.py: url(r"^mstats/", include("django_mstats.urls")),

Create mstats.py file(s) in your Django apps (see below).

Defining different metrics

Once you have added django_mstats to your INSTALLED_APPS, you can create mstats.py files within your Django apps. In those files you should create classes that inherit from ModelStats. Below are some examples.

Statistics for newly registered users:

from django_mstats.models import ModelStats
from django.contrib.auth.models import User

class NewUsers(ModelStats):
    model = User
    datetime_field = "date_joined"

Specifying a name:

class NewUsers(ModelStats):
    model = User
    datetime_field = "date_joined"
    name = "User registrations"

License

BSD License
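Returning to the examples above, the same pattern should extend to any model that has a creation timestamp; the app, model and field names in this sketch are hypothetical and not part of django-mstats:

from django_mstats.models import ModelStats
from myshop.models import Order   # hypothetical app and model

class NewOrders(ModelStats):
    model = Order
    datetime_field = "created_at"  # hypothetical timestamp field on Order
    name = "Orders placed"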
https://pypi.org/project/django-mstats/
CC-MAIN-2017-34
refinedweb
343
58.38
Flash Update target to support bootloader.

- Update linker script.
- Add required metadata to targets.json.
- Implement mbed_start_application.
- Implement flash HAL API.
- Verify changes with tests.

Linker script updates

When building a bootloader application or an application that uses a bootloader, the Arm Mbed OS build system automatically defines values for the start of application flash, MBED_APP_START, and size of application flash, MBED_APP_SIZE, when preprocessing the linker script. When updating a target to support this functionality, linker scripts must place all flash data in a location starting at MBED_APP_START and must limit the size of that data to MBED_APP_SIZE. This change must occur for the linker scripts of all toolchains - GCC Arm (.ld), Arm (.sct) and IAR (.icf). You can find examples of this for the k64f, stm32f429 and odin-w2.

Use these two defines in place of flash start and size for a target:

MBED_APP_START - defines the address where the application space starts.
MBED_APP_SIZE - the size of the application.

Note: When an application does not use any of the bootloader functionality, MBED_APP_START and MBED_APP_SIZE are not defined. For this reason, the linker script must define default values that match flash start and flash size. An example of how a target could define MBED_APP_START and MBED_APP_SIZE in the linker script file:

#if !defined(MBED_APP_START)
#define MBED_APP_START 0
#endif

#if !defined(MBED_APP_SIZE)
#define MBED_APP_SIZE 0x100000
#endif

Be careful with these defines because they move the application flash sections. Therefore, you should move any sections within flash sectors accordingly.

Note: The VTOR must be relative to the region in which it is placed. To confirm, search for NVIC_FLASH_VECTOR_ADDRESS and SCB->VTOR, and ensure the flash address is not hardcoded.

Problematic declaration of flash VTOR address:

#define NVIC_RAM_VECTOR_ADDRESS (0x20000000)
#define NVIC_FLASH_VECTOR_ADDRESS (0x00000000)

Bootloader-ready declaration of flash VTOR address:

#define NVIC_RAM_VECTOR_ADDRESS (0x20000000)
#if defined(__ICCARM__)
#pragma section=".intvec"
#define NVIC_FLASH_VECTOR_ADDRESS ((uint32_t)__section_begin(".intvec"))
#elif defined(__CC_ARM)
extern uint32_t Load$$LR$$LR_IROM1$$Base[];
#define NVIC_FLASH_VECTOR_ADDRESS ((uint32_t)Load$$LR$$LR_IROM1$$Base)
#elif defined(__GNUC__)
extern uint32_t vectors[];
#define NVIC_FLASH_VECTOR_ADDRESS ((uint32_t)vectors)
#else
#error "Flash vector address not set for this toolchain"
#endif

targets.json metadata

The managed and unmanaged bootloader builds require some target metadata from CMSIS Packs. Add a "device_name" attribute to your target as Adding and configuring targets describes.

Start application

The mbed_start_application implementation exists only for Cortex-M3, Cortex-M4 and Cortex-M7. You can find it in the Arm Mbed_application code file. If mbed_start_application does not support your target, you must implement this function in the target HAL.

Flash HAL

For a bootloader to perform updates, you must implement the flash API. This consists of implementing the functions in flash_api.h and adding the correct fields to targets.json. There are two options to implement the flash HAL:

Option 1: CMSIS flash algorithm routines

These are quick to implement. They use CMSIS device packs and scripts to generate binary blobs. Because these flash algorithms do not have well-specified behavior, they might disable the cache, reconfigure clocks and take other actions you may not expect. Therefore, proper testing is required.
First, make sure CMSIS device packs support your device. Run a script in mbed-os to generate the flash blobs. Check the flash blobs into the target's HAL. Arm provides an example of how to do this.

To enable the CMSIS flash algorithm common layer, a target should define FLASH_CMSIS_ALGO. This macro enables the wrapper between the CMSIS flash algorithm functions from the flash blobs and the flash HAL.

"TARGET_NAME": {
    "extra_labels": ["FLASH_CMSIS_ALGO"]
}

The CMSIS algorithm common layer provides a trampoline, which uses a flash algorithm blob. It invokes the CMSIS FLASH API, which the CMSIS-Pack Algorithm Functions page defines.

Option 2: Your own HAL driver

If CMSIS packs do not support a target, you can implement the flash HAL by writing your own HAL driver. Functions to implement:

int32_t flash_init(flash_t *obj);
int32_t flash_free(flash_t *obj);
int32_t flash_erase_sector(flash_t *obj, uint32_t address);
int32_t flash_program_page(flash_t *obj, uint32_t address, const uint8_t *data, uint32_t size);
uint32_t flash_get_sector_size(const flash_t *obj, uint32_t address);
uint32_t flash_get_page_size(const flash_t *obj);
uint32_t flash_get_start_address(const flash_t *obj);
uint32_t flash_get_size(const flash_t *obj);

To enable the flash HAL, define FLASH in the targets.json file inside device_has:

"TARGET_NAME": {
    "device_has": ["FLASH"]
}

Finally, to indicate that your device fully supports bootloaders, set the field bootloader_supported to true for the target in the targets.json file:

"bootloader_supported": true

Tests

The following tests for the FlashIAP class and flash HAL are located in the mbed-os/TESTS folder.

- Flash IAP unit tests: tests-mbed_drivers-flashiap.
- Flash HAL unit tests: tests-mbed_hal-flash.

They test all flash API functionality. To run the tests, use these commands:

- Flash IAP: mbed test -m TARGET_NAME -n tests-mbed_drivers-flashiap.
- Flash HAL: mbed test -m TARGET_NAME -n tests-mbed_hal-flash.

Troubleshooting

For targets with VTOR, a target might have the VTOR address defined as a hardcoded address, as mentioned in the Linker script updates section. Using Flash IAP might introduce latency, as it might disable interrupts for long periods of time. Program and erase functions might operate on different-sized blocks - the page size might not equal the sector size. The erase function erases a sector; the program function programs a page. Use the accessor methods to get the values for a sector or a page. Sectors might have different sizes within a device.
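As a closing illustration of Option 2 above, a skeleton driver might be structured as below. This is only a sketch: the function names and signatures come from flash_api.h as listed above, but the DEVICE_* constants and the commented-out hal_flash_* calls are placeholders for whatever the vendor's flash peripheral actually requires, not real Mbed or vendor APIs.

#include <stdint.h>
#include "flash_api.h"   /* Mbed OS flash HAL interface */

/* Hypothetical device constants - replace with real values from the datasheet. */
#define DEVICE_FLASH_START 0x00000000
#define DEVICE_FLASH_SIZE  0x00100000
#define DEVICE_SECTOR_SIZE 0x1000
#define DEVICE_PAGE_SIZE   0x100

int32_t flash_init(flash_t *obj)
{
    (void)obj;
    return 0;                         /* nothing to set up in this sketch */
}

int32_t flash_free(flash_t *obj)
{
    (void)obj;
    return 0;
}

int32_t flash_erase_sector(flash_t *obj, uint32_t address)
{
    (void)obj;
    /* hal_flash_erase(address);      placeholder for the vendor erase sequence */
    (void)address;
    return 0;
}

int32_t flash_program_page(flash_t *obj, uint32_t address, const uint8_t *data, uint32_t size)
{
    (void)obj;
    /* hal_flash_write(address, data, size);   placeholder for the vendor program sequence */
    (void)address; (void)data; (void)size;
    return 0;
}

uint32_t flash_get_sector_size(const flash_t *obj, uint32_t address)
{
    (void)obj; (void)address;
    return DEVICE_SECTOR_SIZE;        /* uniform sector size assumed for this sketch */
}

uint32_t flash_get_page_size(const flash_t *obj)     { (void)obj; return DEVICE_PAGE_SIZE; }
uint32_t flash_get_start_address(const flash_t *obj) { (void)obj; return DEVICE_FLASH_START; }
uint32_t flash_get_size(const flash_t *obj)          { (void)obj; return DEVICE_FLASH_SIZE; }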
https://os.mbed.com/docs/mbed-os/v5.14/porting/flash.html
CC-MAIN-2021-04
refinedweb
876
57.06
Learning how to AI and Biology at the same time — Part 1

10th of February 2018

Some of you may know that I've been studying bioinformatics at Birkbeck, University of London. I've got lots of reasons why, which I might discuss in another post, but I'd like to talk about some of the basics of machine learning, structural biology and some tips about how one can go about studying for an MRes. The research path is long and difficult but very rewarding. If you are doing your job right, you'll discover something no-one else has ever discovered, even if that thing is really tiny. There are few callings in life that can compete with that.

Structural biology and Computer Science

Biology is a big subject — and I mean really big! In terms of raw data, I heard a factoid that the data output from somewhere like EMBL is much larger than CERN's. The problem is not just about volume though. Biological data is extremely heterogeneous. Sometimes it's empirical and nicely formatted. Other times it's qualitative and messy. There are so many formats and workflows to keep track of, I'm amazed that anything gets done at all. Indeed, this is a problem that biology has when it comes to how certain we can be of any results. A recent talk at the CCC highlights the problem with P-values. Biology has typically relied on a 0.05 P value, which is, frankly, terrible. Remember that whole 5 Sigma thing that was all the rage when the Higgs Boson was found? Physicists there could afford to be accurate. In biology, the systems are so much more complicated; drawing even a correlation, let alone a causation, appears to me to be much harder.

As computers and the systems we build become more and more complex, I've noticed a striking resemblance between the two. When we take artificial intelligence into account, we are building deeply complicated, power-hungry systems that are beginning to defy reductionism and end-to-end, logical reasoning. The two fields are, in my mind, becoming a lot closer than I had previously thought.

Structural biology is where I've been focusing my attention. Lots of people think that DNA is where it's at in biology. I respectfully disagree. DNA is all about statistics, matching endless streams of letters and working around crappy software and slow machinery. Wouldn't you rather take apart and build tiny machines made of amino acids? Of course you would!

Structural biology has many similarities with computer graphics, funnily enough. At one point in time, the fastest renderer of spheres was written by a biologist! There are many algorithms related to matching structures, converting between co-ordinates and all sorts of linear algebra that I feel quite at home with.

Benjamin's top tip! If you love computer graphics, programming and want to make a difference, go into biology!

Deep Learning and polypeptides

One thing biologists love to do is predict what might happen given a certain starting condition. You've probably heard of Folding at Home, right? Protein folding is a really important and really difficult problem. In a nutshell, you have a known list of amino acids like Tyrosine, Glycine, Proline, Glycine etc. and you want to know what structure you'll end up with at the end. This 3D structure defines what the protein will actually do. If we know what a particular sequence will turn into, we can figure out what a certain DNA sequence will result in. We can create better drugs or delete problematic DNA errors. It's a big thing. My area looks at a similar, but smaller, problem.
I'm looking at whether it's possible to model a certain class of loops — the ones on antibodies specifically. Given a sequence of amino acids, can we predict what shape the loop will have? If we can, we can make better antibodies. Antibodies are great because they can be used as markers and drugs.

Deep learning has been the hot, sexy topic for quite a while now. It's funny how an old idea can suddenly come back and take the world by storm. Biologists have been using neural nets for some time now. PSIPRED is a good example of a program I've used before that uses neural nets. It tries to detect secondary structure when given an amino acid sequence. The neural networks in the news are typically image and classifier related, and the things they can do are quite amazing. The canonical example is the GoogLeNet, based on the work of LeCun et al. Such nets are called deep, not only because they have several layers, but also because they rely on convolutions. A convolution operation takes a kernel of a certain size and convolves it over the input. The kernel might be, say, an 8 x 8 patch that sums up all the values within it, reducing the input by a factor of 8 (roughly). However, at the same time, it creates a third dimension that only gets deeper in time. For instance, an input image might be 256 x 256 in size, made up of single values. After the first convolution it might be 128 x 128 x 3, depending on your kernel and its operation. A further operation might reduce it further to 64 x 64 x 6. The last dimension keeps getting deeper as the original dimensions get smaller.

Tensorflow

Google released Tensorflow some time ago, and I used it briefly when I was looking at natural language processing. This time around I thought I'd learn a little more first. I'd recommend the Coursera introduction to machine learning. You'll get the bare basics, which you'll need to get started. I decided to use Tensorflow over Caffe or Keras because it's easy to set up and there's plenty of documentation. Not only that, it's pitched at just the right level — it's quite powerful but quick to use.

I decided to set up a machine to run all of my nets on. I managed to get hold of a machine being thrown out from the University. With a spare graphics card kicking around, I had a machine I could install Arch Linux on and take advantage of Tensorflow's GPU support. Tensorflow has support for a few languages, the most popular being Python (which is what I use). It's good to know that C is also an option for a more final product.

Tensorflow has the idea of the graph and the session. Tensors are supposed to flow through the graph in a session. You build a graph that is made up of operations that pass tensors around. With this graph built, you create a session which takes some inputs and runs them through the graph. Hack-a-day has a really good introduction to Tensorflow that describes how the classic idea of a set of neurons and links maps onto a series of matrix multiplications. Essentially, most tensors we are likely to deal with are either matrices (2D) or maybe 3D.

The following is a short example of one of my early test nets. This example creates a convolutional neural net that has one convolutional layer and two fully-connected layers. At points, I resize the tensors to perform certain operations, but mostly, it's a series of matrix multiplications and additions. Finally, I use a tanh activation function.
If you read a lot of the Tensorflow examples out there, you'll see a lot of ReLUs being used, but for our purposes, we need a nice range between -1 and 1.

graph = tf.Graph()
with tf.device('/gpu:0'):
    with graph.as_default():
        tf_train_dataset = tf.placeholder(tf.bool, [None, FLAGS.max_cdr_length, FLAGS.num_acids], name="train_input")
        output_size = FLAGS.max_cdr_length * 4
        dmask = tf.placeholder(tf.float32, [None, output_size], name="dmask")
        x = tf.cast(tf_train_dataset, dtype=tf.float32)
        W_conv0 = weight_variable([FLAGS.window_size, FLAGS.num_acids, FLAGS.num_acids], "weight_conv_0")
        b_conv0 = bias_variable([FLAGS.num_acids], "bias_conv_0")
        h_conv0 = tf.tanh(conv1d(x, W_conv0) + b_conv0)
        dim_size = FLAGS.num_acids * FLAGS.max_cdr_length
        W_f = weight_variable([dim_size, output_size], "weight_hidden")
        b_f = bias_variable([output_size], "bias_hidden")
        h_conv0_flat = tf.reshape(h_conv0, [-1, dim_size])
        h_f = tf.tanh((tf.matmul(h_conv0_flat, W_f) + b_f)) * dmask
        W_o = weight_variable([output_size, output_size], "weight_output")
        b_o = bias_variable([output_size], "bias_output")
        y_conv = tf.tanh((tf.matmul(h_f, W_o) + b_o) * dmask, name="output")
return graph

With this graph in place, I can then run it on my GPU with the following session:

def run_session(graph, datasets):
    ''' Run the session once we have a graph, training methodology and a dataset '''
    with tf.device('/gpu:0'):
        with tf.Session(graph=graph) as sess:
            training_input, training_output, validate_input, validate_output, test_input, test_output = datasets
            # Pull out the bits of the graph we need
            ginput = graph.get_tensor_by_name("train_input:0")
            gtest = graph.get_tensor_by_name("train_test:0")
            goutput = graph.get_tensor_by_name("output:0")
            gmask = graph.get_tensor_by_name("dmask:0")
            stepnum = 0
            # Working out the accuracy
            basic_error = cost(goutput, gtest)
            # Setup all the logging for tensorboard
            variable_summaries(basic_error, "Error")
            merged = tf.summary.merge_all()
            train_writer = tf.summary.FileWriter('./summaries/train', graph)
            train_step = tf.train.GradientDescentOptimizer(FLAGS.learning_rate).minimize(basic_error)
            tf.global_variables_initializer().run()
            while stepnum < len(training_input):
                item_is, item_os = next_item(training_input, training_output, FLAGS)
                mask = create_mask(item_is)
                summary, _ = sess.run([merged, train_step],
                                      feed_dict={ginput: item_is, gtest: item_os, gmask: mask})
                if stepnum % 100 == 0:
                    mask = create_mask(validate_input)
                    train_accuracy = basic_error.eval(
                        feed_dict={ginput: validate_input, gtest: validate_output, gmask: mask})
                    print('step %d, training accuracy %g' % (stepnum, train_accuracy))
                    train_writer.add_summary(summary, stepnum)
                stepnum += 1
            # save our trained net
            saver = tf.train.Saver()
            saver.save(sess, 'saved/nn02')

There are a few little gotchas here that are worth mentioning. It's important to call:

tf.global_variables_initializer().run()

Tensorflow gets upset if the variables in the system are not initialized. Most examples don't do things the way I do them, but I wanted to partition my program a little differently. If you've named your tensors and placeholders, you can reference them later by name:

ginput = graph.get_tensor_by_name("train_input:0")

A placeholder like ginput does exactly what you'd expect. It's like a sort of socket that you plug your data into. If I pass it a Numpy array of data, Tensorflow will make a tensor out of it and send it on its way around the graph.
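As a minimal, self-contained illustration of that placeholder/feed_dict mechanism (TensorFlow 1.x API; the tensor names and shapes here are mine, not from the nets above):

import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # A named placeholder acts as the "socket" the data is fed into
    x = tf.placeholder(tf.float32, [None, 4], name="demo_input")
    y = tf.reduce_sum(tf.tanh(x), name="demo_output")

with tf.Session(graph=graph) as sess:
    gin = graph.get_tensor_by_name("demo_input:0")
    gout = graph.get_tensor_by_name("demo_output:0")
    # A plain NumPy array is converted to a tensor when it is fed in
    print(sess.run(gout, feed_dict={gin: np.ones((2, 4))}))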
There's more to the network example above that I've not included, such as the cost functions and various support functions to create and initialise proper weights, but I think we can agree that it's not a lot of code to generate a fully usable neural network.

How do I research good like?

Those of you who have done a research-based master's degree or a PhD will no doubt have your own war stories. I'm sure there has been plenty written about the research experience, but although I'm still going through the process I have a few things I can mention.

Firstly, have a plan, and then realise that the plan is more like a framework. I've changed around bits of my plan already, but the things I've intended to cover, I've mostly covered. The order and priorities have changed, but overall, I can say whether or not I'm on track. The very nature of research will present you with new things you never expected, and that will force changes, but be aware of what you are spending your time on. Make sure you keep plenty of time spare for writing. In my case, I've got around 4 months pencilled in, which might not even be enough.

Secondly, regular contact with folks, especially your supervisor, is important. I work remotely — very remotely! I live in Washington DC but I'm still talking to my supervisor in London every couple of weeks with some news. It helps keep you on the straight and narrow and reminds you that you are not alone. In fact, this is very important to remember. I've had help from my wife, my friends and even people I've never met at the London Biohackspace, so keep in touch with folk.

Having the right tools is important, so long as you are spending time using the tools to get the work done! I've set up an AI machine that I've since not messed with; I can rely on it to just work. I've not upgraded Tensorflow or any of the libraries on it and I won't do until the job is done. I use Zotero to keep all my references in check, LaTeX for all my writing, and I use a timer to record how long I'm spending each week on research. I'm a big fan of Trello for keeping track of the things you need to be doing and any ideas that come into your head.

Finally, choose your project wisely. I went around at least 6 different supervisors, asking them about the projects that interested me. Not only that, I spoke with a couple of friends (who are both doctors in computer science) about which project sounded right for me, and I'm really glad I did! I've ended up with a project I truly enjoy and perhaps that's the most important thing. That way, you'll get it finished and to a high standard. Love of the project is needed to get through the tough stages (and I've already had a couple of these). Ask people you trust what they think of your options. You'll likely make a better choice.

Going further

In the next post, I'll talk a bit about the different kinds of neural networks we can make: from conv nets to LSTMs. I'll go into a little more detail about the various tests and algorithms we use to assess biological structures and what problems I've encountered on the way. I'll also talk a little about Jupyter notebooks and how we can make science a bit more accessible.
https://benjamin-computer.medium.com/learning-how-to-ai-and-biology-at-the-same-time-part-1-ee85ebcacf2c
CC-MAIN-2021-10
refinedweb
2,337
65.32
IRC log of css on 2011-05-18 Timestamps are in UTC. 15:39:38 [RRSAgent] RRSAgent has joined #css 15:39:38 [RRSAgent] logging to 15:39:43 [glazou] Zakim, this will be Style 15:39:43 [Zakim] ok, glazou; I see Style_CSS FP()12:00PM scheduled to start in 21 minutes 15:39:49 [glazou] RRSAgent, make logs public 15:52:54 [arronei] arronei has joined #CSS 15:54:19 [stearns] stearns has joined #css 15:54:38 [glazou] Zakim, code? 15:54:38 [Zakim] the conference code is 78953 (tel:+1.617.761.6200 tel:+33.4.26.46.79.03 tel:+44.203.318.0479), glazou 15:55:22 [Zakim] Style_CSS FP()12:00PM has now started 15:55:30 [Zakim] +stearns 15:55:39 [Zakim] +plinss 15:56:25 [Zakim] +??P33 15:56:32 [glazou] Zakim, ?P33 is me 15:56:32 [Zakim] sorry, glazou, I do not recognize a party named '?P33' 15:56:40 [glazou] Zakim, ??P33 is me 15:56:40 [Zakim] +glazou; got it 15:56:43 [Zakim] + +1.415.290.aaaa 15:57:52 [Zakim] -glazou 15:58:02 [Zakim] +??P33 15:58:14 [glazou] Zakim, ??P33 is me 15:58:14 [Zakim] +glazou; got it 15:58:21 [plinss] zakim, aaaa is fantasai 15:58:21 [Zakim] +fantasai; got it 15:58:25 [vhardy] vhardy has joined #css 15:58:37 [glazou] Zakim, who is here? 15:58:37 [Zakim] On the phone I see stearns, plinss, fantasai, glazou 15:58:39 [Zakim] On IRC I see, fantasai, 15:58:41 [Zakim] ... plinss, Bert, Hixie, gsnedders, jgraham, trackbot 15:58:44 [stearns] zakim, aaa is vhardy 15:58:44 [Zakim] sorry, stearns, I do not recognize a party named 'aaa' 15:59:00 [smfr] smfr has joined #css 15:59:02 [stearns] zakim, aaaa is vhardy 15:59:02 [Zakim] sorry, stearns, I do not recognize a party named 'aaaa' 15:59:10 [glazou] Zakim, aaaa is vhardy 15:59:10 [Zakim] sorry, glazou, I do not recognize a party named 'aaaa' 15:59:17 [plinss] zakim, fantasai is vhardy 15:59:17 [Zakim] +vhardy; got it 15:59:28 [plinss] zakim, who is here? 15:59:28 [Zakim] On the phone I see stearns, plinss, vhardy, glazou 15:59:29 [Zakim] On IRC I see smfr,, 15:59:31 [Zakim] ... fantasai, plinss, Bert, Hixie, gsnedders, jgraham, trackbot 15:59:41 [oyvinds] oyvinds has joined #css 15:59:49 [Zakim] +smfr 16:00:28 [Zakim] +[Microsoft] 16:00:29 [arronei] zakim, microsoft has me 16:00:30 [Zakim] +arronei; got it 16:00:52 [Zakim] +[Apple] 16:01:01 [hober] Zakim, Apple has me 16:01:01 [Zakim] +hober; got it 16:01:12 [Zakim] +[Microsoft.a] 16:01:13 [johnjan] johnjan has joined #css 16:01:33 [johnjan] zakim, microsoft has johnjan 16:01:33 [Zakim] +johnjan; got it 16:01:52 [bradk] bradk has joined #css 16:03:28 [Zakim] +bradk 16:03:58 [Zakim] + +1.415.920.aabb 16:04:10 [glazou] Zakim, aabb is fantasai 16:04:10 [Zakim] +fantasai; got it 16:04:43 [Zakim] +??P11 16:04:50 [kojiishi] zakim, ??p11 is me 16:04:50 [Zakim] +kojiishi; got it 16:04:56 [glazou] ScribeNick: vhardy 16:05:09 [Zakim] -bradk 16:05:26 [TabAtkins] Last-minute regrets - in a meeting. 16:05:28 [Zakim] + +1.415.832.aacc 16:05:36 [glazou] Zakim, aacc is arronei 16:05:42 [glazou] Zakim, aacc is arno 16:05:54 [Zakim] +arronei; got it 16:06:00 [Zakim] sorry, glazou, I do not recognize a party named 'aacc' 16:06:13 [fantasai] is there an agenda? 16:06:15 [murakami] murakami has joined #css 16:06:19 [arno] Zakim, aacc is arno 16:06:22 [Zakim] sorry, arno, I do not recognize a party named 'aacc' 16:06:26 [howcome] howcome has joined #css 16:06:31 [Zakim] +howcome 16:06:45 [vhardy] glazou: The listserv at W3C has issues. I sent the agenda yesterday evening. It can take a ong time to see an email in your inbox. 
16:07:07 [vhardy] glazou: this is for W3C mailing lists in general, not just the CSS lists. 16:07:23 [vhardy] arno: some of the email just seem to never make it to my inbox. 16:07:35 [vhardy] glazou: this happened to me too. 16:07:43 [vhardy] arno: yes, I do not see other people's email. 16:08:01 [vhardy] glazou: other agenda items? 16:08:11 [Zakim] +bradk 16:08:17 [vhardy] vhardy: will we have a meeting next week? 16:08:33 [vhardy] glazou: not sure, we have a chairing problem. 16:08:45 [vhardy] fantasai: may be Bert can char? 16:08:57 [vhardy] glazou; yes, I'lll try to find a replacement. 16:08:59 [glazou] 16:09:25 [vhardy] glazou: please respond to the questionaire about the next F2F. 16:09:33 [plinss] 16:09:59 [vhardy] glazou: please also fill out the information about your flight and arrival/departure info. 16:10:43 [vhardy] glazou: agenda items for Kyoto meeting. For now, we have CSS Regions/Exclusions, etc... (see above link). 16:10:49 [vhardy] glazou: are there other items? 16:10:49 [Zakim] -bradk 16:12:13 [vhardy] Hakon: could we discuss the multi-col test suite? 16:12:39 [Zakim] +bradk 16:12:45 [vhardy] hakon: the test suite is a start. 16:12:50 [vhardy] glazou: how complete is it? 16:13:01 [vhardy] hakon: it is a bit short on functionality. 16:13:11 [vhardy] hakon: we need more test cases for edge cases. 16:13:35 [vhardy] hakon: we would like to reach about 200 tests. 16:13:55 [vhardy] hakon: we currently have about 20. 16:14:11 [vhardy] hakon: I think Microsoft has between 50 and 100 tests. 16:14:55 [vhardy] johnjan: It is Microsoft's intention to contribute the tests. 16:15:16 [vhardy] glazou: other agenda items? 16:15:23 [johnjan] we just want to make sure we're not going to submit a bunch of duplicates to the opera tests 16:15:35 [vhardy] glazou: anything else about Kyoto? 16:15:45 [smfr] smfr has joined #css 16:15:48 [vhardy] plinss: I'll be there a few days in advance. 16:16:07 [alexmog] alexmog has joined #css 16:16:47 [vhardy] vhardy: the SVG WG will not meet in Kyoto. 16:16:56 [vhardy] glazou: yes, we had a message from them. 16:17:30 [vhardy] glazou: Cameron McCormak sent a message on May 12th. The SVG WG will reschedule the meeting likely late July in the US. 16:18:00 [vhardy] glazou: next agenda item. CSS 2.1 review period ended yesterday. 16:18:12 [vhardy] glazou: 23 answers. 21 are ok-go ahead. 2 are requesting changes. 16:18:35 [vhardy] glazou: some of the changes sent by Mohamed are related to references. 16:19:10 [vhardy] glazou: David from Mozilla had a comment about issue 225. Nokia mentioned it to. Saying we could add it to the document since it is resolved. 16:19:33 [vhardy] glazou: if we have add the resolution to the document, it could delay things. I would propose to move as fast as possible. 16:19:46 [vhardy] fantasai: Bert said the director could agree to make that change. 16:20:03 [vhardy] glazou: I am worried about a technical change that is not just editorial. 16:20:13 [vhardy] fantasai: I think the Director should make that decision. 16:20:26 [vhardy] fantasai: nobody objects to the change. 16:20:53 [vhardy] glazou: the Director could also be worried that not everybody reviewed the issue 225 resolution. 16:21:20 [fantasai] fantasai: That should be the director's call. 16:21:21 [vhardy] glazou: our responsibility as chairs is to decide on what we should recommend for the director. 16:21:34 [vhardy] glazou: unfortunately, Bert is not on the call. 
16:21:43 [fantasai] fantasai: I don't think we should recommend against the chagne 16:21:48 [fantasai] s/chagne/change/ 16:22:02 [vhardy] glazou: If we can make some of the changes Mohamed recommended. 16:22:36 [vhardy] glazou: please remind your AC rep. about PR release about 2.1 16:22:49 [glazou] 16:22:49 [vhardy] glazou: let's move to other agenda items. 16:23:51 [vhardy] vhardy: what do I need to do to prepare for WD publication. 16:24:22 [vhardy] fantasai: talk to me and Bert off-line. 16:24:27 [vhardy] vhardy: ok, will do. 16:24:56 [vhardy] glazou: did you incorporate comments in the draft. 16:25:05 [vhardy] vhardy: I am in the process to do that. 16:25:23 [vhardy] glazou: we cannot make a decision to publish to WD because we do not have enough attendance in that call. 16:25:36 [vhardy] glazou: I propose we wait until next week if we have a call or decide during the F2F. 16:25:49 [vhardy] glazou: next item, we can talk about namespaces. 16:26:04 [vhardy] glazou: I had an AI to ping the i18n WG. 16:26:34 [vhardy] kojiishi: Actually, this was just discussed in the i18n meeting earlier today. 16:26:58 [vhardy] kojiishi: we should have an answer by next week. 16:27:17 [vhardy] glazou: I hope it will not imply a lot of changes. If it does not, we can publish. 16:27:23 [vhardy] glazou: anything else on that topic? 16:27:31 [vhardy] glazou: anything else we should discuss today? 16:27:46 [fantasai] zakim, who is here? 16:27:46 [Zakim] On the phone I see stearns, plinss, vhardy, glazou, smfr, [Microsoft], [Apple], [Microsoft.a], fantasai, kojiishi, arronei, howcome, bradk 16:27:48 [Zakim] [Microsoft] has johnjan 16:27:48 [Zakim] [Apple] has hober 16:27:50 [Zakim] On IRC I see alexmog, smfr, murakami, bradk, johnjan, oyvinds, vhardy, stearns, arronei, RRSAgent, Zakim, glazou, Martijnc, arno, kojiishi, unomi, szilles, CSSWG_LogBot, karl, 16:27:52 [Zakim] ... lhnz, krijnh, TabAtkins, hober, fantasai, plinss, Bert, Hixie, gsnedders, jgraham, trackbot 16:29:17 [vhardy] fantasai: I have a question about what to do about the intric width of multi-col elements. 16:29:35 [vhardy] hakon: I do not think this is a multi-col specific issue. 16:29:53 [vhardy] hakon: I think this is an issue that we need to address, just not as a multi-col issue. 16:30:40 [vhardy] glazou: do you mean that the algorithm to compute the width of columns is orthogonal to the widht of the elements themselves. 16:30:47 [vhardy] s/widht/width 16:30:57 [vhardy] fantasai: where should I address this? 16:31:22 [vhardy] fantasai: we need to define the shrink wrap algorithm for table and other use cases. 16:32:01 [vhardy] fantasai: I would like if this issue should be left undefined or if we should add it to the appendix of wrapping mode. 16:32:27 [vhardy] hakon: yes, I think you should do. 16:33:13 [vhardy] fantasai: multi-col has special considerations, such as the max-content-width that is different for multi-col elements. 16:34:35 [vhardy] hakon: I do not think we should single out the multi-col elements. 16:35:50 [vhardy] fantasai: describes a use case with multi-column where there are specificities. 16:36:22 [vhardy] hakon: we have simplified the multi-column specification. 16:36:43 [vhardy] fantasai: I would like to address these use cases. 
16:41:41 [fantasai] we have 3 options 16:41:44 [vhardy] fantasai: we have 3 options: 16:41:54 [fantasai] a) leave shrinkwrap undefined, as currently in css3-multicol 16:42:16 [fantasai] b) define shrinkwrap to ignore multi-col properties, calculate as if columns weren't there 16:42:31 [fantasai] c) define shrinkwrap with consideration of multicol properties 16:44:29 [vhardy] hakon: there is already interoperable implementations of shrinkwrap in multi-col. It is not documented, but it is interoperably implemented. 16:44:48 [vhardy] hakon: if we document it, it should document current implementation. 16:45:06 [vhardy] galzou: fantasai says that the current impls. do not cover all scripts. 16:45:19 [vhardy] fantasai: yes, and other more complicated use cases we have not though of yet. 16:45:29 [vhardy] glazou: so we are not ready yet to standardize that? 16:45:59 [vhardy] fantasai: no, it is just that shrinkwrapping multi-col elements is that it is more important than we thought. 16:46:19 [vhardy] glazou: we have a pretty stable multi-col spec. that we can move along the spec. track. 16:46:39 [vhardy] glazou: the shrinkwrap algorithm needs to be extended separately, and implementors will have to do their work. 16:46:48 [vhardy] hakon: yes, I agree. 16:47:06 [vhardy] hakon: if there are new use cases, we could address them in a later spec. 16:47:25 [vhardy] glazou: yes, if we wait to address all use cases, we will drag the effort. 16:47:37 [vhardy] fantasai: I am not asking to modify the multi-col spec. 16:47:51 [vhardy] hakon: but you are asking to specifiy multi-col functionality in a different spec. 16:48:15 [vhardy] glazou: do we have a proposal? 16:48:20 [vhardy] fantasai: yes. 16:48:33 [vhardy] glazou: we need the whole group to be present for this discussion. 16:48:53 [vhardy] glazou: we cannot resolve it today. We can discuss it next week or during the F2F. 16:49:17 [vhardy] glazou: changing something in the feature related to the relation between two specification is something we can still discuss. 16:49:54 [vhardy] glazou: in the meantime, I propose we make progress on the multi-col spec. and make progress on the test suite, move it along as it is today. We have implementations, use cases on the web. 16:49:59 [vhardy] fantasai: ok with me. 16:50:04 [vhardy] hakon: ok with me. 16:51:04 [vhardy] glazou: no change in the multi-col spec. for now. We will discuss shrinkwrap issues related to multi-col with hakon present. 16:51:45 [vhardy] plinss: I proposed change to mercurial. Did not hear any objection. Planning to make the change today. 16:52:04 [vhardy] fantasai: do we have documentation on the mercurial client. 16:52:50 [vhardy] arronei: I have concerned about the documentation as well. We would need a place with documentation. 16:53:03 [vhardy] fantasai: we would need instructions for common functionality. 16:53:15 [vhardy] plinss: I can put this together. 16:53:26 [vhardy] fantasai: documenting merge process would be great. 16:54:38 [vhardy] (discussion about CVS/SVN merits) 16:56:45 [vhardy] glazou: any objection to move to mercurial? 16:56:51 [vhardy] fantasai: none if we have instructions. 16:56:57 [vhardy] (no objection) 16:57:01 [fantasai] very simple, clear, easy-to-follow instructions 16:57:08 [vhardy] RESOLUTION: moving test suite to mercurial. 
16:57:08 [fantasai] not "here's a link to the manual" :) 16:58:27 [Zakim] -[Microsoft.a] 16:58:29 [Zakim] -[Microsoft] 16:58:30 [Zakim] -smfr 16:58:30 [Zakim] -[Apple] 16:58:31 [Zakim] -vhardy 16:58:32 [Zakim] -glazou 16:58:34 [Zakim] -kojiishi 16:58:36 [Zakim] -plinss 16:58:41 [Zakim] -stearns 16:58:43 [Zakim] -howcome 16:58:44 [Zakim] -bradk 16:58:46 [Zakim] -fantasai 16:59:25 [vhardy] vhardy has left #css 17:03:04 [Zakim] -arronei 17:03:05 [Zakim] Style_CSS FP()12:00PM has ended 17:03:07 [Zakim] Attendees were stearns, plinss, glazou, +1.415.290.aaaa, vhardy, smfr, arronei, hober, [Microsoft], johnjan, bradk, +1.415.920.aabb, fantasai, kojiishi, +1.415.832.aacc, howcome 17:14:33 [smfr] smfr has left #css 17:39:54 [arno] arno has joined #css 17:40:09 [nimbupani] nimbupani has joined #css 17:40:31 [nimbupani] nimbupani has left #css 18:08:51 [arno] arno has joined #css 18:30:51 [arno] arno has joined #css 19:03:38 [Zakim] Zakim has left #css 19:07:12 [arno] arno has joined #css 19:18:15 [arno] arno has joined #css 20:05:17 [arno] arno has joined #css 20:28:44 [karl] karl has joined #CSS
http://www.w3.org/2011/05/18-css-irc
CC-MAIN-2015-14
refinedweb
2,746
74.08
Created on 2006-12-11 23:17 by mattgbrown, last changed 2009-09-17 09:20 by techtonik. This issue is now closed.

The ServerProxy class of the xmlrpclib module uses the old-style HTTP / HTTPS classes of the httplib module rather than the newer HTTPConnection and HTTPSConnection classes. The practical result of this is that xmlrpc connections are not able to make use of HTTP/1.1 functionality. Please update the xmlrpclib module to use the newer API provided by httplib so that the advanced functionality of HTTP/1.1 is available.

Can you provide a patch to make this change?

I think the only changes required are these:

--- /sw/lib/python2.5/xmlrpclib.py  2006-11-29 02:46:38.000000000 +0100
+++ xmlrpclib.py  2007-06-15 16:03:17.000000000 +0200
@@ -1182,23 +1182,13 @@
         self.send_user_agent(h)
         self.send_content(h, request_body)

-        errcode, errmsg, headers = h.getreply()
+        response = h.getresponse()
+
+        if response.status != 200:
+            raise ProtocolError(host + handler, response.status,
+                                response.reason, response.msg.headers)

-        if errcode != 200:
-            raise ProtocolError(
-                host + handler,
-                errcode, errmsg,
-                headers
-                )
-
-        self.verbose = verbose
-
-        try:
-            sock = h._conn.sock
-        except AttributeError:
-            sock = None
-
-        return self._parse_response(h.getfile(), sock)
+        payload = response.read()
+        return payload

 ##
 # Create parser.

@@ -1250,7 +1240,7 @@
         # create a HTTP connection object from a host descriptor
         import httplib
         host, extra_headers, x509 = self.get_host_info(host)
-        return httplib.HTTP(host)
+        return httplib.HTTPConnection(host)

There is another patch regarding the use of HTTP/1.1 for xmlrpclib at. The other issue could be updated with some comments from this issue and then this issue could be closed, I believe.

* use newer 2.0 public interface of httplib for connection handling

Attached patch is for trunk/. Tested for Python 2.5. issue1767370 is a separate issue that can be fixed later.

I consider this issue a duplicate of #6267, which is already fixed (committed). Reopen the issue if I'm wrong ;-) See also #2076 and #1767370 (other duplicates).

Yep, the patch at #6267 is an extension of this one, except for the last chunk where I also check if sockets are ssl-enabled. I am not sure why it was needed. It also may have been already fixed somewhere else. As this bug doesn't have any tests attached, it may be considered closed for now. Would be nice to see these fixes in Python 2.6 though, as it is the default version that seems to go in Ubuntu 9.10.

+ import socket
+ if not socket._have_ssl:
      raise NotImplementedError(
          "your version of httplib doesn't support HTTPS" )

@techtonik: I don't think that testing socket._have_ssl is better than testing for HTTPSConnection. socket._have_ssl might be True, whereas HTTPSConnection is missing for a random reason. xmlrpclib uses HTTPSConnection, not directly the socket library. HTTPSConnection may be implemented using something else than socket / ssl.

? =)

@techtonik: You wrote "HTTPConnection" twice. I don't really understand your request. Do you think that the issue is fixed in Python trunk or not? If not, please open a new issue since this issue is closed.

techtonik> And I still would like to see this fix in Python 2.6
techtonik> - too bad it hadn't enough attention before 2.6.

Python is developed by people working on Python in their free time. IMHO, voting for a ticket is useless.
If you want to see your fix land in Subversion faster, follow some rules:
- explain the issue correctly
- write a test
- write a patch
- explain your solution
- fix your patches if needed after each patch review

You posted your patch on the 1st of July 2008, and the 2.6 final version was released on the 1st of October 2008. It was a little bit too late for 2.6 (because of the beta/RC releases), but it may be included in the next 2.6.x release if it's easy to backport. I also think that not enough people are interested in XML-RPC + HTTPS.

This bug may be fixed. Unfortunately I do not possess the original setup anymore. The primary issue is still issue648658, and that affects bzr + launchpad integration, XML-RPC access to Bugzilla and probably more. And I want to add that I am glad that it is finally fixed, so I really appreciate the work people have done in this direction in their free time.
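For context, the API difference at the heart of this report looks roughly like this in Python 2-era code. This is an illustrative sketch of mine, not taken from any of the patches above; the host, handler and request body are placeholder values:

import httplib

host, handler = "www.example.com", "/RPC2"
request_body = "<?xml version='1.0'?><methodCall>...</methodCall>"

# Old pre-HTTP/1.1 interface used by xmlrpclib.Transport (httplib.HTTP)
h = httplib.HTTP(host)
h.putrequest("POST", handler)
h.putheader("Content-Type", "text/xml")
h.putheader("Content-Length", str(len(request_body)))
h.endheaders()
h.send(request_body)
errcode, errmsg, headers = h.getreply()
old_body = h.getfile().read()

# Newer HTTP/1.1-capable interface (httplib.HTTPConnection)
conn = httplib.HTTPConnection(host)
conn.request("POST", handler, request_body, {"Content-Type": "text/xml"})
response = conn.getresponse()
new_body = response.read()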
http://bugs.python.org/issue1613573
CC-MAIN-2013-20
refinedweb
718
69.48
A “Monty Python and the Holy Grail”-themed STL Tutorial

Version 1.3 © Kara Ottewell 1997-2020

Table of Contents
- Introduction
- Templates ite Domum
- What has the STL ever done for us ?
- Sequence Adapters
- Strings
- Iterators
- We are searching for the Associative Container
- Algorithms and Functions
- STL Related Web Pages
- Bibliography

Warning for the humour-impaired: Any strange references that you can’t understand are almost certainly a skit on Monty Python’s “Life of Brian”. These notes formed part of an internal course on the STL which I was asked to give to my colleagues at Yezerski Roper.

1. Introduction

A discussion of the Standard Library as a whole is beyond the scope of this document – see Stroustrup and others in the bibliography for more information.

What motivated people to write the STL ? Many people felt that C++ classes were inadequate in situations requiring containers for user defined types, and methods for common operations on them. For example, you might need self-expanding arrays, which can easily be searched, sorted, added to or removed from without messing about with memory reallocation and management. Other Object-Oriented languages used templates to implement this sort of thing, and hence they were incorporated into C++.

Driving forces behind the STL include Alexander Stepanov and Meng Lee at Hewlett-Packard in Palo Alto, California, Dave Musser at General Electric’s Research Center in Schenectady, New York, Andrew Koenig, and of course “Mr C++” himself, Bjarne Stroustrup at AT&T Bell Laboratories.

The example programs are known to work on Alpha/VAX VMS 6.2 onward using DEC C++ 5.6, and Microsoft Visual Studio 2017 or greater. The free Community Edition is fine. Platform-specific #pragmas have been guarded with #ifdef _VMS or #ifdef _WIN32. You can download the Microsoft Visual Studio solution, with all the samples as separate projects, as a .zip file here: KarasSTLTutorial.zip. The original code was written in 1997, the STL was still in its infancy, and I’m slightly embarrassed about some of the code I wrote back then, but that was 23 years ago. The VS solution has precompiled headers switched off to cut down on unnecessary files, and should not be regarded as a paragon of great coding. It is intended to be illustrative, with a dose of cringe-worthy humor thrown in.

The standalone C and C++ files, more suitable for OpenVMS, are downloadable as text files from this location, and as links to individual files throughout this tutorial. To build under OpenVMS use the MAKE.COM command file. Just give a program name like example_1_1 as its argument and it will look for files with extension .CXX, .CPP, .C, in that order. If you provide the extension then it uses that. On Alphas, output files get an _ALPHA suffix. Here is an example:

$ @MAKE EXAMPLE_1_1   ! On an Alpha
DEV$DISK:[KARA.]
CC/PREFIX=ALL EXAMPLE_1_1.C -> EXAMPLE_1_1.OBJ_ALPHA
LINK EXAMPLE_1_1 -> EXAMPLE_1_1.EXE_ALPHA

$ @MAKE EXAMPLE_1_2   ! Now on a VAX
DEV$DISK:[KARA.]
CXX/ASSUME=(NOHEADER_TYPE_DEFAULT)/EXCEPTIONS/TEMPLATE_DEFINE=(LOCAL) EXAMPLE_1_2.CXX -> EXAMPLE_1_2.OBJ
CXXLINK EXAMPLE_1_2 -> EXAMPLE_1_2.EXE

A slight buglet introduced in DEC C++ 5.6 for Alpha VMS means that you might get a warning in the CXXLINK step.
%LINK-W-NUDFSYMS, 1 undefined symbol:
%LINK-I-UDFSYM, WHAT__K9BAD_ALLOCXV
%LINK-W-USEUNDEF, undefined symbol WHAT__K9BAD_ALLOCXV referenced in psect __VTBL_9BAD_ALLOC offset %X00000004 in module MEMORY file SYS$COMMON:[SYSLIB]LIBCXXSTD.OLB;1

The undefined symbol is harmless and never referenced, but you can obtain the official patch from. Download and run cxxae01056.a-dcx_axpexe to unpack it, then whilst logged on as SYSTEM use @SYS$UPDATE:VMSINSTAL to install it.

Download individual sample programs using Right Click and “Save” on their links, or download all the examples in a .zip file. For Microsoft Visual Studio the solution and project files, as well as the examples, are provided in the Kara’s STL Tutorial .zip distribution.

The first two programs are implementations of expanding integer arrays which are sorted into value order. Example 1.1 is in ANSI C, and 1.2 is C++ using the STL. I have tried to make the C program as general as possible by using typedef, but this is still not really adequate, as we will see.

Click on the link to see example_1_1.c

In this program (see Kara’s C Course for an introduction to C) I used a typedef for the type of item I want to store and sort, and a get_array_space function to allocate and/or expand the available space. Even with very basic error handling get_array_space is rather ungainly. It handles different types of data if I change the typedef, but if I wanted more than one type of data stored, or even more than one buffer of the same type, I would have to write a unique get_array_space_type function for each. The compare_values function would also have to be rewritten, though this is also the case in the C++ code, for user-defined types or pointers to values.

Click on the link to see example_1_2.cpp

Contrast the C++ program, Example 1.2, with the previous code. Using the STL, we invoke the vector template class, which allows us to store any data type we like in what is essentially a contiguous, self-expanding, random access array.

2. Templates ite Domum

This is (incorrect) Latin for “Templates Go Home !” and represents the ambivalence that some new C++ programmers feel towards this language feature. I hope that you will be persuaded of its usefulness after reading this section.

C++ supports a number of OOP (Object Oriented Programming) concepts. Broadly speaking, it supports encapsulation through member functions and private or protected data members, inheritance by allowing classes to be derived from other classes and abstract base classes, and polymorphism through virtual functions, function signatures and templates. Templates achieve polymorphism by allowing us to define classes or functions in a generic way, and let the compiler/linker generate an instantiation of the function or class using the actual types we require.

At first sight this might not appear to be very different from macros in the C language. In the C program Example 1.1 we could have made it more flexible by using a macro to define the comparison function.
#define COMPARE_VALUES( value_type ) \
value_type compare_values_##value_type( const void *a, const void *b ) \
{const value_type *first, *second; \
first = (value_type *)a; second = (value_type *)b; return( *first - *second );}

COMPARE_VALUES( float )  /* Generate function for floats, */
COMPARE_VALUES( double ) /* doubles and */
COMPARE_VALUES( int )    /* ints */
.
/* Pick comparison function */
qsort( v, nitems, sizeof(array_type), compare_values_int );

The same method can be used for structure generation. There are a number of drawbacks to this approach. You have to explicitly generate functions for all the types you want to use, and there is no type checking in this particular case, so you could easily pass compare_values_float when you meant to compare integers, and you would have to be rigorous about your naming convention. In addition, some people would argue that macros are not as transparent, since you can't see what they expand into until compilation time.

Templates avoid these problems. Because they are built into the language, they are able to provide full type safety checking and deduce the types of their arguments automatically, generating the code for whatever arguments are used. C++ allows you to overload operators like < for user-defined types, so the same template definition often suffices for built-in and user-defined classes.

The following two sections discuss the syntax and use of template functions and classes.

Function Templates

Template functions have the following form:

template < template-argument-list > function-definition

The template-argument-list is one or more type-names within the scope of the template definition. In template functions the first argument is always a type, as in this code fragment.

template <class T> T mymin( T v1, T v2)
{
    return( (v1 < v2) ? v1 : v2 );
}

The above example uses class T, but you can also use things like template <class T, int iMaxObjects> if you want to constrain a particular parameter to be of some known type. For example, if you were doing some sort of buffer class, you might want people to be able to pre-allocate a particular size of buffer:

template <class T, int iMaxObjects = 256> class MyBuffer
{
protected:
    T MyBufferObjects[iMaxObjects];
public:
    MyBuffer & x(const T & x) { MyBufferObjects[0] = x; return *this; }
    void func(const T & buffer)
    {
        // Whatever
    }
};

class StringBuffer : public MyBuffer<char>      // Defaults to 256
{
    // Whatever
};

class ScreenBuffer : public MyBuffer<char, 80 * 25>
{
    // Whatever
};

You should be able to use the typename keyword in your function (or class) declaration, like this:

// This may well give an error but is perfectly legal
template <typename T> T mymin( T v1, T v2)
{
    return( (v1 < v2) ? v1 : v2 );
}

Stroustrup favours the class T format because it means fewer keystrokes. Personally I would prefer the typename T form, but won't use it because it will give errors with some compilers. Those of us who started programming using proper languages like FORTRAN are used to the idea of the compiler selecting the correct function variant 🙂 Not many people using the FORTRAN MAX function bother to specify IMAX0, JMAX0, KMAX0 and so on. The compiler selects the specific function according to the arguments. Remember that the class T type doesn't have to be a class. It can be a built-in type like int or float.
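To make the instantiation step concrete, here is a small sketch of mine (not one of the original example programs) showing the same mymin template serving both a built-in type and a user-defined type that supplies operator< :

#include <iostream>

template <class T> T mymin( T v1, T v2 )
{
    return( (v1 < v2) ? v1 : v2 );
}

// A user-defined type only needs operator< for mymin to work with it
struct Grail
{
    int iHolyness;
    bool operator<( const Grail &rhs ) const { return iHolyness < rhs.iHolyness; }
};

int main()
{
    std::cout << mymin( 3, 7 ) << std::endl;              // instantiates mymin<int>
    Grail g1 = { 1 }, g2 = { 5 };
    std::cout << mymin( g1, g2 ).iHolyness << std::endl;  // instantiates mymin<Grail>
    return 0;
}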
The C++ compiler always tries to find a "real" function with matching arguments and return type before generating a function from the template, as in the following program.

Click on the link to see example_2_1.cpp

The "real" function signature matched the float case, and was used in preference to the template. The using namespace std line is necessary on Windows if we wish to avoid prefixing STL features with std::, e.g. std::cin. It is not necessary with VMS and DEC C++ 5.6, though it may be with future versions of DEC C++.

Class Templates

Template classes have the following form:

template <template-argument-list> class-definition

The template-argument-list is one or more type-names within the scope of the template definition. For example:

Click on the link to see example_2_2.cpp

3. What has the STL ever done for us ?

"Well, yes, vectors, I mean obviously the vectors are good …"
"Don't forget queues Reg, I mean, where would we be without properly organized queues and iterators for any data type ?"
General murmurs of agreement
"Yes, alright, apart from vectors and queues …"
"Sorts Reg – I hated having to code up a new sort routine for every class."
Hear, hear, etc.
"Right. So apart from vectors, queues and associated container classes, iterators, various useful algorithms and functions, what has the STL ever done for us ?"
"Memory allocators Reg. We can allocate container memory using any scheme we like, and change it without rewriting all the code. It keeps things in order"
Reg loses his temper
"Order ? Order ? Oh shut up!"

At the end of this section, the waffle above should start making sense to you, but is unlikely to become more humorous as a result of your studies.

There are three types of sequence containers in the STL. These, as their name suggests, store data in linear sequence. They are the vector, deque and list:

- vector
- deque
- list

To choose a container, decide what sort of operations you will most frequently perform on your data, then use the following table to help you. Each container has attributes suited to particular applications. The subsections and code samples below should further clarify when and how to use each type of sequence container.

Throughout this tutorial, I have given the #include file needed to use a feature immediately after the subsection heading. Note that some of the header names have changed since earlier versions of the STL, and the .h suffix has been dropped. Older books may refer to, for example, <vector.h>, which you should replace with <vector>. If you include ANSI C headers, they should have the .h, e.g. <stdio.h>. C++ versions of the ANSI C headers, prefixed by the letter "c" and minus the .h, are becoming more widely available, but not all implementations currently support them, e.g. <cstdio>.

On OpenVMS systems a reference copy of the source code for the STL can be found in SYS$COMMON:[CXX$LIB.REFERENCE.CXXL$ANSI_DEF]. So for <vector>, look in there for the file VECTOR.; . For Windows, open the C++ file in Visual Studio, right-click on the header file name being included, such as #include <vector>, and choose "Open Document <vector>" in this example.

Vector

#include <vector>

We introduced the vector in Example 1.2, where we used it instead of an array. The vector class is similar to an array, and allows array-type syntax, e.g. my_vector[2]. A vector is able to access elements at any position (referred to as "random" access in the preceding table) with a constant time overhead, O(1). Insertion or deletion at the end of a vector is "cheap".
As with the string, no bounds checking is performed when you use operator []. Insertions and deletions anywhere other than at the end of the vector incur overhead O(N), N being the number of elements in the vector, because all the following entries have to be shuffled along to make room for the new entries, the storage being contiguous. Memory overhead of a vector is very low and comparable to a normal array. The table below shows some of the main vector functions. Some Vector Access Functions Purpose ---------------------------- ------- begin() Returns iterator pointing to first element end() Returns iterator pointing _after_ last element push_back(...) Add element to end of vector pop_back(...) Destroy element at end of vector swap( , ) Swap two elements insert( , ) Insert new element size() Number of elements in vector capacity() Element capacity before more memory needed empty() True if vector is empty [] Random access operator The next example shows a vector in use. Click on the link to see example_3_1.cpp Note how the element sort takes v.begin() and v.end() as range arguments. This is very common in the STL, and you will meet it again. The STL provides specialized variants of vectors: the bitset and valarray. The former allows a degree of array-like addressing for individual bits, and the latter is intended for numeric use with real or integer quantities. To use them, include the bitset or valarray header files (these are not always supported in current STL implementations). Be careful if you erase() or insert() elements in the middle of a vector. This can invalidate all existing iterators. To erase all elements in a vector use the clear() member function. Deque #include <deque> The double-ended queue, deque (pronounced “deck”) has similar properties to a vector, but as the name suggests you can efficiently insert or delete elements at either end. The table shows some of the main deque functions. Some Deque Access Functions Purpose --------------------------- ------- begin() Returns iterator pointing to first element end() Returns iterator pointing _after_ last element push_front(...) Add element to front of deque pop_front(...) Destroy element at front of deque push_back(...) Add element to end of deque pop_back(...) Destroy element at end of deque swap( , ) Swap two elements insert( , ) Insert new element size() Number of elements in deque capacity() Element capacity before more memory needed empty() True if deque is empty [] Random access operator A deque, like a vector, is not very good at inserting or deleting elements at random positions, but it does allow random access to elements using the array-like [] syntax, though not as efficiently as a vector or array. Like the vector an erase() or insert() in the middle can invalidate all existing iterators. The following program shows a deque representing a deck of cards. The queue is double-ended, so you could modify it to cheat and deal off the bottom 🙂 Click to see example_3_2.cpp The card game is a version of pontoon, the idea being to get as close to 21 as possible. Aces are counted as one, picture cards as 10. Try to modify the program to do smart addition and count aces as 10 or 1; use a vector to store your “hand” and give alternative totals. Notice the check on the state of the input stream after reading in the character response. This is needed because if you hit, say, Z, the input stream will be in an error state and the next read will return immediately, causing a loop if you don’t clear cin to a good state. 
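As a concrete illustration of the invalidation warning above, here is a short sketch (not one of the numbered example files) that removes elements from the middle of a vector safely by always carrying on from the iterator that erase() returns, rather than reusing the old one:

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    for ( int i = 0; i < 10; i++ )
        v.push_back( i );

    // Erase the odd numbers.  erase() shuffles the later elements down
    // and returns an iterator to the element after the one removed.
    vector<int>::iterator it = v.begin();
    while ( it != v.end() )
    {
        if ( *it % 2 )
            it = v.erase( it );   // the old iterator is now invalid
        else
            ++it;
    }

    for ( it = v.begin(); it != v.end(); ++it )
        cout << *it << " ";
    cout << endl;                 // prints 0 2 4 6 8
    return 0;
}

The same pattern applies to a deque, whose erase() also returns the next valid iterator.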
List #include <list> Lists don't provide [] random access like an array or vector, but are suited to applications where you want to add or remove elements to or from the middle. They are implemented as doubly linked list structures in order to support bidirectional iterators, and are the most memory-hungry standard container, vector being the least so. In compensation, lists allow low-cost growth at either end or in the middle. Here are some of the main list functions. Some List Access Functions Purpose -------------------------- ------ begin() Returns iterator pointing to first element end() Returns iterator pointing _after_ last element push_front(...) Add element to front of list pop_front(...) Destroy element at front of list push_back(...) Add element to end of list pop_back(...) Destroy element at end of list swap( , ) Swap two elements erase(...) Delete elements insert( , ) Insert new element size() Number of elements in list empty() True if list is empty sort() Specific member function because the generic sort routines expect random access iterators Click to see example_3_3.cpp The loop over elements starts at yrl.begin() and ends just before yrl.end(). The STL .end() functions return iterators pointing just past the last element, so loops should do a != test and not try to dereference this, most likely invalid, position. Take care not to reuse (e.g. ++) iterators after they have been used with erase() – they will be invalid. Other iterators, however, are still valid after erase() or insert(). Container Caveats Be aware that copy constructors and copy assignment are used when elements are added to and (in the case of the vector and deque) deleted from containers, respectively. To refresh your memories, copy constructor and copy assignment member functions look like this example:

class MyClass
{
public:
    .
    // Copy constructor
    MyClass( const MyClass &mc )
    {
        // Initialize new object by copying mc.
        // If you have *this = mc , you'll call the copy assignment function
    }
    // Copy assignment
    MyClass & operator =( const MyClass &mcRHS )
    {
        // Avoid self-assignment
        if ( this != &mcRHS )
        {
            // Be careful not to do *this = mcRHS or you'll loop
            .
        }
        return( *this );
    }
};

When you put an object in a container, the copy constructor will be called. If you erase elements, then destructors and copy assignments (if other elements need to be shuffled down) will be called. See the supplementary example, RefCount.cpp, for a demonstration of this. Another point to bear in mind is that, if you know in advance how many elements you are going to add to a container, you can reserve() space, avoiding the need for the STL to reallocate or move the container.

vector<MyClass> things;
things.reserve( 30000 );
for ( ... )
{
    things.push_back( nextThing );
}

The above code fragment reserves enough space for 30000 objects up front, and produced a significant speed up in the program. Allocators Allocators do exactly what it says on the can. They allocate raw memory, and return it. They do not create or destroy objects. Allocators are very "low level" features in the STL, and are designed to encapsulate memory allocation and deallocation. This allows for efficient storage by use of different schemes for particular container classes. The default allocator, alloc, is thread-safe and has good performance characteristics.
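For reference, the allocator is just one more template parameter with a sensible default, so everyday container declarations never mention it; a minimal sketch (assuming a standard-conforming library):

#include <memory>
#include <vector>
using namespace std;

int main()
{
    vector<int>                  v1;  // uses the default allocator
    vector<int, allocator<int> > v2;  // exactly the same type, spelled out in full

    v1.push_back( 1 );
    v2.push_back( 2 );
    return 0;
}

A custom allocation scheme can therefore be slotted in by supplying a different class in that second position, without touching any of the code that uses the container.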
On the whole, it is best to regard allocators as a “black box”, partly because their implementation is still in a state of change, and also because the defaults work well for most applications. Leave well alone! 4. Sequence Adapters Sequence container adapters are used to change the “user interface” to other STL sequence containers, or to user written containers if they satisfy the access function requirements. Why might you want to do this ? Well, if you wanted to implement a stack of items, you might at first decide to base your stack class on the list container – let’s call it ListStack – and define public member functions for push(), pop(), empty() and top(). However, you might later decide that another container like a vector might be better suited to the task. You would then have to define a new stack class, with the same public interface, but based on the vector, e.g. VectorStack, so that other programmers could choose a list or a vector based queue. It is obvious that the number of names for what is essentially the same thing start to mushroom. In addition, this approach rules out the programmer using his or her own underlying class as the container. Container adapters neatly solve this by presenting the same public interface irrespective of the underlying container. Being templatized, they avoid name proliferation. Provided the container type used supports the operations required by the adapter class (see the individual sections below) you can use any type for the underlying implementation. It is important to note that the adapters provide a restricted interface to the underlying container, and you cannot use iterators with adapters. Stack #include <stack> The stack implements a Last In First Out, or LIFO structure, which provide the public functions push(), pop(), empty() and top(). Again, these are self explanatory – empty returns a bool value which is true if the stack is empty. To support this functionality stack expects the underlying container to support push_back(), pop_back(), empty() or size() and back() Container Function Stack Adapter Function ------------------ ---------------------- back() top() push_back() push() pop_back() pop() empty() empty() size() size() You would be correct in surmising that you can use vector, deque or list as the underlying container type. If you wanted a user written type as the container, then if provided the necessary public interfaces, you could just “plug” it into a container adapter. Example 4.1 demonstrates a stack implemented with a vector of pointers to char. Note that the syntax of using container adapters differs from that shown in Saini and Musser or Nelson’s book, and is based on the December 1996 Working Paper of the ANSIC++ Draft Standard. Click to see example_4_1.cpp Note how the stack declaration uses two arguments. The first is the type of object stored, and the second is a container of the same type of object. Queue #include <queue> A queue implements a First In First Out, or FIFO structure, which provides the public functions push(), pop(), empty(), back() and front() ( empty() returns a bool value which is true if the queue is empty). To support these, queue expects the underlying container to have push_back(), pop_front(), empty() or size() and back() Container Function Queue Adapter Function ------------------ --------------------- front() front() back() back() push_back() push() pop_front() pop() empty() empty() size() size() You can use deque or list as the underlying container type, or a user-written type. 
You can’t use a vector because vector doesn’t support pop_front(). You could write a pop_front() function for vector, but this would be inefficient because removing the first element would require a potentially large memory shuffle for all the other elements, taking time O(N). The following code shows how to use a queue. Click to see example_4_2.cpp Note how we haven’t given a second argument in the queue declaration, but used the default deque given in the header file. Priority Queue #include <queue> A priority_queue, defined in the queue header, is similar to a queue, with the additional capability of ordering the objects according to a user-defined priority. The order of objects with equal priority is not really predictable, except of course, they will be grouped together. This might be required by an operating system process scheduler, or batch queue manager. The underlying container has to support push_back(), pop_back(), empty(), front(), plus a random access iterator and comparison function to decide priority order. Container Function Priority Queue Adapter Function ------------------ ------------------------------- front() top() push_back() push() pop_back() pop() empty() empty() size() size() [] random iterators Required to support heap ordering operations Hence a vector or a deque can be used as the underlying container, or a suitable user-provided class. The next sample program demonstrates a priority_queue implemented with a vector of task objects. Note that the syntax of using container adapters differs from that shown in Saini and Musser or Nelson’s book. Click to see example_4_3.cpp Example 4.3 program shows a user-defined comparison function object (discussed later), the only member of the PrioritizeTasks class. This is used to determine the relative priority of tasks and, like the << operator, is made a friend of the TaskObject class, with the second parameter being a const TaskObject & , so that it can access the private data members. When the tasks are pulled off the priority_queue, they are in our notional execution order, highest priority first. 5. Strings #include <string> A member of the C++ standards committee was allegedly told that if strings didn’t appear pretty darn quickly, then there was going to be a lynching. There hasn’t been a lynching, and whilst we can debate the merits of this, I think there is general agreement that it is a good thing to have strings at last. Those of us who started programming with proper languages, like FORTRAN, have long criticized the rather ugly syntax of C string manipulation; “What ? You have to call a function to add two strings ?” being a typical comment. The C++ string template class is built on the basic_string template. Providing much of the functionality of the container classes like vector, it has built in routines for handling character set conversion, and wide characters, like NT’s Unicode. The string class also provides a variety of specialized search functions for finding substrings. The characteristics of the character set stored in the string are described by the char_traits structure within the string, there being a different definition of this for each type of character set. On the whole you needn’t concern yourself too much with these details if you are using strings of ASCII characters. Strings, like the vector, expand as you add to them, which is much more convenient than C-style strings, where you either have to know how big they will be before you use them, or malloc and realloc. 
The largest possible string that can be accommodated is given by the max_size() access function. Some String Access Functions Purpose ---------------------------- ------- find(...) Find substring or character, start at start find_first_of(...) Find first occurrence of any characters in given set, starting from start of string find_last_of(...) Find last occurrence of any characters in given set, starting from start of string find_first_not_of(...) Find first occurrence of characters _not_ in given set, starting from start of string find_last_not_of(...) Find last occurrence of characters _not_ in given set, starting from start of string rfind(...) Find substring or character, start at end size() Number of characters in the string [] Random access to return a single character - no bounds checking at(...) Random access to return a single character - with bounds checking + Concatenate strings swap( , ) Swap two strings insert( , ) Insert a string at the specified position replace(...) Replace selected substring with another string The string provides the highest level of iterator functionality, including [] random access. Hence all relevant standard algorithms work with string. You can sort, reverse, merge and so on. Specialized versions of some algorithms, like swap(), are provided for strings to take advantage of certain optimization techniques. The operator [] allows you to access a single character in a string, but without any bounds checking. Use the at() function if you want bounds checking. The operator+ allows easy string concatenation, so you can now do things like

string firstname, lastname, name;
.
name = firstname + " " + lastname;

or

name = firstname;
name += " ";
name += lastname;

Easily understandable documentation on the string class is still a bit thin on the ground at the moment, so I have compiled some sample code to illustrate the main facilities. Click to see example_5_1.cpp The next program puts some of the string functions to use in a simple expression evaluator, which takes arithmetic-style expressions. It also shows the at() function which, unlike operator[], will throw an out_of_range exception for a bad index. Try calculating the rest energy of an electron and proton ( E = m*c^2). Click to see example_5_2.cpp The expression evaluator above introduces maps, discussed later. Here they are used to allow us to get a numeric value from the symbolic name stored in a string. 6. Iterators #include <iterator> An iterator you will already be familiar with is a pointer into an array.

char name[] = "Word";
char ch, *p;
p = name;   // or &name[0] if you like
ch = p[3];  // Use [] for random access
ch = *(p+3);// Equivalent to the above
*p = 'C';   // Write "through" p into name
while ( *p && *p++ != 'r' ); // Read name through p, look for letter 'r'

Looking at the above code sample shows how flexible and powerful iterators can be. The above code fragment uses p in at least 5 different ways. We take it for granted that the compiler will generate the appropriate offset for array elements, using the size of a single element. The STL iterators you've already met are those returned by the begin() and end() container access functions, that let you loop over container elements. For example:

list<int> l;
list<int>::iterator liter;  // Iterator for looping over list elements
for ( liter = l.begin(); liter != l.end(); ++liter )
{
    *liter = 0;
}

The end-of-loop condition is slightly different to normal.
Usually the end condition would be a less than < comparison, but as you can see from the table of iterator categories below, not all iterators support <, so we increment the iterator from begin() and stop just before it becomes equal to end(). It is important to note that, for virtually all STL purposes, end() returns an iterator "pointing" to an element just after the last element, which it is not safe to dereference, but is safe to use in equality tests with another iterator of the same type. The pre-increment ++ operator can sometimes yield better performance because there is no need to create a temporary copy of the previous value, though the compiler usually optimizes this away. Iterators are a generalized abstraction of pointers, designed to allow programmers to access different container types in a consistent way. To put it more simply, you can think of iterators as a "black box" between containers and algorithms. When you use a telephone to directly dial someone in another country, you don't need to know how the other phone system works. Provided it supports certain basic operations, like dialling, ringing, reporting an engaged tone, hanging up after the call, then you can talk to the remote person. Similarly, if a container class supports the minimum required iterator types for an algorithm, then that algorithm will work with the container. This is important because it means that you can use algorithms such as the sort and random_shuffle we've seen in earlier examples, without their authors having to know anything about the containers they are acting on, provided we support the type of iterator required by that algorithm. The sort algorithm, for example, only needs to know how to move through the container elements, how to compare them, and how to swap them. There are 5 categories of iterator: - Random access iterators - Bidirectional iterators - Forward iterators - Input iterators - Output iterators They are not all as powerful in terms of the operations they support – most don't allow [] random access, as we've seen with the difference between vector and list. The following is a summary of the iterator hierarchy, most capable at the top, operations supported on the right.

The Iterator Hierarchy

Iterator Type   Operations Supported
-------------   --------------------
Random access   == != < > <= >= ++ -- + - [] *p= -> =*p
Bidirectional   == != ++ -- *p= -> =*p
Forward         == != ++ *p= -> =*p
Input           == != ++ -> =*p
Output          ++ *p=

The higher layers have all the functionality of the layers below, plus some extra. Only random iterators provide the ability to add or subtract an integer to or from the iterator, like *(p+3). If you write an iterator it must provide all the operations needed for its category, e.g. if it is a forward iterator it must provide ==, !=, ++, *p=, -> and =*p. Remember that ++p and p++ are different. The former increments the iterator then returns a reference to itself, whereas the latter returns a copy of itself then increments. Operators must retain their conventional meaning, and elements must have the conventional copy semantics. In a nutshell, this means that the copy operation must produce an object that, when tested for equality with the original item, must match.
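To make the container-independence point concrete, here is a small sketch (not from the original tutorial; the function name is invented) of an algorithm written only in terms of the operations that the weaker iterator categories guarantee — ==, !=, ++ and dereferencing — so the one definition works with a plain array, a vector or a list:

#include <iostream>
#include <list>
#include <vector>
using namespace std;

// Uses nothing beyond == , != , ++ and *iter
template <class InputIterator, class T>
int count_matches( InputIterator first, InputIterator last, const T &value )
{
    int n = 0;
    while ( first != last )
    {
        if ( *first == value )
            ++n;
        ++first;
    }
    return n;
}

int main()
{
    int a[] = { 1, 2, 3, 2, 1 };
    vector<int> v( a, a + 5 );
    list<int>   l( a, a + 5 );

    cout << count_matches( a, a + 5, 2 )           << endl; // plain pointers
    cout << count_matches( v.begin(), v.end(), 2 ) << endl; // vector iterators
    cout << count_matches( l.begin(), l.end(), 2 ) << endl; // list iterators
    return 0;
}

Had count_matches used [] or < on its iterators, it would have compiled for the array and the vector but not for the list, which is precisely why algorithms are specified in terms of the weakest iterator category they can get away with.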
Because only random iterators support integer add and subtract, all iterators except output iterators provide a distance() function to find the "distance" between any two iterators. The type of the value returned is template <class Iterator> typename iterator_traits<Iterator>::difference_type This is useful if, for example, you find() a value in a container, and want to know the "position" of the element you've found.

map<string, double>::iterator im;
map<string, double>::difference_type dDiff;
im = my_map.find( key );
dDiff = distance( my_map.begin(), im );

Of course, this operation might well be inefficient if the container doesn't support random access iterators, since in that case it will have to "walk through" the elements comparing the iterators. Just as you can declare pointers to const objects, you can have iterators to const elements. The const_ prefix is used for this purpose, e.g. a const_iterator rather than an iterator. The iterator_traits for a particular class is a collection of information, like the "iterator tag" category, which helps the STL "decide" on the best algorithm to use when calculating distances. The calculation is trivial for random iterators, but if you only have forward iterators then it may be a case of slogging through a linked list to find the distance. If you write a new class of container, then this is one of the things you must be aware of. As it happens, the vector, list, deque, map and set all provide at least Bidirectional iterators, but if you write a new algorithm, you should not assume any capability better than that which you really need. The lower the category of iterator you use in your algorithm, the wider the range of containers your algorithm will work with. Although the input and output iterators seem rather poor in capability, in fact they do add the useful ability to be able to read and write containers to or from files. This is demonstrated in the program below, and again in Example 7.2. Click to see example_6_1.cpp The result of the possible restrictions on an iterator is that most algorithms have two iterators as their arguments, or (perhaps less safely) an iterator and a number of elements count. In particular, when you are using iterators, you need to be aware that it isn't a good idea to test an iterator against NULL, or to test if one iterator is greater than another. Testing for equality or inequality is safe except for output iterators, which is why the loops in the example code use iterator != x.end() as their termination test. Iterator Adapters Like the container adapters, queue, priority_queue and stack, iterators have adapters too. There are three types: - Reverse iterators - Insert iterators - Raw storage iterators The reverse iterator reverses the behaviour of the ++ and -- operators, so that a loop written with ++ walks backwards through the container. Standard containers all provide rbegin() and rend() functions to support this kind of thing. The insert iterators will, depending on the type of container, allow insertion at the front, back or middle of the elements, using front_insert_iterator, back_insert_iterator or insert_iterator. Because you might just as well use container.push_back() and so forth, their main use is as the return value of functions like front_inserter(), back_inserter and inserter, which modify how a particular algorithm should work. Raw storage iterators are used for efficiency when performing operations like copying existing container elements to regions of uninitialized memory, such as that obtained by the STL functions get_temporary_buffer and return_temporary_buffer. Look in the algorithm header for examples of iterator use. 7.
We are searching for the Associative Container "We've already got one !" Mumbles of "Ask them what it looks like" "Well what does it look like ?" "It's a verra naice !" There are four types of associative container in the STL. Associative containers are used to store objects that we want to be able to retrieve using a key. We could use a map as a simple token/value database, where a token might be a character string, and the value might be an integer. Associative containers store items in key order, based on a user-supplied comparison function, and the multi variants allow duplicate keys. Lookup is O(logN), N being the number of items stored. The associative containers are: - map - multimap - set - multiset All four associative containers store the keys in sorted order to facilitate fast traversal. For built-in types the Compare function can simply be a suitable STL function object, e.g. less<string> for a map keyed on strings. If you are storing pointers to objects rather than the objects themselves, then you will need to provide your own comparison function object even for built-in types. The multi variant of the container is able to store more than one entry with the same key, whereas map and set can only have one entry with a particular key. In Stroustrup's book he shows how to make a hash_map variant of map. When working well, even with large data sets this can perform lookups in O(1) time, compared to O(logN) performance from the map. However, hashing can exhibit pathological behaviour if many keys hash to the same value, and if hash table expansion is required, that can be a slow operation. Map and Multimap #include <map> A map is used to store key-value pairs, with the values retrieved using the key. The multimap allows duplicate keys, whereas maps insist on unique keys. Items in a map are, when they are dereferenced through an iterator for example, returned as a pair, which is a class defined in the utility header. The pair has two members, first and second, which are the key and the data respectively. The pair is used throughout the STL when a function needs to return two values. Some Map Access Functions Purpose ------------------------- ------- begin() Returns iterator pointing to first element end() Returns iterator pointing _after_ last element swap( , ) Swap two elements insert( , ) Insert a new element size() Number of elements in map max_size() Maximum possible number of elements in map empty() True if map is empty [] "Subscript search" access operator In the sample program below, which uses first and second, a list of tokens and values in the form pi = 3.1415926535898 c = 299792459.0 are read in from the file tokens.dat, then you are prompted to enter a token name for which the corresponding value is displayed. Because map supports the [] subscript operator, you can access the stored value using the key as the subscript. Click to see example_7_1.cpp In Example 5.2 we used the [] subscript operator as the lookup method with a map (symbol_values[item]). This is fine where we know that item is definitely in symbol_values[], but generally you must use the find(...) function and test against end(), which is the value returned if the key doesn't exist. Several variants of an insert() function exist for the map. The single argument version of insert allows you to test whether the item was already in the map by returning a pair<iterator, bool>. If successful the .second bool value will be true and the iterator will "point" at the inserted item. On failure .second will be false and the iterator will point at the duplicate key that caused the insertion to fail.
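The sketch below (not one of the numbered examples; the token names are made up) shows the difference between the convenient [] lookup and the safer insert()/find() pair just described:

#include <iostream>
#include <map>
#include <string>
using namespace std;

int main()
{
    map<string, double> symbols;

    symbols["pi"] = 3.1415926535898;   // [] inserts a new entry or overwrites an old one

    // insert() will not overwrite, and reports whether it succeeded
    pair<map<string, double>::iterator, bool> result =
        symbols.insert( map<string, double>::value_type( "pi", 3.0 ) );
    if ( !result.second )
        cout << "pi was already present with value " << result.first->second << endl;

    // find() is the safe lookup: it never creates an entry as a side effect
    map<string, double>::iterator it = symbols.find( "c" );
    if ( it == symbols.end() )
        cout << "c is not in the map" << endl;
    else
        cout << "c = " << it->second << endl;

    return 0;
}

Note that looking up a missing key with symbols["c"] would quietly insert "c" with a default-constructed value, which is rarely what you want when you are only reading.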
The map can only store one value against each key. Because each key can only appear once, if you try and add a second instance of the same key, then that will supersede the existing one. Edit the tokens.dat file used by Example 7.1 and convince yourself that this is the case. In situations where this restriction is not acceptable, the multimap should be used. Set and Multiset #include <set> The set stores unique keys only, i.e. the key is the value. Here are some of the set access functions: Some Set Access Functions Purpose ------------------------- ------ begin() Returns iterator pointing to first element end() Returns iterator pointing _after_ last element swap( , ) Swap two elements insert( , ) Insert a new element size() Number of elements in set max_size() Maximum possible number of elements in set empty() True if set is empty Like map, set supports the insert() function. Entries are kept in order, but you can provide your own comparison function to determine that order. Useful algorithms operating on a set are includes(), set_union(), set_intersection(), set_difference() and set_symmetric_difference(). The set supports bidirectional iterators; all set iterators are const_iterators, even if you declare them as set::iterator, so watch out for that. Click to see example_7_2.cpp This example also shows the use of an output iterator which in this case is directing the output to cout, but this could just as well be a file. The set can only store unique keys, hence if you try and insert a second instance of the same key a failure will result. The single argument version of insert( const value_type& ) returns pair( it, true or false) with the same meaning as for map. Remember that you can define what equality means, so the situation may arise where two "identical" set elements have different data members. In situations where this restriction is not acceptable, the multiset should be used. 8. Algorithms and Functions We've already met and used several of the STL algorithms and functions in the example programs, sort(...) being one of them. In the STL, algorithms are all template functions, parameterized by iterator types. For example, sort(...) might be implemented like this:

template <class RandomAccessIter>
inline void sort (RandomAccessIter first, RandomAccessIter last)
{
    if (!(first == last))
    {
        // Do the sort
    }
}

Because algorithms only depend on the iterator type they need no knowledge of the container they are acting on. This allows you to write your own container and, if it satisfies the iterator requirements, the STL algorithms will work with a container type "unknown" when they were written. Because the algorithms are all templatized, the compiler should be able to generate inline code to do the operation just as efficiently as if you had "hand coded" the routine. Many algorithms need a little help from the programmer to determine, for example, whether one item in a container is greater, less than or equal to another. This is where function objects come into the picture. Function objects are used similarly to function pointers in C. We have seen one example of function pointers in qsort used by Example 1.1. In OSF/Motif programs we often need to supply a "callback function" that will be executed when a particular event is seen by a "Widget", e.g. someone clicking on a button widget. In C we might do this:

typedef void (*CallbackProcedure)( Context c, Event e, Userdata u);

typedef struct {
    CallbackProcedure callback;
    Userdata udat;
} CallbackRecord;
.
/* My function to be called when button is pressed */
void ButtonPressedCB( Context c, Event e, Userdata u);
.
CallbackRecord MyCallbackList[] = { {ButtonPressedCB, MyData}, {NULL,NULL} };
.
SetButtonCallback( quit_button, MyCallbackList );

Problems with this approach are the lack of type safety, the overhead associated with indirecting function calls, lack of inline optimization, and problems with interpreting "user data" which tends to end up being a (void *) pointer. STL function objects avoid these problems because they are templatized, so provided the function object is fully defined at the point of use, the compiler can generate the code inline at compilation time. User data can be kept in the function object, and maintains type safety. You can, of course, pass function pointers to algorithms if you wish, but in the following sections it should become apparent that function objects are generally a better idea. Algorithms #include <algorithm> We all know that an algorithm is an abstract logical, arithmetical or computational procedure that, if correctly applied, ensures the solution of a problem. But what is an STL algorithm ? STL algorithms are template functions parameterized by the iterator types they require. In the iterators section I likened iterators to a "black box" that allowed algorithms to act on any container type which supported the correct iterators, as shown in the diagram below.

+------------+      +-----------+      +------------+
| Algorithms | <--> | Iterators | <--> | Containers |
+------------+      +-----------+      +------------+

The "abstraction layer" provided by iterators decouples algorithms from containers, and vastly increases the capability of the STL. Not all containers support the same level of iterators, so there are often several variants of the same algorithm to allow it to work across a range of containers without sacrificing speed benefits for containers with more capable iterators. The appropriate version of the algorithm is automatically selected by the compiler using the iterator tag mechanism mentioned earlier. It does this by using the iterator_category() template function within a "jacket definition" of the algorithm, and a rather convoluted mechanism which you don't really need to worry about (see Mark Nelson's book pages 338-346 if you really want to know the grisly details). Something you should worry about is whether the containers you are using support the iterators you need for an algorithm. The best way to determine this is to use a reference book, or look at the algorithm header file. For example, the min_element algorithm will have a definition similar to this:

template <class ForwardIterator>
ForwardIterator min_element (ForwardIterator first, ForwardIterator last);

Hence it requires a container that supports at least forward iterators. It can lead to strange errors if you try an algorithm with a container that doesn't provide the necessary iterators, because the compiler will try and generate code from the various templates and get confused before collapsing with an error. On VMS, this might result in a lot of %CXX-E-PARMTYPLIST, Ill-formed parameter type list. %CXX-E-BADTEMPINST, Previous error was detected during the instantiation of .. %CXX-E-OVERLDFAIL, In this statement, the argument list .. matches no .. errors. In Windows you tend to get lots of .. <`template-parameter-1',`template-parameter-2', .. .. could not deduce template argument for .. ..
does not define this operator or a conversion to a type acceptable to the predefined operator. Try compiling the example code and see what errors you get. Click to see example_8_1.cpp There are 60 different algorithms in 8 main categories in the STL. See Stroustrup pages 507-511 for a full list of all the functions. - Nonmodifying Sequence Operations – these extract information, find, position within or move through elements but don't change them, e.g. find(). - Modifying Sequence Operations – these are miscellaneous functions that do change the elements they act on, e.g. swap(), transform(), fill(), for_each(). - Sorted Sequences – sorting and bound checking functions, e.g. sort(), lower_bound(). - Set Algorithms – create sorted unions, intersections and so on, e.g. set_union(), set_intersection(). - Heap Operations – e.g. make_heap(), push_heap(), sort_heap(). - Minimum and Maximum – e.g. min(), max(), min_element(), max_element(). - Permutations – e.g. next_permutation(), prev_permutation(). - Numeric – #include <numeric> for general numerical algorithms, e.g. partial_sum(). Some of the algorithms, like unique() (which tries to eliminate adjacent duplicates) or remove(), can't actually delete elements from the container because, seeing only iterators, they have no knowledge of the container that holds the elements. What they actually do is shuffle the unwanted elements to the end of the sequence and return an iterator pointing just past the "good" elements, and it is then up to you to erase() the others if you want to. To get round this several algorithms have an _copy suffix version, which produces a new sequence as its output, containing only the required elements. Algorithms whose names end with the _if suffix only perform their operation on objects that meet certain criteria. To ascertain whether the necessary conditions, known as predicates, have been met, you pass a function object returning a bool value. There are two types of predicate: Predicate and BinaryPredicate. Predicates dereference a single item to test, whereas BinaryPredicates dereference two items which they might compare for instance.

template <class InputIterator, class Predicate, class Size>
void count_if( InputIterator first, InputIterator last, Predicate pred, Size& n);

This will return the number of objects in the range first to just before last that match the Predicate function object pred , which takes one argument – a reference to the data type you are checking. The next algorithm requires a BinaryPredicate.

template <class ForwardIterator, class BinaryPredicate>
ForwardIterator adjacent_find (ForwardIterator first, ForwardIterator last, BinaryPredicate binary_pred);

This will look in the range first to just before last for two adjacent objects that "match". If no match is found, then it returns last. Because a match is determined by the BinaryPredicate function object binary_pred , which takes two arguments (references to the appropriate data types), you can match on any conditions you like. In fact, there are two versions of adjacent_find : one just requires first and last and uses the == operator to determine equality, and the one above which gives you more control over the match test. With the information above, you should now be able to look at an algorithm in the header file or reference manual, and determine what sort of function object, if any, you need to provide, and what sort of iterators your container must support if the algorithm is to be used on it.
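As an illustration, here is a short sketch (names invented, not from the tutorial) of both predicate flavours in use; note that the standard count_if returns the count directly rather than accumulating it into a Size& argument as in the older signature quoted above:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

// Predicate: dereferences one item, returns bool
struct IsNegative
{
    bool operator()( int x ) const { return x < 0; }
};

// BinaryPredicate: dereferences two items, returns bool
struct DifferByOne
{
    bool operator()( int a, int b ) const { return a - b == 1 || b - a == 1; }
};

int main()
{
    int a[] = { 4, -1, 7, 8, -3, 2 };
    vector<int> v( a, a + 6 );

    cout << count_if( v.begin(), v.end(), IsNegative() )
         << " negative values" << endl;                      // prints 2

    vector<int>::iterator it =
        adjacent_find( v.begin(), v.end(), DifferByOne() );  // finds the 7,8 pair
    if ( it != v.end() )
        cout << "Adjacent pair: " << *it << " and " << *(it + 1) << endl;

    return 0;
}

Because both predicates are ordinary classes with an operator(), the compiler can expand the tests inline, which is the efficiency argument made for function objects in the next section.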
Functions and Function Objects #include <functional> Function objects are the STL's replacement for traditional C function pointers, and if you look at the STL algorithms, they are written as they would be if function objects were function pointers. This is why you can use function pointers (or plain, old functions) with the correct argument signature if you wish, though function objects offer several advantages, as we will see. We have already seen a function object used in Example 4.3 to compare two task objects. This is superior to the qsort style comparison function in many respects. The function object provides type safety, allows an object's copy constructors to be used if necessary (rather than just doing a binary copy), and doesn't require that the objects be contiguous in memory – it can use the appropriate iterators to walk through the objects. Function objects can be used to "tuck away" data that would otherwise have to be global, or passed in a "user data" pointer. The usual template features like inline optimization and automatic code generation for different data types also apply. "Why a 'function object'?", you might ask. What would be wrong with using a function template like this:

template <class T> bool is_less_than( const T &x, const T &y )
{
    return x < y;
};
.
sort( first, last, is_less_than<MyObject> );

Unfortunately this is not legal C++. You can't instantiate a template function in this way (Stroustrup page 855). The correct thing to do is to declare a template class with operator() in it.

template <class T> class is_less_than
{   // A function object
public:
    bool operator()( const T &x, const T &y ) { return x < y; }
};
.
sort( first, last, is_less_than<MyObject>() );

This is legal because we have instantiated the function object for MyObjects. There are three main uses for function objects within the STL. The comparison and predicate function objects which we have met already return a bool value indicating the result of a comparison, e.g. one object greater than another, or telling an algorithm whether to perform a conditional action, e.g. remove all objects with a particular attribute. The numeric function objects perform operations like addition, subtraction, multiplication or division. These usually apply to numeric types, but some, like +, can be used with strings. Several function objects are built in to the STL, such as plus, minus, multiplies, divides, modulus, negate, equal_to, not_equal_to, greater, and so on. See the functional header file for a complete list. If the data type defines the correct operator, you can use the pre-defined template function objects like this: some_algorithm( first, last, greater<MyObject>() ); so you don't always need to create your own function object from scratch. Try and use one of the library versions if it is available. This saves effort, reduces the chances of error and improves portability. To increase the flexibility of STL function objects, adapters are provided which allow us to compose slightly modified function objects from the standard ones. If we wanted to find values greater than 1997 in a vector of integers we would use a binder to take advantage of the greater() function, which takes two arguments, to compare each value with 1997. iter = find_if( v.begin(), v.end(), bind2nd(greater<int>(),1997) ); Other adapters exist to allow negation of predicates, calling of member functions, or use of "real" function pointers with binders. This topic is covered in some detail by Stroustrup in pages 518 onwards. 9.
STL Related Web Pages Here are a few of the URL’s I’ve collected relating to the STL and C++ draft standard. Use AltaVista and search for STL Tutorial, or the Yahoo! Standard Template Library section for more links. - Bjarne Stroustrup’s Homepage A man who needs no introduction, he has many links to other useful C++ and STL sites - Mumit’s STL Newbie Guide Mumit Khan’s informative STL introduction is full of examples - Standard Template Library Dave Musser’s Web Page. Highly recommended - The ISO/ANSI C++ Draft Jason Merrill’s HTML of the 02-Dec-1996 draft C++ Standard Template LibraryAnother great tutorial, by Mark Sebern - December 1996 Working Paper of the ANSI C++ Draft Standard The Standard Template Library Silicon Graphics STL Reference Manual, (some nonstandard features) Ready-made Components for use with the STL Collected by Boris Fomitche Sites of interest to C++ users by Robert Davies, this is packed full of very useful web site URLs for both the beginner and more advanced C++ programmer 10. Bibliography - STL Tutorial and Reference Guide C++ Programming with the Standard Template Library by David R. Musser and Atul Saini, Pub. Addison-Wesley, ISBN 0-201-63398-1Fairly useful in conjunction with Nelson - C++ Programmer’s Guide to the Standard Template Library by Mark Nelson, Pub. IDG Books Worldwide, ISBN 1-56884-314-3Plenty of examples and more readable than most of the other books - The Annotated C++ Reference Manual (known as the ARM) by Margaret A. Ellis and Bjarne Stroustrup, Pub. Addison-Wesley, ISBN 0-201-51459-1Explains templates – a “must have” book for anyone doing C++ programming - Data Structures and Algorithms in C++ by Adam Drozdek, Pub. PWS Publishing Company, ISBN 0-534-94974-6Not about the STL, but useful for understanding the implementation - Standard Template Library : A Definitive Approach to C++ Programming Using STL by P. J. Plauger, Alexander A. Stepanov, Meng Lee, Pub. Prentice Hall, ISBN 0-134-37633-1
https://pottsoft.wordpress.com/welcome-to-the-new-pottsoft-home-page/karas-c-stl-tutorial/
CC-MAIN-2020-24
refinedweb
9,225
53.1
Opened 5 years ago Closed 5 years ago Last modified 4 years ago #28161 closed Bug (fixed) contrib.postgres: ArrayField(CITextField) returns string instead of list Description The following code from django.db import models from django.contrib.postgres.fields import ArrayField, CITextField class Foo(models.Model): bar = ArrayField(models.TextField) baz = ArrayField(CITextField()) has this wrong behaviour: x = Foo.objects.get(id=1) x.bar # => ["Foo", "Bar"] x.baz # => "{Foo,Bar}" This is due to which requires registering a new adapter for citext fields: Use select typarray from pg_type where typname = 'citext'; to get the oid, then do psycopg2.extensions.register_type( psycopg2.extensions.new_array_type( (the_returned_oid,), 'citext[]', psycopg2.STRING)) for the correct behaviour in plain psycopg2. Django should do that automatically if any of the CI*Fields is used. Alternatively, the ArrayField should convert the result string manually to the proper list. Change History (9) comment:1 Changed 5 years ago by comment:2 Changed 5 years ago by comment:3 follow-up: 5 Changed 5 years ago by comment:4 Changed 5 years ago by comment:5 Changed 5 years ago by Replying to Simon Charette: Thank you for the quick fix, that helps a lot :-) Awesome! We'll have to do the same thing we do for hstore. Marking as release blocker because it's a bug in a newly introduced feature.
https://code.djangoproject.com/ticket/28161
CC-MAIN-2022-40
refinedweb
224
58.48
Problem Statement Suppose you have an array of integers. The problem “Arrange given numbers to form the biggest number” asks to rearrange the array in such a manner that the output should be the maximum value which can be made with those numbers of an array. Example [34, 86, 87, 765] 878676534 Explanation: We have concatenated numbers with each other such that it produces the highest value. We have the value greatest is 765 but if we put it forward, our output will be less than the value we have got now, so we have to take 87 first and then 86 and then rest. The result should start with the rest digit. Algorithm to Arrange given numbers to form the biggest number 1 Compare and check which value is lexicographically greater. 2. The greater value will put forward. 3. Return that value. Explanation We have been asked to rearrange the array in such a way that within the numbers of the array, all the numbers collectively would produce the greatest number that can be formed with the numbers of the array. So here we will be taking the inputs as a string of numbers. If the number is given we can easily convert them to strings. The question that arises is how to find the greater number when all numbers concatenated. Because the three digits number is definitely a greater number when we compare it with any two-digit number. But here we have to find the first digit of the number should be greater than any of the numbers in input. In this way, we are going to solve this problem. We will be using a compare method for string manipulations. With this, we are manually going to sort our input lexicographically. This means if the starting digit is greater than any other number whose starting digit is lower. Then we would put it first whose starting digit is greater. Then we have to sort all of the input in this manner, now we can do this with the help of either the compare method which are predefined methods of the languages or we can traverse each and every string and find out the lexicographically greater string. But the efficient method than that is the method defined here. Now we have to just concatenate them or simply print them as in order in which they sorted. Also, we should have taken the input in the string format, so that we can sort them in the order of lexicographical order. Code C++ code to arrange given numbers to form the biggest number #include <iostream> #include <string> #include <vector> #include <algorithm> using namespace std; int myCompare(string X, string Y) { string XY = X.append(Y); string YX = Y.append(X); return XY.compare(YX) > 0 ? 1: 0; } void getLargest(vector<string> arr) { sort(arr.begin(), arr.end(), myCompare); for (int i=0; i < arr.size() ; i++ ) cout << arr[i]; } int main() { vector<string> arr; arr.push_back("34"); arr.push_back("86"); arr.push_back("87"); arr.push_back("765"); getLargest(arr); return 0; } 878676534 Java code to arrange given numbers to form the biggest number import java.util.Collections; import java.util.Iterator; import java.util.Comparator; import java.util.Vector; class rearrangNumericString { public static void getLargest(Vector<String> arr) { Collections.sort(arr, new Comparator<String>() { @Override public int compare(String X, String Y) { String XY=X + Y; String YX=Y + X; return XY.compareTo(YX) > 0 ? 
-1:1; } }); Iterator it = arr.iterator(); while(it.hasNext()) System.out.print(it.next()); } public static void main (String[] args) { Vector<String> arr = new Vector<>(); arr.add("34"); arr.add("86"); arr.add("87"); arr.add("765"); getLargest(arr); } } 878676534 Complexity Analysis Time Complexity O(N*|S| log N) where “N” is the count of numbers and |S| denotes the length of the largest number. Merge sort will make N logN comparisons but since each comparison takes |S| time in the worst case. The time complexity will also be N*|S| logN. Space Complexity O(N*|S|) where “N” is the count of numbers. Here |S| denotes the length of the numeric input.
https://www.tutorialcup.com/interview/string/arrange-given-numbers-to-form-the-biggest-number.htm
CC-MAIN-2021-25
refinedweb
687
64.91
About this project Welcome to the Tweeting thermostat! To get started, you will need to install a couple of libraries. First, you will use pip and get tweepy. Use: pip install tweepy in your terminal or command prompt. Then you will need to go to the Twitter application management site and sign into your twitter account. From there select "New Application" and fill in the title. You will then need your API keys, so click on your project and scroll to the app settings. From there you can copy and paste your consumer key and secret into the python code. Next, generate an access token and secret. Copy/paste those into the code as well. Head on over to the Arduino IDE and load in the Arduino code. Connect the circuit that is shown in the schematic. For this project I used the Arduino Mega 2560, but any 5v Arduino board will work. Then select your serial port from the "tools" menu. Go back to the python code and change the serial port to whatever port you selected in the Arduino IDE. Go to "Sketch" -> "Libraries" -> "Manage Libraries" -> and search for "dht11". Install it into your IDE and then upload to your board. Run the python code and you are set to tweet your current temperature! It will update every 20 minutes and will avoid duplicate readings. Enjoy your new tweeting thermostat! Here is a picture of mine: You can follow my bot at: @BottHavingmc Happy tweeting!

Code

Arduino Side (C/C++)

#include <DHT.h>

#define DHTPIN 2      // what digital pin we're connected to
#define DHTTYPE DHT11 // DHT 11

DHT dht(DHTPIN, DHTTYPE);

void setup() {
  Serial.begin(9600);
  Serial.println("DHT11 Temperature Tweeter!");
  dht.begin();
}

void loop() {
  // Wait 20 minutes between measurements
  delay(1200000);
  // Read temperature as Fahrenheit (isFahrenheit = true)
  float f = dht.readTemperature(true);
  // print temp data
  Serial.println(f);
}

tweetBot.py (Python)

import tweepy
import serial
import time

auth = tweepy.OAuthHandler('<Your consumer key>', '<Your consumer secret>')
auth.set_access_token('<Your access token>', '<Your access token secret>')
api = tweepy.API(auth)

ser = serial.Serial('<Your Arduino serial port>', 9600)

pastTemp = None
while True:
    temp = ser.readline().strip()
    print temp
    # Only tweet when the reading has changed, to avoid duplicate statuses
    if temp != pastTemp:
        api.update_status("This is my current temp: " + str(temp))
        pastTemp = temp
    time.sleep(20 * 60)

Schematics

Author: Arduino "having11" Guy. Published on May 12, 2016.
https://create.arduino.cc/projecthub/gatoninja236/tweeting-thermostat-with-arduino-e3eda0
CC-MAIN-2021-49
refinedweb
403
69.58
Introduction In this Java tip, I will go through a few basic tools provided by the Eclipse IDE for navigating through your code. Tools that will be covered: 1. Using the Outline tool 2. View History tools 3. Showing code of selected Item 4. Code Folding 5. Open Declaration For this tip, you will want to get the Eclipse Helios IDE* (also known as Eclipse 3.6, it was released June 24, 2010). Download link: Eclipse Downloads. *note: older Eclipse IDE's are likely to have some or all of these features, but I can't guarantee that all of them will be there, or if they will even have the same functionality. Eclipse Code Outline tool The outline tool is a nice little tool that shows an overview of the current file opened. Attachment 165 As you can see from the screenshot, The outliner has a list of different classes, methods, fields, import declarations, etc. You can click on each item inside of the outliner and the text editor will go to the declaration of that item. Do note that the outliner does not show local variables (or at least, I haven't figured out to get it to do this yet). If you notice in the icon next to the field name, you can see all sorts of different icons for different properties: 1. The return type of methods or the type of the field is on the far right. 2. The visibility of the item is an icon on the right: - Private is a red square - Protected is a yellow diamond - Default is a blue triangle - Public is a green circle 3. Whether the item is static or abstract (denoted by a little A or S next to the item visibility icon. 4. Whether there is an error/warning with the item. This is located in the bottom left portion of the visibility icon. 5. Some other misc. information, such as overrided methods have a small triangle next to the visibility icon. Along the top row, you can filter the items shown in the outliner by clicking on the various buttons. Attachment 166 From left to right, the icons are: 1. Sort alphabetically. This sorts the different items alphabetically, with those alphabetically first put towards the top. Unchecking this option will re-order the items in the order they occur in the source. 2. Hide Fields. 3. Hide Static fields and methods. 4. Hide non-public members. Basically anything not declared public will be hidden in the outliner. 5. Hide local type. Not actually sure what qualifies as a local type... 6. Focus on active task. This will probably be talked about later in another tip. You can also provide a custom filter for the outliner. The only useful filter feature I've found is the name filter. Basically, this lets you create a filter that searches through for different items by their name. To create a custom filter: Click on the little triangle next to the icons. Select the "Filters..." item. I think it's basically a simplified regular expression engine that allows you to filter items you want to hide. You can also choose the options of hiding imports, package declarations, and/or synthetic members (not quite sure what synthetic members are yet). Quick Outlines It is possible to open up a quick outliner by right-clicking in the editor, then choosing the "Quick Outline" option. This basically opens up a outline viewer inside a small pop-up. You can also open up the quick outline view via the CTRL+O hotkey (kind of strange since most other applications reserve CTRL+O for open...) View History Tools View history tools allow you to jump between different sections of code that you've previously looked over. 
So say in the code below: Code Java: public class MyClass { String string; public static void main(String[] args) { Scanner reader = new Scanner("test.txt"); StringBuilder text = new StringBuilder(); while(reader.hasNext()) { text.append(reader.nextLine()); text.append("\r\n"); } MyClass tempObject = new MyClass(text.toString()); System.out.println(tempObject); System.out.println(tempObject.hashCode()); System.out.println(tempObject.doIt()); } public MyClass(String string) { this.string = string; // comment } public String toString() { return this.string(); } public int hashCode() { return this.string.hashCode(); } public String doIt() { return "Wrong method to use!"; } public String doItCorrect() { return "Correct method to use!"; } } Attachment 167 In the above code, say you were looking through this code and you get to the last line and wondered what doIt() did. You could use any of the other features outlined in this tip (such as the outline view or the open declaration) to go and take a look at what doIt() does. In this case, say it's not the correct method you wanted to use. You can use the "Back to..." button to jump back to the view you were previously looking at and change the code accordingly, so basically this is just an "undo/redo" feature for where you are looking in the editor. Note that this functionality does work across different files, so you can actually jump between several classes just by pressing the arrows, or you can even press the drop-down arrows to jump to a certain point in the view history. The button on the far left of the view history tools jumps your view to the last edit location. Basically, this is the last location where you actually changed something in the file. Again, this feature does work across multiple files, however unlike the buttons that change the view, this will only remember the last edit location, not a stack of all previous edits (one edit location period, not one edit location/file). Showing code of selected item It is possible to just show the code of a single section of a file. This allows you to edit a method or section of code without having to view through the other code (especially in large classes). This feature is actually not visible by default (at least not in Java editing). You have to click on the toolbar, choose customize perspective, in the Tool Bar Visibility tab, under Editor Presentation, check Show Source of Selected Element Only. Attachment 168 Once you've got the button enabled, simply enable it by clicking it and now you can use the outline viewer to not only jump to the method, but you can also hide all code other than the item you have selected. A note about using this feature: I've noticed that after using this feature, sometimes the code folding feature does break, I'm not sure if this is intended or if this is an actual bug. Code Folding Code folding is a useful feature similar to the above item which hides sections of code. Code folding is enabled by default. Unfortunately, there is only a specific list of items that you can fold: 1. Import declarations 2. Javadoc comments 3. Block comments (as long as they are not inside any methods) 4. Methods 5. Anonymous classes 6. Internal classes/enums/interfaces Hopefully that's all of them, there might be a few other items I've missed. 
To fold a section of code, simply find the item you want to fold, then click on the little circle on the right with a minus sign in it to fold the code, and click on the circle with a plus sign in the same location to un-fold that section of code. Attachment 169 Folding icons are highlighted by small red circles. Open Declaration Open Declaration is a useful feature if you want to jump to the declaration of different items. This allows you to quickly move between several pieces of code to find where an item is you want to look at/edit. To open the declaration Select the text you want to goto the definition of, Right click inside the editor, choose Open Declaration. You can actually put the edit cursor anywhere inside the item you want to open the declaration of and Eclipse will automatically pick up on which item it is you want to jump to. You can also use the open declaration hotkey (F3) Conclusion Hopefully this helps you when navigating through code, particularly code that is either very long, or code that you either didn't write or haven't looked at in a long time. Happy Coding :)
http://www.javaprogrammingforums.com/%20java-jdk-ide-tutorials/4718-java-tip-jul-5-2010-%5Beclipse-ide%5D-navigating-through-code-printingthethread.html
CC-MAIN-2013-48
refinedweb
1,400
63.29
NAME

base - Establish an ISA relationship with base classes at compile time

SYNOPSIS

    package Baz;
    use base qw(Foo Bar);

DESCRIPTION

Allows you to both load one or more modules and set up inheritance from those modules at the same time. Roughly similar in effect to:

    package Baz;
    BEGIN {
        require Foo;
        require Bar;
        push @ISA, qw(Foo Bar);
    }

When base is mixed with the fields pragma, inheritance of the base class's fields is also set up; see fields for a description of this feature. The base class' import method is not called.

DIAGNOSTICS

- Base class package "%s" is empty.
  base.pm was unable to require the base package, because it was not found in your path.
- Class 'Foo' tried to inherit from itself
  Attempting to inherit from yourself generates a warning.

    package Foo;
    use base 'Foo';

HISTORY

This module was introduced with Perl 5.004_04.

CAVEATS

Due to the limitations of the implementation, you must use base before you declare any of your own fields.
https://metacpan.org/pod/base
CC-MAIN-2015-48
refinedweb
108
64.71
Git::Hooks::CheckAcls - Git::Hooks plugin for branch/tag access control.

version 0.047

This Git::Hooks plugin hooks itself to the hooks below to guarantee that only allowed users can push commits and tags to specific branches.

- update: this hook is invoked multiple times in the remote repository during git push, once per branch being updated, checking if the user performing the push can update the branch in question.
- pre-receive: this hook is invoked once in the remote repository during git push, checking if the user performing the push can update every affected branch.
- ref-update: this hook is invoked when a push request is received by Gerrit Code Review, to check if the user performing the push can update the branch in question.

To enable it you should add it to the githooks.plugin configuration option:

    git config --add githooks.plugin CheckAcls

The plugin is configured by the following git options.

CheckAcls.userenv

This variable is deprecated. Please, use the githooks.userenv variable, which is defined in the Git::Hooks module. Please, see its documentation to understand it.

CheckAcls.admin

This variable is deprecated. Please, use the githooks.admin variable, which is defined in the Git::Hooks module. Please, see its documentation to understand it.

CheckAcls.acl

The authorization specification for a repository is defined by the set of ACLs defined by this option. Each ACL specifies 'who' has 'what' kind of access to which refs, by means of a string with three components separated by spaces:

    who what refs

By default, nobody has access to anything, except the above-specified admins. During an update, all the ACLs are processed in the order defined by the git config --list command. The first ACL matching the authenticated username and the affected reference name (usually a branch) defines what operations are allowed. If no ACL matches username and reference name, then the operation is denied.

The 'who' component specifies to which users this ACL gives access. It can be specified in the same three ways as explained for the CheckAcls.admin option above.

The 'what' component specifies what kind of access to allow. It's specified as a string of one or more of the following opcodes:

- C: create a new ref.
- R: rewind or rebase an existing ref (history rewriting).
- U: update an existing ref (fast-forward).
- D: delete an existing ref.

You may specify that the user has no access whatsoever to the references by using a single hyphen ( -) as the what component.

The 'refs' component specifies which refs this ACL applies to. It can be specified in one of these formats:

- A regular expression anchored at the beginning of the reference name. For example, "^refs/heads", meaning every branch.
- A negated regular expression. For example, "!^refs/heads/master", meaning everything but the master branch.
- The complete name of a reference. For example, "refs/heads/master".

The ACL specification can embed strings in the format {VAR}. These strings are substituted by the value of the corresponding environment variable VAR. This interpolation occurs before the components are split and processed.

This is useful, for instance, if you want developers to be restricted in what they can do to official branches but to have complete control over their own branch namespace.

    git config CheckAcls.acl '^. CRUD ^refs/heads/{USER}/'
    git config CheckAcls.acl '^. U ^refs/heads'

In this example, every user (^.) has complete control (CRUD) of the branches below "refs/heads/{USER}", supposing the environment variable USER contains the user's login name during a "pre-receive" hook. For all other branches (^refs/heads) the users have only update (U) rights.
This module exports two routines that can be used directly without using all of the Git::Hooks infrastructure. This is the routine used to implement the update and the pre-receive hooks. It needs a Git::More object. This script is heavily inspired (and, in some places, derived) from the update-paranoid example hook which comes with the Git distribution.
http://search.cpan.org/~gnustavo/Git-Hooks/lib/Git/Hooks/CheckAcls.pm
CC-MAIN-2014-23
refinedweb
635
58.89
About bioagentX: Member. Content count: 255. Community reputation: 130 (Neutral).

General pointer/object question in Java
bioagentX posted a topic in General and Gameplay Programming

Please look at the code for the question:

[SOURCE]
// Suppose I have the following relationship with nodes containing strings
Node node0 = new Node("Random String", null); // the first parameter is for the value of the node, and
                                              // the second parameter is where that node points
Node node1 = node0;
Node node2 = node1;
node1 = null;
// My question is, will node2 now point to node0?
// If so, how is that possible if node1 is no longer referring to node0?
[/SOURCE]

If you ever played an MGS game...
bioagentX replied to Nahoopii's topic in GDNet Lounge

Actually, he is the 2nd best character. 2nd only to Solid Snake himself. None of the other Snakes, even Liquid, even comes close. Revolver Ocelot is a P.I.M.P. BTW: That guy in the picture has pedophile written all over him. I recommend someone contact that school, and fast.

Sorting algorithms code
bioagentX posted a topic in General and Gameplay Programming

I'm trying to write a program that tests the speeds of three simple sorting algorithms. They are: selection sort, bubble sort, and insertion sort. The problem is, I'm not sure if my results are what they should be. I'd really appreciate it if someone could run this code, see how it works on their machine, and possibly inform me of any errors. Basically, there are 9 arrays. The first 3: a1, a2, and a3, are filled with random numbers in the range of 0-4999. The second 3: b1, b2, b3 are filled with numbers already sorted and in correct order. The last 3: c1, c2, c3 are sorted almost entirely, except for the last element which is set to the number 2. I think the sorting functions themselves are correct, so I think the error might lie in the initialization. One of my greatest concerns is the fact that the InsertionSort method seems to take 0 milliseconds on an array that is already sorted. Now I know that since the array is already sorted it is going to take less time to sort through it, but shouldn't it take at least a few milliseconds to do this? Anyway, here is the code. I know it's a lot to ask, but I've been checking it for like an hour and I can't see what the problem is.
[SOURCE]
import java.util.*;

public class Test {

    public static void Swap(int a[], int i, int j)
    {
        int temp = a[i];
        a[i] = a[j];
        a[j] = temp;
    }

    public static void BubbleSort(int myArray[]) //NOTE: This bubble sort does not contain the breakout section
    {
        for(int k = myArray.length - 1; k > 0; k--)
        {
            for(int j = 0; j < k; j++)
            {
                if(myArray[j] > myArray[j+1])
                {
                    Swap(myArray, j, j+1);
                }
            }
        }
    }

    public static void SelectionSort(int myArray[])
    {
        int minIndex = 0;
        for(int i = 0; i < myArray.length - 1; i++)
        {
            for(int j = i + 1; j < myArray.length; j++)
            {
                if(myArray[j] < myArray[minIndex])
                    minIndex = j;
            }
            if(i != minIndex)
            {
                //swap the numbers
                Swap(myArray, i, minIndex);
            }
            minIndex = i + 1;
        }
    }

    public static void InsertionSort(int myArray[])
    {
        for(int i = 1; i < myArray.length; i++)
        {
            for(int j = i; j > 0; j--)
            {
                if(myArray[j] < myArray[j-1])
                {
                    Swap(myArray, j, j-1);
                }
                else break;
            }
        }
    }

    public static void main (String[] args){
        Date startTime;
        long sortTime;

        ////////// randomized arrays ///////////
        int[] a1 = new int[5000];
        int[] a2 = new int[5000];
        int[] a3 = new int[5000];
        ///////// completely sorted /////////////
        int[] b1 = new int[5000];
        int[] b2 = new int[5000];
        int[] b3 = new int[5000];
        //////// almost sorted ////////////////////
        int[] c1 = new int[5000];
        int[] c2 = new int[5000];
        int[] c3 = new int[5000];
        ///////////////////////////////////////////

        //set up arrays a1, a2 and a3, where each has 5000 random integers from 1-5000 in it.
        //see the bottom of p. 329 for an example of how to do it.
        for(int i = 0; i < a1.length; i++)
        {
            a1[i] = (int)(Math.random() * 5000);
        }
        for (int j = 0; j < a2.length; j++)
        {
            a2[j] = (int)(Math.random() * 5000);
        }
        for (int k = 0; k < a3.length; k++)
        {
            a3[k] = (int)(Math.random() * 5000);
        }

        //set up arrays b1, b2 and b3, where each has 5000 integers from 0-4999 in it, perfectly sorted already.
        for(int i = 0; i < b1.length; i++)
        {
            b1[i] = i;
        }
        for (int j = 0; j < b2.length; j++)
        {
            b2[j] = j;
        }
        for (int k = 0; k < b3.length; k++)
        {
            b3[k] = k;
        }

        //set up arrays c1, c2 and c3, where each has 5000 integers from 0-4999 in it, perfectly sorted already.
        //then set the last element of the array to 2. so it'll go 4996, 4997, 4998, 2.
        for(int i = 0; i < c1.length; i++)
        {
            c1[i] = i;
        }
        for (int j = 0; j < c2.length; j++)
        {
            c2[j] = j;
        }
        for (int k = 0; k < c3.length; k++)
        {
            c3[k] = k;
        }
        c1[c1.length - 1] = 2;
        c2[c2.length - 1] = 2;
        c3[c3.length - 1] = 2;

        //for each array, do the sorts and see how long they take.
        //test this out going from 0-9 before you do it for 0-4999.
        //Used to test the arrays
        /*=============================== THE FOLLOWING IS CODE USED TO TEST THE ARRAYS ======================= */
        /*
        System.out.println("BEFORE");
        for(int i = 0; i < 10; i++)
        {
            System.out.print(a3[i] + " ");
        }

        //////////The selection Sorts//////////
        //SelectionSort(a3);
        //SelectionSort(b3);
        //SelectionSort(c3);
        /////////The bubble Sorts////////////
        //BubbleSort(a3);
        //BubbleSort(b3);
        //BubbleSort(c3);
        ////////the Insertion Sorts
        //InsertionSort(a3);
        //InsertionSort(b3);
        //InsertionSort(c3);
        ///////////////////////////
        System.out.println("");
        System.out.println("AFTER");
        for(int i = 0; i < 10; i++)
        {
            System.out.print(a3[i] + " ");
        }
        System.out.println("");
        System.out.println("");
        System.out.println("");
        System.out.println("");
        System.out.println("");
        System.out.println("");
        System.out.println("");
        */
        //======================================== END OF ARRAY TEST CODE ==============================================

        //----------------------------------------------
        //selection sort
        startTime = new Date();
        SelectionSort(a1);
        sortTime = (new Date()).getTime() - startTime.getTime();
        System.out.println("Selection sort of a1 took: " + sortTime + " ms.");

        startTime = new Date();
        SelectionSort(b1);
        sortTime = (new Date()).getTime() - startTime.getTime();
        System.out.println("Selection sort of b1 took: " + sortTime + " ms.");

        startTime = new Date();
        SelectionSort(c1);
        sortTime = (new Date()).getTime() - startTime.getTime();
        System.out.println("Selection sort of c1 took: " + sortTime + " ms.");

        //----------------------------------------------
        //bubble sort
        startTime = new Date();
        BubbleSort(a2);
        sortTime = (new Date()).getTime() - startTime.getTime();
        System.out.println("Bubble sort of a2 took: " + sortTime + " ms.");

        startTime = new Date();
        BubbleSort(b2);
        sortTime = (new Date()).getTime() - startTime.getTime();
        System.out.println("Bubble sort of b2 took: " + sortTime + " ms.");

        startTime = new Date();
        BubbleSort(c2);
        sortTime = (new Date()).getTime() - startTime.getTime();
        System.out.println("Bubble sort of c2 took: " + sortTime + " ms.");

        //----------------------------------------------
        //insertion sort
        startTime = new Date();
        InsertionSort(a3);
        sortTime = (new Date()).getTime() - startTime.getTime();
        System.out.println("Insertion sort of a3 took: " + sortTime + " ms.");

        startTime = new Date();
        InsertionSort(b3);
        sortTime = (new Date()).getTime() - startTime.getTime();
        System.out.println("Insertion sort of b3 took: " + sortTime + " ms.");

        startTime = new Date();
        InsertionSort(c3);
        sortTime = (new Date()).getTime() - startTime.getTime();
        System.out.println("Insertion sort of c3 took: " + sortTime + " ms.");
    }
}
[/SOURCE]

Thx, --BioX

Final Fantasy 7. Serious question
bioagentX replied to rholding2001's topic in GDNet Lounge

I played and beat the game about 3 to 4 years ago, so I remember my emotions when I saw that scene quite well. In fact, it was that game, as well as Metal Gear Solid One, that really made me want to be a game developer. Largely in part due to the greatness and quality that I saw in that one scene. First of all, I think the reason why so many people felt affected was because nothing similar had happened in video games prior. Oh sure, it had happened in movies all the time--and still does. But when it happens in a videogame, it affects the player a lot more. The player feels responsible--guilty--as if it was their fault. It wasn't Aeris in particular that sparked so much emotion, it was the fact that such an innocent character that was doing something innocent (praying) died so brutally. Not only did she die, but she was murdered by a character that seemed to epitomize evil.
This fact made her death so much more tragic. I felt this way even though I invested very little time into building up her character. In short, the reason why the scene is so remembered, is because it employed a unique tragedy in a videogame in full blown FMV format. The richness in quality of the video really helped to drive home the emotion, and due to the fact that she was a character that the player could control, players felt they had a closer relationship to the character than those from books or movies. Now onto the music Wihtout the music played in the scene(and there was music..I'd bet my life on it for anyone that didn't agree)it would have been far less affective. Music has a way of enticing the mind and inspring emotion. The sad simple toned melody really conveyed the idea of purity and innocense being killed. Music was important to this scene because the music in essence told us what to think. Music has a way of telling one what to think. If you don't believe me, imagine Aeris's death occuring to AC/DC music? Don't get me wrong, I love AC/DC, but rock is music that gets me excited and happy, not emotional and contemplative. And for that reason, because that music was able to inspire certain emotions, the scene of Aeris's death will continue to be remembered in video game history. [Edited by - bioagentX on December 12, 2004 10:35:45 AM] Acid Pro 4.0 bioagentX posted a topic in Music and Sound FXMy friend gave me a copy of Acid Pro 4.0 and I'm trying to learn how to use it. First of all, do I need a keyboard to use it? Can I make my own music without remixing previous songs? And where can I find a good tutorial on how to use Acid Pro 4.0? Thx for any help, --BioX 32-bit - long or int? bioagentX replied to Anima's topic in For Beginners's ForumLong will remain 32-bit, int will change to 64-bit when windows makes the change to a 64-bit OS. Javascript help bioagentX replied to bioagentX's topic in General and Gameplay ProgrammingThat doesn't seem to work. Are you sure you told me what I needed to know? In general can you use style sheets to make default event handlers (i.e something like mystyle:onMouseOver{color: red;})? --BioX Javascript help bioagentX posted a topic in General and Gameplay ProgrammingI'm creating a website for a friend at this site. Now as you can see the site is far from finished, but my attention is currently focused on the navigation bar on the left. I'm trying to make it so that the link color changes not when the cursor rolls over the link, but if the cursor rolls over anywhere in the entire table cell. Currently the table cell just changes to the color orange, but I want the link color text inside of it to change to yellow. I'd really appreciate it if someone could tell me how I would go about doing this, I've been going at it for like 2 hrs now. Any help is appreciated, --BioX Use of pointers in games bioagentX replied to Adam14's topic in For Beginners's ForumI'm curious though, in this first example, why do you need to use a pointer. Can't you just have something like "Enemy players_current_target" and set it equal to the desired option. Why do you need to use a pointer instead of initializing another object itself? --BioX Did "The Punisher" come out yet? bioagentX replied to bioagentX's topic in GDNet LoungeWas it good? Did "The Punisher" come out yet? bioagentX posted a topic in GDNet LoungeI'm wondering if "The Punisher" hit theaters yet? Did I miss it? 
The deal with Andre Lamothe
bioagentX replied to bioagentX's topic in GDNet Lounge

@Mr Hankey: Is that really his significant other? He doesn't seem too happy in that picture.

The deal with Andre Lamothe
bioagentX posted a topic in GDNet Lounge

Does Andre have a wife or kids? I know this is a weird question but I've been reading some bios about him and I've come to the conclusion that he's probably in his late 30's early 40's. He is so on top of his email responses and he's willing to give so much time to his readers, I'm wondering if he has a family that he is neglecting because of all this kindness. --BioX

- Oh ok, why? Sorry, this is the last question I promise, I've been having difficulty with the 'static' keyword for some time. :) --BioX
- Actually wait, it's still not quite working. Here is my new code:

#include<iostream.h>

class Whatever
{
public:
    static int x;
};

int main(/*int argc, char *argv[]*/)
{
    int Whatever::x;

    /*for(int i=1; i<argc; i++)
        cout<<"Argument: "<<argv[i]<<endl;
    */

    return 0;
}

My error:

Compiling... main.cpp
c:\microsoft visual studio\myprojects\bittest\main.cpp(19) : error C2655: 'x' : definition or redeclaration illegal in current scope
c:\microsoft visual studio\myprojects\bittest\main.cpp(7) : see declaration of 'x'
Error executing cl.exe.
https://www.gamedev.net/profile/43821-bioagentx/
CC-MAIN-2018-47
refinedweb
2,210
65.22
Matlab has changed a bit of the command "deploytool" in 2009a and 2009b. To see how to use matlab functions in 2009a, see my previous post here. In 2009b, this has changed a bit and this post will show you how to accomplish the same task in Matlab 2009b. When referencing the generated component in your .NET project, the MWArray assembly can be found in the generated package folder or in 'MatlabInstallDir\toolbox\dotnetbuilder\bin\win32\v2.0'.

Maryam
July 6, 2012 at 12:00 am

Hi, I have the same problem as "Christos75" is describing in this page. I'd really appreciate any help. I get the error "Could not load file or assembly 'MWArray, Version=2.9.1.0, Culture=neutral, PublicKeyToken=e1d84a0da19db86f' or one of its dependencies. An attempt was made to load a program with an incorrect format." when I try to connect my matlab .Net assembly to C#. I change the configuration to x64. The error resolves. However, I get another error indicating that one of the previous libraries developed initially in C# has the same problem: "'DAL, Version=1.2.1.0, Culture=neutral, PublicKeyToken=...' or one of its dependencies. An attempt was made to load a program with an incorrect format." Can you help me how to resolve this?

xinyustudio
July 6, 2012 at 9:13 am

Maryam, I think the reason might be: the DAL assembly you use is compiled in x86 configuration, so when you switch the project setting to x64, the DAL is not compatible with this setting any longer. One possible solution is to recompile your DAL solution in x64 mode, and reference the x64 version in your solution; or if you don't have the source, you might have to try your program on an x86 (32 bit) OS. Happy coding and hope this helps!

Maryam
July 6, 2012 at 11:02 pm

Many Thanks; That problem resolved. Actually I tried to create my Matlab MCR dll libraries with 32 bit matlab (on a 64 OS though). However, now I get another error. When I apply my .net libraries created in deploytool in a new project, everything's fine and I have my expected outputs; but when I merge it into the main application solution, add them as references, and apply them for coding, then I receive this exception: "System.TypeInitializationException was unhandled Message: The type initializer for 'com.SBD.SBDclass' threw an exception." SBDclass is the type that has been created with deploytool (Microsoft Framework 3.5). Do you have any idea about this? I even tried creating a new project and referring to it in the main application solution; but I received the same error!
But while running it throws an exception from at MathWorks.MATLAB.NET.Arrays.MWNumericArray.validateInput(Array realData, Array imaginaryData) at MathWorks.MATLAB.NET.Arrays.MWNumericArray.FastBuildNumericArray(Array realData, Array imaginaryData, Boolean makeDouble, Boolean rowMajorData) at MathWorks.MATLAB.NET.Arrays.MWNumericArray..ctor(Array realData, Array imaginaryData, Boolean makeDouble, Boolean rowMajorData) at MathWorks.MATLAB.NET.Arrays.MWNumericArray.op_Implicit(Array realData) at TestIntegration.Form1.buttonMagicSquare_Click(Object sender, EventArgs e) in K:\TestIntegration\TestIntegration\Form1.cs:line 46 at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) ……” The code that causes the exception is private void buttonMagicSquare_Click(object sender, EventArgs e) { MWNumericArray input = null; MWNumericArray output = null; int inputarg = 1; MLTestClass obj = new MLTestClass(); input = new MWNumericArray(5); output = obj.makesquare(1,input); /// Error comes from the second argument in this stmt, output = (MWNumericArray)output[0]; MessageBox.Show(output.NumericType.ToString()); } Can you help me in solving this problem? Thanks, Divya Divya September 13, 2012 at 1:02 pm The problem posted in the above comment has been solved. I think it was due to some incompatible settings parameter. Divya Moustafa November 8, 2012 at 8:55 am Thanks a lot .. This article was really a live saver ! Jungwoo June 18, 2013 at 8:40 am Hello, I’m searching my solution for problem. and than i find this site and you. I’coding C# with matlab compiler NE Tool. but My code has the error about mkl.dll. but my code is complete building and no error. i don’t know this situation. Details, when debugging and connecting to matlab functions in added class to making code, generate pop up windows about mkl.dll. and this pop up present “Can’t create the specifed module.” I’m so so sad… I’m crying soon. please help my problem. – See more at: sam September 26, 2013 at 5:58 pm Hi, thanks for the very interesting post. I am having trouble getting this to work in Azure.. Any ideas: I have compiled a MATLAB function using R2013a into a .NET dll using Builder NE. I can reference the dll and call the function successfully from a C# Console app. But the same code fails when running in a worker role through an Azure Cloud Service project. The application just crashes out silently and stops debugging at the line that initializes the class in the dll. There is this message in the Debug Output window: “The program ‘[8620] WaWorkerHost.exe: Managed (v4.0.30319)’ has exited with code -529697949 (0xe06d7363) ‘Microsoft C++ Exception’.” I have tried a couple of things in the project properties: Set the Platform target to x64; Unchecked ‘Enable the Visual Studio hosting process” It fails both running in the Azure Emulator on my dev machine, and when deployed to an Azure worker role on a Cloud Service. Yet when I execute a console app that calls the same dll via Process.Start it calls the MATLAB/MCR/.Net dll successfully. (I have a startup task that silently installs the MATLAB MCR) Thanks xinyustudio September 26, 2013 at 7:22 pm Sam, I have not tested this in cloud apps. What exception info does it give you? sam September 26, 2013 at 8:00 pm There’s no exception other than that line in the Debug Output window. It works in a console app, but not in the Azure emulator or in Azure proper. Ah well.. just thought I’d ask as your post was really good and hoped you might have tried it! 
Thanks Andreia January 20, 2014 at 2:53 am Hi xinyustudio, I have compiled a MATLAB function using R2012b into a .NET dll using Builder NE. I can reference the dll and call the function successfully from a C# Console app. I already have include this namespace: MathWorks.MATLAB.NET.Arrays I can build too without errors (Visual Studio 2013), just with one warning “There was a mismatch between the processor architecture of the project being built “MSIL” and the processor architecture of the reference “MWArray, Version=2.12.1.0, Culture=neutral, PublicKeyToken=e1d84a0da19db86f, processorArchitecture=AMD64”, “AMD64″.. MatLabTest”. But when i run my application test, it fails in this line: “MyFingerClass obj = new MyFringerClass()” the error is: “TypeInitializationException was unhandled” I follow yours steps and the steps from the site “”, and also this one “” to communicte with matlab. Can you solve it? Andreia January 20, 2014 at 3:30 am I resolved the warning, i put everything in x64. But i have another error…. When i execute the command: var activationContext = Type.GetTypeFromProgID(“matlab.application.single”); var matlab = (MLApp.MLApp)Activator.CreateInstance(activationContext); MyfingerClass obj = new MyfingerClass; Console.WriteLine(matlab.Execute(“tks=obj.demo_fingerprint()”)) it says “undefined variable “obj” or class “obj.demo_fingerprint” “, where demo_fingerprint is a funtion, that i have geneated in MatLab, from class MyFingerClass Can tpu solve it? Andreia January 20, 2014 at 5:28 am I already solved the problem thanks O.Hussain May 12, 2014 at 1:31 am Hi, Can I use the same way in Unity 3D with matlab functions ? Thanks alot, xinyustudio May 12, 2014 at 5:41 pm Yes, sure. No problem at all. O.Hussain May 12, 2014 at 10:07 pm Hi xinyustudio, I tried it but it gives me this error: MissingMethodException: Method not found: ‘MathWorks.MATLAB.NET.Utility.MWMCR..ctor’. Rethrow as TypeInitializationException: An exception was thrown by the type initializer for Mat_unity.Mat_unity testwith_matlab.Update () also i put the dlls from dist folder in the Assets folder of my unity project. xinyustudio May 12, 2014 at 10:37 pm @o.Hussain, if you have looked through the post from the beginning to the end, you will find there is a solution to it. Specifically, go to (8) and then (2) to see how to troubleshoot this. Good luck. Meysam July 12, 2014 at 4:05 pm Hi xinyustudio, I have a simple function for sample but I have a error with this function! my error is: ————————- … MWMCR::EvaluateFunction error … Too many output arguments.. ————————- my function in matlab file was: ————————- function [y]=foo(x) y=x^2; end ————————- and my code in C#.net is: ———————— using System; using System.Collections.Generic; using System.Linq; using System.Text; using foodll01; using MathWorks.MATLAB.NET.Arrays; using MathWorks.MATLAB.NET.Utility; namespace foo { class Program { static void Main(string[] args) { Class1 css1 = new Class1(); MWNumericArray w = new MWNumericArray(new Int32()); w = css1.foo(2); \\error in this line Console.WriteLine(w.ToString()); Console.ReadKey(); } } } ———————– TanX! xinyustudio July 13, 2014 at 10:50 am [y]=foo(x) should be y=foo(x)? No need to use [] when you return a single output. Tuan June 11, 2016 at 12:33 am I have a old application build by C# and they use some .dll files (m2m_aoa.dll, m2m_ctoa.dll, MWArray) which built by Matlab. I assume it was build by 32bit system. They all reference path at ” project_name\bin\Debug\”. 
Now I run that application in new system with Matlab 2014a (64bit),Visual Studio 2013 and MCR_R2014a_win64_installer. 1. First time, from Visual Studio, I run it in x86 platform, I got problem in line of code MWNumericArray mwMS = new MWNumericArray(MWArrayComplexity.Real, 2, 1); with error: Exception:Thrown: “The type initializer for ‘MathWorks.MATLAB.NET.Arrays.MWNumericArray’ threw an exception.” (System.TypeInitializationException) A System.TypeInitializationException was thrown: “The type initializer for ‘MathWorks.MATLAB.NET.Arrays.MWNumericArray’ threw an exception.” and they stuck in that line of code. 2. I try to build in x64 platform with copy MWArray.dll from “dotnetbuilder\bin\win64\v2.0” into directory : bin\Debug\MWArray.dll I run again and they pass that line of code. But I got new error in another m2m_ctoa.dll file which use MWArray (this file also is built by Matlab) in this line of code: ctoa = new CTOA(); with two errors Exception.” (System.BadImageFormatException) A System.BadImageFormatException was Exception:Thrown: “The type initializer for ‘m2m_ctoa.CTOA’ threw an exception.” (System.TypeInitializationException) A System.TypeInitializationException was thrown: “The type initializer for ‘m2m_ctoa.CTOA’ threw an exception.” Then the application shutdown immediately if I do not try catch in this line of code. Please help me. I am so confuse. All errors happen with these .dll files. I feel VS can not read these files. Maybe they are empty but their space up to 88KB each. Could these old .dll files which build from 32bit platform able run in my system? If it could, how can I fix it? Are the location of these files normal? They are in directory ” project_name\bin\Debug\” and the reference paths are match. Do I have to compile these files from Matlab again? In this case, could I convert dll file to Matlab code? Please help me. Thank you very much in advance.
https://xinyustudio.wordpress.com/2009/11/12/using-matlab-functions-in-c-2009b/
CC-MAIN-2019-30
refinedweb
1,962
60.01
What is JSON?

JSON is a universal, language-independent format for data. In this way, it's similar to XML. Whereas XML uses tags, JSON is based on the object-literal notation of JavaScript. Therefore the format is simpler than XML. In general, JSON-encoded data is less verbose than the equivalent data in XML and so JSON data downloads more quickly than XML data.

When you encode the stock data for StockWatcher in JSON format, it will look something like this (but the whitespace will be stripped out).

[
  { "symbol": "ABC", "price": 87.86, "change": -0.41 },
  { "symbol": "DEF", "price": 62.79, "change": 0.49 },
  { "symbol": "GHI", "price": 67.64, "change": 0.05 }
]

Creating a source of JSON data on your local server

Reviewing the existing implementation

In the original StockWatcher implementation, you created a StockPrice class and used the refreshWatchList method to generate random stock data and then called the updateTable method to populate StockWatcher's flex table.

In this tutorial, you'll create a servlet to generate the stock data in JSON format. Then you'll make an HTTP call to retrieve the JSON data from the server. You'll use JSNI and GWT overlay types to work with the JSON data while writing the client-side code.

Writing the servlet

To serve up hypothetical stock quotes in JSON format, you'll create a servlet. To use the embedded servlet container (Jetty) to serve the data, add the JsonStockData class to the server directory of your StockWatcher project and reference the servlet in the web application deployment descriptor (web.xml).

Note: If you have a web server (Apache, IIS, etc) installed locally and PHP installed, you could instead write a PHP script to generate the stock data and make the call to your local server. What's important for this example is that the stock data is JSON-encoded and that the server is local.

public class JsonStockData extends HttpServlet {

  private static final double MAX_PRICE = 100.0; // $100.00
  private static final double MAX_PRICE_CHANGE = 0.02; // +/- 2%

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {

    Random rnd = new Random();

    PrintWriter out = resp.getWriter();
    out.println('[');

    String[] stockSymbols = req.getParameter("q").split(" ");
    boolean firstSymbol = true;
    for (String stockSymbol : stockSymbols) {

      double price = rnd.nextDouble() * MAX_PRICE;
      double change = price * MAX_PRICE_CHANGE * (rnd.nextDouble() * 2f - 1f);

      if (firstSymbol) {
        firstSymbol = false;
      } else {
        out.println(" ,");
      }
      out.println(" {");

      out.print(" \"symbol\": \"");
      out.print(stockSymbol);
      out.println("\",");

      out.print(" \"price\": ");
      out.print(price);
      out.println(',');

      out.print(" \"change\": ");
      out.println(change);

      out.println(" }");
    }

    out.println(']');
    out.flush();
  }
}

Including the server-side code in the GWT module

The embedded servlet container (Jetty) can host the servlet that generates the stock data in JSON format. To set this up, add <servlet> and <servlet-mapping> elements to the web application deployment descriptor (web.xml) and point to JsonStockData.

<servlet>
  <servlet-name>jsonStockData</servlet-name>
  <servlet-class>com.google.gwt.sample.stockwatcher.server.JsonStockData</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>jsonStockData</servlet-name>
  <url-pattern>/stockwatcher/stockPrices</url-pattern>
</servlet-mapping>
</web-app>

To work with the JSON data on the client, you'll use JSNI (JavaScript Native Interface) and GWT overlay types. This is the JSON data coming back from the server:
[ { "symbol": "ABC", "price": 47.65563005127077, "change": -0.4426563818062567 }, ] First, you’ll use JsonUtils.safeEval() to convert the JSON string into JavaScript objects. Then, you’ll be able to write methods to access those objects. // JSNI methods to get stock data. public final native String getSymbol() /*-{ return this.symbol; }-*/; public final native double getPrice() /*-{ return this.price; }-*/; public final native double getChange() /*-{ return this.change; }-*/; For the latter, you’ll use JSNI. When the client-side code is compiled to JavaScript, the Java methods are replaced with the JavaScript exactly as you write it inside the tokens. Coding with JSNI As you see in the examples above, using JSNI you can call handwritten (as opposed to GWT-generated) JavaScript methods from within the GWT module. JSNI methods are declared native and contain JavaScript code in a specially formatted comment block between the end of the parameter list and the trailing semicolon. A JSNI comment block begins with the exact token /*-{ and ends with the exact token }-*/. JSNI methods are called just like any normal Java method. They can be static or instance methods. In Depth: For tips, tricks, and caveats about mixing handwritten JavaScript into your Java source code, see the Developer’s Guide, JavaScript Native Interface (JSNI). Converting JSON into JavaScript objects First you need to convert the JSON text from the server into JavaScript objects. This can be easily done using the static JsonUtils.safeEval() method. We’ll see later how to use it. JSON data types As you might expect, JSON data types correspond to the built-in types of JavaScript. JSON can encode strings, numbers, booleans, and null values, as well as objects and arrays composed of those types. As in JavaScript, an object is actually just an unordered set of name/value pairs. In JSON objects, however, the values can only be other JSON types (never functions containing executable JavaScript code). Another technique for a converting a JSON string into something you can work with is to use the static JSONParser.parse(String) method. GWT contains a full set of JSON types for manipulating JSON data in the com.google.gwt.json.client package. If you prefer to parse the JSON data, see the Developer’s Guide, Working with JSON. Creating an overlay type Your next task is to replace the existing StockPrice type with a StockData type. Not only do you want to access the array of JSON objects, but you want to be able to work with them as if they were Java objects while you’re coding. GWT overlay types let you do this. The new StockData class will be an overlay type which overlays the existing JavaScript object. Create the StockData class. Note: The commented numbers refer to the implementation notes below. You can delete them. package com.google.gwt.sample.stockwatcher.client; import com.google.gwt.core.client.JavaScriptObject; class StockData extends JavaScriptObject { // (1) // Overlay types always have protected, zero argument constructors. protected StockData() {} // (2) // JSNI methods to get stock data. public final native String getSymbol() /*-{ return this.symbol; }-*/; // (3) public final native double getPrice() /*-{ return this.price; }-*/; public final native double getChange() /*-{ return this.change; }-*/; // Non-JSNI method to return change percentage. 
  // (4)
  public final double getChangePercent() {
    return 100.0 * getChange() / getPrice();
  }
}

Implementation Notes

(1) StockData is a subclass of JavaScriptObject, a marker type that GWT uses to denote JavaScript objects. JavaScriptObject gets special treatment from the GWT compiler and development mode code server. Its purpose is to provide an opaque representation of native JavaScript objects to Java code.

(2) Overlay types always have protected, zero-argument constructors.

(3) Typically methods on overlay types are JSNI. These getters directly access the JSON fields you know exist. By design, all methods on overlay types are final and private; thus every method is statically resolvable by the compiler, so there is no need for dynamic dispatch at runtime.

(4) However, methods on overlay types are not required to be JSNI. Just as you did in the StockPrice class, you calculate the change percentage based on the price and change values.

Benefits of using overlay types

Using an overlay type creates a normal looking Java type that you can interact with using code completion, refactoring, and compile-time checking. Yet, you also have the flexibility of interacting with arbitrary JavaScript objects, which makes it simpler to access JSON services using RequestBuilder (which you'll do in the next section).

GWT now understands that any instance of StockData is a true JavaScript object that comes from outside this GWT module. You can interact with it exactly as it exists in JavaScript. In this example, you can access directly the JSON fields you know exist: this.price and this.change.

Because the methods on overlay types can be statically resolved by the GWT compiler, they are candidates for automatic inlining. Inlined code runs significantly faster. This makes it possible for the GWT compiler to create highly-optimized JavaScript for your application's client-side code.

First, specify the URL where the servlet lives (the /stockwatcher/stockPrices path mapped in web.xml above).

Note: If you are doing the PHP example, substitute the corresponding URL.

Then, append the stock codes in the watch list to the base module URL. Rather than hardcoding the URL for the JSON server, add a constant to the StockWatcher class.

private void refreshWatchList() {
  if (stocks.size() == 0) {
    return;
  }

  String url = JSON_URL;

  // Append watch list stock symbols to query URL.
  Iterator<String> iter = stocks.iterator();
  while (iter.hasNext()) {
    url += iter.next();
    if (iter.hasNext()) {
      url += "+";
    }
  }

  url = URL.encode(url);

  // TODO Send request to server and handle errors.

}

To send a request, you'll create an instance of the RequestBuilder object. You specify the HTTP method (GET, POST, etc.) and URL in the constructor. If necessary, you can also set the username, password, timeout, and headers to be used in the HTTP request. In this example, you don't need to do this. When you're ready to make the request, you call sendRequest(String, RequestCallback). The RequestCallback argument you pass will handle the response in its onResponseReceived(Request, Response) method, which is called when and if the HTTP call completes successfully. If the call fails (for example, if the HTTP server is not responding), the onError(Request, Throwable) method is called instead. The RequestCallback interface is analogous to the AsyncCallback interface in GWT remote procedure calls.
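The request code that replaces the TODO above is not reproduced here, so the following is only a minimal sketch based on the description just given. The displayError helper is an assumption (for example, a small method that writes a message to errorMsgLabel); the exact code in the official tutorial may differ slightly.

  RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, url);

  try {
    // sendRequest is asynchronous; the callback runs when the HTTP call completes.
    Request request = builder.sendRequest(null, new RequestCallback() {

      public void onError(Request request, Throwable exception) {
        displayError("Couldn't retrieve JSON");
      }

      public void onResponseReceived(Request request, Response response) {
        if (200 == response.getStatusCode()) {
          // Convert the JSON text into an array of StockData overlay objects.
          updateTable(JsonUtils.<JsArray<StockData>>safeEval(response.getText()));
        } else {
          displayError("Couldn't retrieve JSON (" + response.getStatusText() + ")");
        }
      }
    });
  } catch (RequestException e) {
    displayError("Couldn't retrieve JSON");
  }

RequestBuilder, Request, Response, RequestCallback, and RequestException live in the com.google.gwt.http.client package, which requires inheriting the com.google.gwt.http.HTTP module in your module XML file.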
lastUpdatedLabel.setText("Last update : " + DateTimeFormat.getMediumDateTimeFormat().format(new Date())); // Clear any errors. errorMsgLabel.setVisible(false); } At this point you’ve retrieved JSON-encoded stock data from a local server and used it to update the Price and Change fields for the stocks in your watch list. If you’d like to see how to retrieve JSON from web server on another domain, see Making cross-site requests. To learn more about client-server communication, see the Developer’s Guide, Communicating with the Server. Topics include: To learn more about JSNI, see the Developer’s Guide, JavaScript Native Interface (JSNI). Topics include: - Writing Native JavaScript Method - Accessing Java Methods and Fields from JavaScript - Sharing objects between Java source and JavaScript - Exceptions and JSNI - JavaScript Overlay Types
https://www.gwtproject.org/doc/latest/tutorial/JSON
CC-MAIN-2022-21
refinedweb
1,729
58.79
Write Your Own Python Decorators Overview In the article Deep Dive Into Python Decorators, I introduced the concept of Python decorators, demonstrated many cool decorators, and explained how to use them. In this tutorial I’ll show you how to write your own decorators. As you’ll see, writing your own decorators gives you a lot of control and enables many capabilities. Without decorators, those capabilities would require a lot of error-prone and repetitive boilerplate that clutters your code or completely external mechanisms like code generation. A quick recap if you know nothing about decorators. A decorator is a callable (function, method, class or object with a call() method) that accepts a callable as input and returns a callable as output. Typically, the returned callable does something before and/or after calling the input callable. You apply the decorator by using the @ syntax. Plenty of examples coming soon... The Hello World Decorator Let’s start with a ‘Hello world!’ decorator. This decorator will totally replace any decorated callable with a function that just prints ‘Hello World!’. def hello_world(f): def decorated(*args, **kwargs): print 'Hello World!' return decorated That’s it. Let’s see it in action and then explain the different pieces and how it works. Suppose we have the following function that accepts two numbers and prints their product: def multiply(x, y): print x * y If you invoke, you get what you expect: multiply(6, 7) 42 Let’s decorate it with our hello_world decorator by annotating the multiply function with @hello_world. @hello_world def multiply(x, y): print x * y Now, when you call multiply with any arguments (including wrong data types or wrong number of arguments), the result is always ‘Hello World!’ printed. multiply(6, 7) Hello World! multiply() Hello World! multiply('zzz') Hello World! OK. How does it work? The original multiply function was completely replaced by the nested decorated function inside the hello_world decorator. If we analyze the structure of the hello_world decorator then you’ll see that it accepts the input callable f (which is not used in this simple decorator), it defines a nested function called decorated that accepts any combination of arguments and keyword arguments ( def decorated(*args, **kwargs)), and finally it returns the decorated function. Writing Function and Method Decorators There is no difference between writing a function and a method decorator. The decorator definition will be the same. The input callable will be either a regular function or a bound method. Let’s verify that. Here is a decorator that just prints the input callable and type before invoking it. This is very typical for a decorator to perform some action and continue by invoking the original callable. def print_callable(f): def decorated(*args, **kwargs): print f, type(f) return f(*args, **kwargs) return decorated Note the last line that invokes the input callable in a generic way and returns the result. This decorator is non-intrusive in the sense that you can decorate any function or method in a working application, and the application will continue to work because the decorated function invokes the original and just has a little side effect before. Let’s see it in action. I’ll decorate both our multiply function and a method. 
@print_callable def multiply(x, y): print x * y class A(object): @print_callable def foo(self): print 'foo() here' When we call the function and the method, the callable is printed and then they perform their original task: multiply(6, 7) <function multiply at 0x103cb6398> <type 'function'> 42 A().foo() <function foo at 0x103cb6410> <type 'function'> foo() here Decorators With Arguments Decorators can take arguments too. This ability to configure the operation of a decorator is very powerful and allows you to use the same decorator in many contexts. Suppose your code is way too fast, and your boss asks you to slow it down a little bit because you’re making the other team members look bad. Let’s write a decorator that measures how long a function is running, and if it runs in less than a certain number of seconds t, it will wait until t seconds expire and then return. What is different now is that the decorator itself takes an argument t that determines the minimum runtime, and different functions can be decorated with different minimum runtimes. Also, you will notice that when introducing decorator arguments, two levels of nesting are required: import time def minimum_runtime(t): def decorated(f): def wrapper(*args, **kwargs): start = time.time() result = f(*args, **kwargs) runtime = time.time() - start if runtime < t: time.sleep(t - runtime) return result return wrapper return decorated Let’s unpack it. The decorator itself—the function minimum_runtime takes an argument t, which represents the minimum runtime for the decorated callable. The input callable f was “pushed down” to the nested decorated function, and the input callable arguments were “pushed down” to yet another nested function wrapper. The actual logic takes place inside the wrapper function. The start time is recorded, the original callable f is invoked with its arguments, and the result is stored. Then the runtime is checked, and if it’s less than the minimum t then it sleeps for the rest of the time and then returns. To test it, I’ll create a couple of functions that call multiply and decorate them with different delays. @minimum_runtime(1) def slow_multiply(x, y): multiply(x, y) @minimum_runtime(3) def slower_multiply(x, y): multiply(x, y) Now, I’ll call multiply directly as well as the slower functions and measure the time. import time funcs = [multiply, slow_multiply, slower_multiply] for f in funcs: start = time.time() f(6, 7) print f, time.time() - start Here is the output: 42 <function multiply at 0x103cb6b90> 1.59740447998e-05 42 <function wrapper at 0x103d0bcf8> 1.00477004051 42 <function wrapper at 0x103cb6ed8> 3.00489807129 As you can see, the original multiply took almost no time, and the slower versions were indeed delayed according to the provided minimum runtime. Another interesting fact is that the executed decorated function is the wrapper, which makes sense if you follow the definition of the decorated. But that could be a problem, especially if we’re dealing with stack decorators. The reason is that many decorators also inspect their input callable and check its name, signature and arguments. The following sections will explore this issue and provide advice for best practices. Object Decorators You can also use objects as decorators or return objects from your decorators. The only requirement is that they have a __call__() method, so they are callable. 
Here is an example for an object-based decorator that counts how many times its target function is called: class Counter(object): def __init__(self, f): self.f = f self.called = 0 def __call__(self, *args, **kwargs): self.called += 1 return self.f(*args, **kwargs) Here it is in action: @Counter def bbb(): print 'bbb' bbb() bbb bbb() bbb bbb() bbb print bbb.called 3 Choosing Between Function-Based and Object-Based Decorators This is mostly a question of personal preference. Nested functions and function closures provide all the state management that objects offer. Some people feel more at home with classes and objects. In the next section, I’ll discuss well-behaved decorators, and object-based decorators take a little extra work to be well-behaved. Well-Behaved Decorators General-purpose decorators can often be stacked. For example: @decorator_1 @decorator_2 def foo(): print 'foo() here' When stacking decorators, the outer decorator (decorator_1 in this case) will receive the callable returned by the inner decorator (decorator_2). If decorator_1 depends in some way on the name, arguments or docstring of the original function and decorator_2 is implemented naively, then decorator_2 will see not see the correct information from the original function, but only the callable returned by decorator_2. For example, here is a decorator that verifies its target function’s name is all lowercase: def check_lowercase(f): def decorated(*args, **kwargs): assert f.func_name == f.func_name.lower() f(*args, **kwargs) return decorated Let’s decorate a function with it: @check_lowercase def Foo(): print 'Foo() here' Calling Foo() results in an assertion: In [51]: Foo() --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-51-bbcd91f35259> in <module>() ----> 1 Foo() <ipython-input-49-a80988798919> in decorated(*args, **kwargs) 1 def check_lowercase(f): 2 def decorated(*args, **kwargs): ----> 3 assert f.func_name == f.func_name.lower() 4 return decorated But if we stack the check_lowercase decorator over a decorator like hello_world that returns a nested function called ‘decorated’ the result is very different: @check_lowercase @hello_world def Foo(): print 'Foo() here' Foo() Hello World! The check_lowercase decorator didn’t raise an assertion because it didn’t see the function name ‘Foo’. This is a serious problem. The proper behavior for a decorator is to preserve as much of the attributes of the original function as possible. Let’s see how it’s done. I’ll now create a shell decorator that simply calls its input callable, but preserves all the information from the input function: the function name, all its attributes (in case an inner decorator added some custom attributes), and its docstring. def passthrough(f): def decorated(*args, **kwargs): f(*args, **kwargs) decorated.__name__ = f.__name__ decorated.__name__ = f.__module__ decorated.__dict__ = f.__dict__ decorated.__doc__ = f.__doc__ return decorated Now, decorators stacked on top of the passthrough decorator will work just as if they decorated the target function directly. @check_lowercase @passthrough def Foo(): print 'Foo() here' Using the @wraps Decorator This functionality is so useful that the standard library has a special decorator in the functools module called ‘wraps’ to help write proper decorators that work well with other decorators. You simply decorate inside your decorator the returned function with @wraps(f). 
See how much more concise passthrough looks when using wraps: from functools import wraps def passthrough(f): @wraps(f) def decorated(*args, **kwargs): f(*args, **kwargs) return decorated I highly recommend always using it unless your decorator is designed to modify some of these attributes. Writing Class Decorators Class decorators were introduced in Python 3.0. They operate on an entire class. A class decorator is invoked when a class is defined and before any instances are created. That allows the class decorator to modify pretty much every aspect of the class. Typically you’ll add or decorate multiple methods. Let’s jump right in to a fancy example: suppose you have a class called ‘AwesomeClass’ with a bunch of public methods (methods whose name doesn’t start with an underscore like init) and you have a unittests-based test class called ‘AwesomeClassTest’. AwesomeClass is not just awesome, but also very critical, and you want to ensure that if someone adds a new method to AwesomeClass they also add a corresponding test method to AwesomeClassTest. Here is the AwesomeClass: class AwesomeClass: def awesome_1(self): return 'awesome!' def awesome_2(self): return 'awesome! awesome!' Here is the AwesomeClassTest: from unittest import TestCase, main class AwesomeClassTest(TestCase): def test_awesome_1(self): r = AwesomeClass().awesome_1() self.assertEqual('awesome!', r) def test_awesome_2(self): r = AwesomeClass().awesome_2() self.assertEqual('awesome! awesome!', r) if __name__ == '__main__': main() Now, if someone adds an awesome_3 method with a bug, the tests will still pass because there is no test that calls awesome_3. How can you ensure that there is always a test method for every public method? Well, you write a class decorator, of course. The @ensure_tests class decorator will decorate the AwesomeClassTest and will make sure every public method has a corresponding test method. def ensure_tests(cls, target_class): test_methods = [m for m in cls.__dict__ if m.startswith('test_')] public_methods = [k for k, v in target_class.__dict__.items() if callable(v) and not k.startswith('_')] # Strip 'test_' prefix from test method names test_methods = [m[5:] for m in test_methods] if set(test_methods) != set(public_methods): raise RuntimeError('Test / public methods mismatch!') return cls This looks pretty good, but there is one problem. Class decorators accept just one argument: the decorated class. The ensure_tests decorator needs two arguments: the class and the target class. I couldn’t find a way to have class decorators with arguments similar to function decorators. Have no fear. Python has the functools.partial function just for these cases. @partial(ensure_tests, target_class=AwesomeClass) class AwesomeClassTest(TestCase): def test_awesome_1(self): r = AwesomeClass().awesome_1() self.assertEqual('awesome!', r) def test_awesome_2(self): r = AwesomeClass().awesome_2() self.assertEqual('awesome! awesome!', r) if __name__ == '__main__': main() Running the tests results in success because all the public methods, awesome_1 and awesome_2, have corresponding test methods, test_awesome_1 and test_awesome_2. ---------------------------------------------------------------------- Ran 2 tests in 0.000s OK Let’s add a new method awesome_3 without a corresponding test and run the tests again. class AwesomeClass: def awesome_1(self): return 'awesome!' def awesome_2(self): return 'awesome! awesome!' def awesome_3(self): return 'awesome! awesome! awesome!' 
Running the tests again results in the following output: python3 a.py Traceback (most recent call last): File "a.py", line 25, in <module> class AwesomeClassTest(TestCase): File "a.py", line 21, in ensure_tests raise RuntimeError('Test / public methods mismatch!') RuntimeError: Test / public methods mismatch! The class decorator detected the mismatch and notified you loud and clear. Conclusion Writing Python decorators is a lot of fun and lets you encapsulate tons of functionality in a reusable way. To take full advantage of decorators and combine them in interesting ways, you need to be aware of best practices and idioms. Class decorators in Python 3 add a whole new dimension by customizing the behavior of complete classes. Source: Tuts Plus
http://designncode.in/write-your-own-python-decorators/
CC-MAIN-2017-43
refinedweb
2,254
55.44
Squashing the asp.net MVC response - part 1

The goal of this post: reduce the total size of a simple asp.net MVC page response.
Our measuring tools: Firefox running Firebug and the YSlow plugin
Source Code: Download here

Let's use a really simple and common scenario as the example. The steps to create this really simple example are:

- Create a new MVC project in Visual Studio.
- Dump some useful script files into the Scripts folder and delete the others we don't need.
- Create a new .css file in the Content folder and cut half the css out of the Site.css and paste it into this new file. (This is just to create more than 1 css file.)
- Open the site.master file and add references to the new files in the head:

<%@" /> <link href="../.. :

- Unity - Dependency injection framework used by the utils project
- Utils - common utils and helper classes incl. encryption wrappers, collections, string utils, a number of extension methods, etc.
- Utils.Web - common web utils and helpers not specific to either webforms or MVC, incl. querystring helpers and a URL helper to name a few
- Utils.Web.MVC - common MVC specific classes and utils.

a namespace reference in the web.config:

<add namespace="Utils.Web.MVC"/>

- Add a httphandler to the web.config:

<add verb="*" path="css.axd" type="Utils.Web.HttpHandlers.CSSHandler, Utils.Web" validate="false"/>

- Change the site.master. Cut out the old references to the stylesheets and replace with this in the head:

<% Html.CSS().Add("~/Content/Site.css"); %>
<% Html.CSS().Add("~/Content/Layout.css"); %>
<%= Html.CSS().HTML %>

Cool. With a few changes to the HTML, we get the same response but with a few improvements:

- Our YSlow score has jumped up to 83!
- We now have fewer requests (5) and our total file size is down to 130KB.

- Add the httphandler to the web.config:

<add verb="*" path="js.axd" type="Utils.Web.HttpHandlers.JSHandler, Utils.Web" validate="false"/>

- Change the site.master. Cut out the old script references from the head and, at the bottom of the file just before the closing of the body tag, add the following code: <%:

- Our YSlow score is a whopping 98!!! WOW!! That's more like it. The only thing we are not getting an A score for is the Content Delivery Network section.
- But more importantly, our number of requests has dropped to 3 and the total size is only 27KB!
- Our total transfer time is now only 2.08 seconds!

The HTML output is now the following:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "">
<html xmlns="">
<head><title>
<:

- 6 requests down to 3
- 135K size down to 27K
- YSlow score from 73 up to 98
- Download time from 4.11s down to 2.08s)
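The post doesn't show the handler internals (they ship with the downloadable Utils.Web project), but the idea is straightforward: collect the registered files, concatenate them, and serve the result with aggressive caching. A rough sketch of what such a combining handler could look like — the class name, query string format and details here are illustrative, not the actual Utils.Web code:

using System;
using System.IO;
using System.Text;
using System.Web;

// Hypothetical combining handler, registered for css.axd in web.config.
public class CombiningCssHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // e.g. css.axd?files=~/Content/Site.css,~/Content/Layout.css
        var files = (context.Request.QueryString["files"] ?? string.Empty)
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

        var combined = new StringBuilder();
        foreach (var file in files)
        {
            var path = context.Server.MapPath(file);
            combined.AppendLine(File.ReadAllText(path));
        }

        context.Response.ContentType = "text/css";
        // Let the browser (and proxies) cache the combined file aggressively.
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(30));
        context.Response.Write(combined.ToString());
    }
}

A real implementation would also minify and gzip the output, which is where the rest of the size savings reported above come from.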
http://weblogs.asp.net/bradvincent/squashing-the-asp-net-mvc-response-part-1
CC-MAIN-2014-52
refinedweb
463
76.42
I'm supposed to write a program that uses two parallel arrays to store student names and their grades. It should use an array of strings that holds the names and an array of integers that holds the grades. The program should produce a report that displays the list of student names and grades, and the name of the student who has the highest grade. The names and grades should be stored using an initialization list at the time the arrays are created. Use a loop and a conditional statement to determine which student has the highest grade. This is what I have tried so far. I'm new at this and there aren't many good examples in the book.

Code:

#include "stdafx.h"
#include <iostream>
#include <iomanip>
#include <string>
using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    int i = 0, h = 0, saved = 0;
    int g[ ] = {87, 99, 70, 75, 77, 91, 95};
    string s [ ]= {"S1", "S2", "S3", "S4", "S5", "S6", "S7"};
    while ( i <= 3)
    {
        if (g[i] > h)
        {
            h = g[i];
            saved = i;
        }
    }
    cout << "Highest " << h;
    cout << "student is " << s[saved];
    return 0;
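For comparison, here is one way the loop could be written so it visits every element and prints the report (the posted version never increments i and only checks the first few grades). This is just a sketch of a possible fix, not the original poster's code:

#include <iostream>
#include <string>
using namespace std;

int main()
{
    const int COUNT = 7;
    int g[COUNT] = {87, 99, 70, 75, 77, 91, 95};
    string s[COUNT] = {"S1", "S2", "S3", "S4", "S5", "S6", "S7"};

    int highest = g[0];   // assume the first student is highest so far
    int saved = 0;

    for (int i = 0; i < COUNT; i++)
    {
        cout << s[i] << "  " << g[i] << endl;   // report line
        if (g[i] > highest)
        {
            highest = g[i];
            saved = i;
        }
    }

    cout << "Highest grade: " << highest << " (student " << s[saved] << ")" << endl;
    return 0;
}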
http://cboard.cprogramming.com/cplusplus-programming/137224-i-need-help-two-parallel-arrays-printable-thread.html
CC-MAIN-2015-32
refinedweb
184
74.42
Getting started with React can be daunting especially if you want to understand the entire setup. Solutions like create-react-app have hidden a lot of this complexity. But there's more to it. CodeSandbox by Ives van Hoorne pushes the problem online. Instead of setting up a React project each time you want to experiment, you can use his service instead. Read on to learn more.

My name is Ives van Hoorne; I'm a Computer Science student at the University of Twente and a part-time developer at Catawiki. I worked there full-time last year; at that time I was responsible for converting the website to React. Though I like all kinds of programming, I've been especially attracted to frontend the last few years, mostly because it's also a bit artistic. I get a lot of satisfaction from building user interfaces that people both find beautiful and easy to use.

CodeSandbox is an online editor for web development projects. It automates things like transpiling, bundling and dependency management for you so you can easily create a new project with a single click. The editor also has a live preview so you can see the results of your work while you type. Sharing is very easy; you can just share the URL of your project or embed it in an iframe. Others can then fork it to edit the project to their liking. CodeSandbox currently has a focus on React projects, which means that it supports features like downloading to a create-react-app template. This is an example of a project on CodeSandbox, it's the classic TodoMVC example in Redux:

CodeSandbox at its core consists of two parts: the editor and the preview. The editor is the whole CodeSandbox application (file manager, code editor, dependency settings) and the preview is the result you see on the right. These two parts are very decoupled and only communicate using postMessage. The preview is on a subdomain (sandbox.codesandbox.io) in an iframe to literally 'sandbox' the preview away from the main application. The editor sends all its files, directories and dependencies to the preview; this either happens when the user changes something or when the application loads. The preview then takes all these files and processes each type using different loaders, which currently is either CSS, JavaScript, JSON, or HTML. These loaders can be very simple; the JSON loader, for example, is only a one-liner:

export default module => JSON.parse(module.code);

The JavaScript loader is by far the most interesting since this loader also has to transpile, require and cache the result. It first transpiles the code using babel; then it evals the transpiled code with a stubbed require function. This require function just takes a path, checks if this is an npm dependency or a file and handles it again with the loader for that extension. Every result is cached, so most of the time only the edited file is evalled again after a change. The output of the loader goes through a boilerplate, this boilerplate is determined by the output.
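Before moving on to boilerplates, a quick illustration of how simple a loader can be: a CSS loader could follow the same one-liner spirit as the JSON loader above, taking a module and producing something the preview can use. This is only a sketch of the shape such a loader might take, not the actual CodeSandbox source:

// Hypothetical CSS loader: inject the module's code as a <style> tag
// and return the created node so a later run can replace it.
export default module => {
  const styleTag = document.createElement('style');
  styleTag.textContent = module.code;
  document.head.appendChild(styleTag);
  return styleTag;
};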
A boilerplate simply is a separate file that does something with the loader output; for example, the boilerplate for a returned React component is:

import React from 'react';
import { render } from 'react-dom';

// domChanged is a boolean which specifies if the module
// has done something to the DOM while it was evaluated
export default function(evaluatedModule, domChanged) {
  if (!domChanged) {
    const node = document.createElement('div');
    document.body.appendChild(node);

    render(
      React.createElement(evaluatedModule.default),
      node
    );
  }
}

This boilerplate renders the output of a React component to the DOM if it doesn't change the DOM at all.

I want to make it possible for others to build and share loaders/boilerplates as well, but this requires some thinking because we still want to support create-react-app interoperability.

The npm dependencies are handled by a separate server; I call it the 'bundler'. The editor sends the list of dependencies to it, the bundler then creates a UMD build of this combination using webpack 2 and sends an object containing the URL and the manifest back. A manifest is an object with a mapping from dependency name to module number, so the JavaScript loader knows which module to load from the UMD build when a dependency is required.

CodeSandbox is one of the few editors that supports npm dependencies and multiple files/directories. It also handles almost everything in the browser, which allows us to show real-time feedback without any server communication. That is a feature-wise difference, but I think the real difference compared to other editors is the goal. We want to make it possible to let others import your sandbox as a dependency. This way you can not only edit others' work, you can use it in your projects. The feature hasn't been fully implemented yet; I still need to finish the UI for it.

I started to think about CodeSandbox last summer when I was on holiday in St. Ives. Several colleagues asked me questions about the React project we've been working on, but there was no easy way for me to answer them. The questions were either related to a library or were so complex that it was very hard to show in, for example, Codepen. That's when I started thinking: 'man, it would be great just to have an online editor that could do this'. I began working on this in my spare time and eventually, my friend Bas Buursma joined me.

I'm currently working on more support for users and sharing. Specifically, I'm building the profile view right now; here you can showcase your sandboxes and see statistics like how many times your sandboxes were forked and how many views you got. It also includes a very requested feature: deleting sandboxes. Deleting is currently impossible and also very irritating; I have 38 sandboxes right now, and I would love to delete the junk ones. This is the current design for the new profile view:
That's easier said than done, so if you do get overwhelmed by a task, it's smart to divide it into smaller, more manageable sub-tasks. Take it step by step. I also recommend learning by just starting a personal project. Building something that you like and can share is a great motivation, and that motivation helps to overcome so many hurdles along the way. Christian Alfoni, the creator of WebpackBin (now defunct) and Cerebral. It has been a blast working with him. He is close to releasing a new version of a state controller called Cerebral. Editor's note: I interviewed Christian earlier about Cerebral. I learned React via SurviveJS! Great book and helped me a lot with understanding React. Thanks for the interview Ives! It's nice to see services like this to appear as they take so much pain out of the process and enable quick experimentation. Maybe one day web development goes to the web entirely. Be sure to give CodeSandbox a go.
https://survivejs.com/blog/codesandbox-interview/
CC-MAIN-2022-40
refinedweb
1,279
62.38
Hi everyone! I have two files, main.py and sprite_class.py. sprite_class.py is in a separate folder called lib. Here's a quick diagram:

------------------------------------
/folder engine/
    main.py
    /folder lib/
        sprite_class.py
------------------------------------

...sprite_class.py contains a class called Sprite:

# sprite class
class Sprite():
    def __init__( self, start_x, start_y, image_path ):
        self.x = start_x
        self.y = start_y
        self.starting_y = self.y
        self.image_path = image_path
        self.sprite = pygame.image.load( self.image_path )

Since I'm kind of rusty when it comes to import statements, I created a small function to handle it for me. Also, at the bottom, I import my sprite_class.py file:

def CD( Dir_String ):
    sys.path.append( Dir_String )  # changes the system's current dir. path to specified path

CD( "/lib" )
import sprite_class

Now, the only problem I have is trying to run the class in main.py. I know how to run def statements, but how do you run classes? Any hints? Thank you in advance!
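For what it's worth, a class isn't "run" the way a def is — you create an instance of it and then use that object. A minimal sketch of what main.py could do once the import works (the image filename is just a placeholder, and note that sprite_class.py itself will also need an "import pygame" line for pygame.image.load to work):

import sys
sys.path.append("lib")   # relative to the engine/ folder; "/lib" would point at the filesystem root

from sprite_class import Sprite

# Create an instance of the class, then use its attributes and methods.
player = Sprite(100, 200, "player.png")
print(player.x, player.y, player.image_path)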
https://www.daniweb.com/programming/software-development/threads/188707/import-classes-from-a-seperate-file
CC-MAIN-2018-05
refinedweb
159
81.09
Stefan Bodewig wrote: > On Wed, 13 Aug 2003, Costin Manolache <[email protected]> wrote: > >> All this overriding may create some bad maintaince problems. > > I agree for overriding in arbitrary namespaces, but we have to keep > supporting it for the default namespace. > > We've added support for task overloading when Ant added a task with > the name <manifest> and FOP's build broke as they had a manifest task > of their own. Without overriding, each new task in Ant could > potentially break build files. Namespaces are supposed to be a better/different solution for this problem. I agree we should keep overriding for the default namespace, for backward compat. But I don't think it is a good idea to support overriding in any other case, and the default namespace should have a strong warning that namespaces should be used instead of overriding. Costin --------------------------------------------------------------------- To unsubscribe, e-mail: [email protected] For additional commands, e-mail: [email protected]
http://mail-archives.apache.org/mod_mbox/ant-dev/200308.mbox/%[email protected]%3E
CC-MAIN-2016-26
refinedweb
162
65.12
NAME
io_submit - submit asynchronous I/O blocks for processing

SYNOPSIS
#include <linux/aio_abi.h>          /* Defines needed types */

int io_submit(aio_context_t ctx_id, long nr, struct iocb **iocbpp);

Note: this page describes the raw Linux system call interface. The wrapper function provided by libaio uses a different type for the ctx_id argument. See NOTES.

DESCRIPTION
The io_submit() system call queues nr I/O request blocks for processing in the AIO context ctx_id. The iocbpp argument should be an array of nr AIO control blocks, which will be submitted to context ctx_id.

The iocb (I/O control block) structure defined in linux/aio_abi.h defines the parameters that control the I/O operation.

#include <linux/aio_abi.h>

struct iocb {
    __u64   aio_data;
    __u32   PADDED(aio_key, aio_rw_flags);
    __u16   aio_lio_opcode;
    __s16   aio_reqprio;
    __u32   aio_fildes;
    __u64   aio_buf;
    __u64   aio_nbytes;
    __s64   aio_offset;
    __u64   aio_reserved2;
    __u32   aio_flags;
    __u32   aio_resfd;
};

The fields of this structure are as follows:

- aio_data - This data is copied into the data field of the io_event structure upon I/O completion (see io_getevents(2)).

- aio_key - This is an internal field used by the kernel. Do not modify this field after an io_submit() call.

- aio_rw_flags - This defines the R/W flags passed with the iocb structure. The valid values are:

  - RWF_APPEND (since Linux 4.16) - Append data to the end of the file. See the description of the flag of the same name in pwritev2(2) as well as the description of O_APPEND in open(2). The aio_offset field is ignored. The file offset is not changed.

  - RWF_DSYNC (since Linux 4.13) - Write operation complete according to requirement of synchronized I/O data integrity. See the description of the flag of the same name in pwritev2(2) as well as the description of O_DSYNC in open(2).

  - RWF_HIPRI (since Linux 4.13) - High priority request, poll if possible.

  - RWF_NOWAIT (since Linux 4.14) - Don't wait if the I/O will block for operations such as file block allocations, dirty page flush, mutex locks, or a congested block device inside the kernel. If any of these conditions are met, the control block is returned immediately with a return value of -EAGAIN in the res field of the io_event structure (see io_getevents(2)).

  - RWF_SYNC (since Linux 4.13) - Write operation complete according to requirement of synchronized I/O file integrity. See the description of the flag of the same name in pwritev2(2) as well as the description of O_SYNC in open(2).

- aio_lio_opcode - This defines the type of I/O to be performed by the iocb structure. The valid values are defined by the enum defined in linux/aio_abi.h:

enum {
    IOCB_CMD_PREAD = 0,
    IOCB_CMD_PWRITE = 1,
    IOCB_CMD_FSYNC = 2,
    IOCB_CMD_FDSYNC = 3,
    IOCB_CMD_POLL = 5,
    IOCB_CMD_NOOP = 6,
    IOCB_CMD_PREADV = 7,
    IOCB_CMD_PWRITEV = 8,
};

- aio_reqprio - This defines the request's priority.

- aio_fildes - The file descriptor on which the I/O operation is to be performed.

- aio_buf - This is the buffer used to transfer data for a read or write operation.

- aio_nbytes - This is the size of the buffer pointed to by aio_buf.

- aio_offset - This is the file offset at which the I/O operation is to be performed.

- aio_flags - This is the set of flags associated with the iocb structure; IOCB_FLAG_RESFD requests that completion be signalled on the eventfd given in aio_resfd.

- aio_resfd - The file descriptor to signal in the event of asynchronous I/O completion.

RETURN VALUE
On success, io_submit() returns the number of iocbs submitted (which may be less than nr, or 0 if nr is zero). For the failure return, see NOTES.

VERSIONS
The asynchronous I/O system calls first appeared in Linux 2.5.

CONFORMING TO
io_submit() is Linux-specific and should not be used in programs that are intended to be portable.

NOTES
Glibc does not provide a wrapper for this system call; it can be invoked using syscall(2).
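The excerpt above stops before any example, so as an illustration (not part of the original page), a minimal program driving the raw syscalls through syscall(2) could look roughly like this — error handling is abbreviated and "testfile" is a placeholder:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/aio_abi.h>

/* Thin wrappers around the raw system calls (glibc provides none). */
static inline long io_setup(unsigned nr, aio_context_t *ctxp) {
    return syscall(SYS_io_setup, nr, ctxp);
}
static inline long io_submit(aio_context_t ctx, long nr, struct iocb **iocbpp) {
    return syscall(SYS_io_submit, ctx, nr, iocbpp);
}
static inline long io_getevents(aio_context_t ctx, long min_nr, long max_nr,
                                struct io_event *events, struct timespec *timeout) {
    return syscall(SYS_io_getevents, ctx, min_nr, max_nr, events, timeout);
}
static inline long io_destroy(aio_context_t ctx) {
    return syscall(SYS_io_destroy, ctx);
}

int main(void) {
    aio_context_t ctx = 0;
    struct iocb cb;
    struct iocb *cbs[1];
    struct io_event events[1];
    char buf[4096];

    int fd = open("testfile", O_RDONLY);        /* placeholder file */
    if (fd < 0 || io_setup(8, &ctx) < 0)
        return 1;

    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_lio_opcode = IOCB_CMD_PREAD;          /* one asynchronous read */
    cb.aio_buf = (__u64)(unsigned long)buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    cbs[0] = &cb;
    if (io_submit(ctx, 1, cbs) != 1)             /* queue the request */
        return 1;

    if (io_getevents(ctx, 1, 1, events, NULL) == 1)
        printf("read %lld bytes\n", (long long)events[0].res);

    io_destroy(ctx);
    close(fd);
    return 0;
}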
https://man.archlinux.org/man/io_submit.2.en
CC-MAIN-2022-21
refinedweb
536
74.29
Just a quick question (worth printing to next issue?) Is there any (good) GNU licensed programs to help the creation of businessplans? —Jussi Kallioniemi [[email protected]] TO: Representatives of... OS user groups, consumer advocacy orgs., ISD associations, software vendors and distributors FROM: James Capone, of the Linux Info Website Diane Gartner, Co-ordinator of IACT. But we need your help to do it! Join us as *Project Associates* of the Freedom of Choice project. In the spirit of teamwork and co-operation, we are asking you to sign your name to this project, and make a strong commitment to helping the world's computer users regardless of platform, background, nationality or expertise. The Next Step ------------- Please r.s.v.p. to this letter and tell us if you can commit your organization's resources and name to the Freedom of Choice Project. If you cannot help in an active role as a Project Associate, then please at least consider posting links and announcements for us. Your active participation would include a few, simple volunteer responsibilities: 1. Devote a small amount of your web space to a “Freedom of Choice” page on your group's website, to outline the goals/benefits of the project and to post frequent updates on the project's growth. Just be sure that your page gives equal emphasis to all OSs and to all computer users regardless of platform, background, nationality or expertise. 2. Directly contact your own members + associates, urging them to visit your website's Freedom of Choice page and then to respond to our Freedom of Choice Poll, so that your group's concerns about software choice will reach the PC makers with maximum impact. 3. Help us to publicize the project by using your media, political and IT contacts. Forward the project's official announcements to your contacts, and urge them to support the project either directly or indirectly. 4. Any other resources or ideas that you'd like to offer.... If you choose to join us as a Project Associates, then with your permission we will add your name(s) to this same Invitation when we send new copies to more organizations; your group likewise will be given a spotlight on our own Freedom of Choice web pages, for the public to see. The Freedom of Choice Poll is at regards, Diane Gartner, [email protected] James Capone, [email protected] I really enjoyed reading Alessandro Rubini's article “Software Libre and Commercial Viability” in the February issue. It is often hard to see where things are going in the Linux and Open Source world, and Mr. Rubini's article was very intelligent and enlightening with quite a few insights into what's up. —Robert Lynch [email protected] I've used Linux since 0.99, and taken LJ since somewhere around issue 6 or 7. PLEASE PLEASE don't get political on me. The recent article by Dr. Mann filled the first 3.5 pages with his personal libertarian paranoid philosophy as it pertains to the use of intelligent devices in society. I am happy for him to tell us about his project, how Linux allows him to be “COSHER”, and to describe his applications of Linux. However, I don't (and don't think most LJ subscribers) give a whit about Dr Mann's personal politics. To read the first three pages of the article, I suppose I should start taking apart my telephones, TV's Microwaves, and temperature-sensing shower heads because there may be a hidden videocam inside them. Keep up the good work. I love your mag. —Bill Menger, [email protected] I've been subscribing to L.J. 
for a few months now, and I eagerly await each edition. I think the magazine needs to dedicate more resources toward editing and reviewing the magazine for errors. As an example, the latest edition's squid article states Netscape's proxy server does not support ICP, which is incorrect. I think the quality of the magazine could be greatly improved if the editors worked more closely with the authors. As Linux gains in popularity, I'm hoping the advertising dollars come in to enable you to continue to improve the overall quality (and width!) of each issue. —A Demanding Fan, Jim Ford, [email protected] Neil Parker wrote: Read Joseph Pranevich's article on Linux 2.2 in Kernel Korner (LJ: December 1998) with interest. He mentions support for a 'subset of the Linux kernel' on 80286 and below machines. On this tack I was wondering if there is anything geared towards making effective use of ageing 386 systems with 4Meg or so of memory. I have a lab full of these all running Windows 3.1 and would like to convert them to Linux if possible. Probably there are many thousands of similar machines worldwide. No, not that I know of. That's a bit out of bounds for ELKS. (Linux for 8086-80286) Your best option here is to compile a main Linux kernel, even a 2.2 kernel with the absolute minimum requirements for your hardware. There are also patches floating around for reducing memory requirements even further, but I can't give you a pointer right now. Having done that, you should strip down the a system to the *bare* essentials. No daemons, no loadable modules, nothing that isn't absolutely necessary. You'll also want to recompile your X server (Mono, SVGA, or VGA16?) with the minimum options and no unneeded drivers. Of course, this will all have to be done on a bigger Linux system, recompilations on a 386 take *forever* With all of that done, you'll get a working system. You may need to hack the X source a tad to not configure things like unneeded mouse drivers and etc. With any luck, that will fit, but barely. If you run all applications after that remotely, it might not be so bad. Make sure you have swap space, Linux 2.2 actually runs faster with it because it swaps some unneeded bits out. And I've heard that Linux 2.2 would be faster than Linux 2.0 but you might want to check both. Alternatively, you could look into building it with a Linux 1.0 or 1.2 series kernel, if your hardware would allow it. And building a libc5 (as opposed to glibc) system might save you a tad more room. And finally, make *sure* that you have no static libs and compile everything shared. I hope this helps. Joe Pranevich, [email protected] At the UW I work with all platforms but for ease of development I've used Linux since .99pl33 While working on a Journal article (for a medical publication) on computer security, I bought a copy of WinNT Magazine (Dec. 1998) and came across some comments about Linux that I think are unfounded, but I would like some expert and authoritative rebuttals. /****** On page 35, Craig Barth writes “According to intelectual property lawyers, the Linux licensing agreement binds any developers who produce software using components of the Linux OS (eg. libraries and runtimes) to release the source code for their additions. This will stop mainstream commerical development dead in its tracks” On page 122, Mark Russinovich writes “... whereas WinNT and all commercial UNIX implement kernel mode threads, Linux does not.” “... the schedular cannot preempt the kernel ... 
Because Linux kernel is onot preemptable, it is not as responsive to high priority processes as other kernels are ...” “... the Linux kernel is not reentrant, which means that only one processor in a multi mode system can execute kernel code at one time” “ For the next couple of years, Linux is stuck with being only a valid choice for small uniprocessor servers ...” ****/ If you could respond with some sources, I would send such back to WinNT magazine. thanks, —Steve, [email protected] Several months ago I wrote to you expressing my concerns over certain packages being made available just for users of the Red Hat distribution package. When this letter was published in the January issue I though I might get some feed back but I was amazed to see in the Feb. issue a letter by Fred Nance praising Red Hat. I think you have all missed the point! Yes, Red Hat and all the other distributions are doing very positive work in promoting LINUX, Without Red Hat several packages would not have been written and we would have definitely have a smaller user base and certainly not a significant force against Micro Soft. How long will it be before one large company sides with Caldera, one with Suse and one with Red Hat and we all start to make incompatible packages? If we don't start pulling together soon we will destroy everything already built up and MS will be laughing all the way to the bank. Could Red Hat give me a positive statement that all their packages including Oracle are compatible with other distributions and they don't use their own libraries and why was glibc2 picked up by Red Hat before it was formally released forcing incompatibility issues? Thank you for your time. —Bob Weeks I was just reading the “Best Of Tech Support” in the Linux Journal 1999, and was struck by the “Shutting Down” question, and its answer, since I have a habit of “unorthodox”(?) ways of shutting down. Among other things, I don't particularly like the ctrl-alt-del method of shutdown, mentioned as “the only way... for any user to safely shut down a linux system is to be physically present at the keyboard and press ctrl-alt-del.” I've found that the odd DOS user will sometimes reboot a system because their window is not responding, so I sometimes disable it in inittab. Also, if this is a “headless” machine with the console on a serial line, there may not *be* a console to type ctrl-alt-del on. Here is a reasonably simple, and relatively secure way of solving the problem (better checking of the arguments might be wiser). Compile the following, called shutme.c and set the permissions as below (with “user” replaced by the GID of the user, or of a login group allowed to use shutdown: #include <unistd.h> main(int argc, char **argv) { int i; for (i=0; i<argc-1; ++i) { argv[i] = argv[i+1]; } argv[argc-1] = 0; /* execv linkes a null-terminated list of args */ execv("/sbin/shutdown",argv); } -r-sr-s--- 1 root user 4186 Jan 25 01:31 Monty_only/shutme*For paranoia, I put this in a directory which was chmod 700 and owned by the user in question, too. Then, assuming that directory is in the user's path, they can run % shutme -rf nowwithout being root, on most unix machines I've used, including Debian 2.1, and almost certainly Red Hat. A nice thing about this is that any flags to “shutdown” are available, including the ability to cancel a running shutdown. Irix has some pickiness about that, and if Red Hat is similar, then there are some other options. 
It's easy for any root process to initiate a shutdown by sending INIT the right signal; replacing the above with just main() { execv("init 6",0); /* or execv("telinit -t 10 6",0); */ }or, if only shutdown is picky, execv(“reboot”) or execv(“halt”) should be just as good. The disadvantage of these is that they don't issue a wall to all the users, but that can be included in the program as well. Running sync just before is traditional, but init should take care of that without a hitch in modern times. There are a few other options which *should* work, and do for some linux versions: Make a group of people allowed to run shutdown, say “shutters”, and # chgrp shutters /sbin/sutdown # chmod 550 /sbin/shutdown # chmod +s /sbin/shutdownand then anyone in “shutters” should be able to run shutdown. Lastly, if you make an account called shutdown whose UID is 0 (root) and whose shell is a shell script that runs “shutdown -rf now” (or whatever), you can give people that password to that account and they can % ssh host -l shutdownor % rlogin ssh -l shutdownor % su - shutdownand so forth. I have used all of the above on various unix systems, in various states of security and/or partially-crashedness... I was thinking about it because I had to reboot a half-wedged SGI which I didn't have the root password for, but did have root on an NFS server whose disks it was mounting recently... Anyway, not meaning to be too picky, but with a name like “best of tech support,” I think this answer fell below standards. I did learn both about the “cp --one-file-system” flag from your column this month, so I *do* appreciate it, by the way, and the info that all zip disks come as partition #4 was an interesting confirmation of a trend I've noticed. anyway, hope this helps, Thomas... —Mark Montague, [email protected], [email protected] Just a quick suggestion for two articles on subjects that are sorely needed in the Linux community in my view. Some of us users have some old DOS and Windows applications that we still use and there are no Linux alternatives. This makes us having to use a dual boot system. Now I have had partial success with Wine in using a specialized communications program, but I have some DOS apps I would like to run under emulation that are indeed simple programs and should be amenable to run under Dosemu. I have not had success with Dosemu in 4 months of trying. I have not been able to find a source that simply explains how the program works and the theory behind how to get it running. The readme files, in my opinion, are very rudimentary. We need some comprehensive articles on how to set up these applications. Now I know Dosemu has the reputation of just being a hack that so many people are using to run games but there are people like me who want to be able to do some useful work with Dos apps, not necessarily games. I've had some success with Wine with a specialized communication program that I use to access my patient database at the hospital I work for. It is an old Windows 3.1 app that runs good with a little distortion of the fonts. It is still very usable and negates the need for me to have to boot to Windows every time I need it. One neat thing is that it uses the modem under Windows to make a 9600bps connection to the hospital server and under emulation, I do not see a speed hit whatsoever even though it is running through the extra layer of emulation. 
I would hope you would see fit to find an appropriate author(s) who could tackle this task and I believe you would be doing a great service to the Linux community in helping to make Linux more mainstream in being able to run Dos/Windows apps in a Linux system. I think the time is right to carry this out as I feel Linux is reaching “critical mass” and if a person can see that they do not have to give up on being able to run Dos/Windows programs by migrating to Linux, it might only give an extra boost in acquiring more Linux users. You have my permission to edit this to your standards if you see fit to publish it in the Linux Journal. Best regards, —Kurt Savegnago, M.D., [email protected] I read, from time to time, about user friendly configuration tools for Linux. The last time I did read about is the Davis Brians Letter on LJ # 58 (feb. '99, pag. 6 “Linux Installation and the Open Source Process”). I'm sorry, but I totally disagree with the idea that such a tool is essential, nor useful. The Linux community, the real one, may need tools that help to spread the knowledge about Linux to neophytes, but don't need windozian tools at all. Those that need Bill's nightmare simply must stay with windoze. Linux is the evolution of UNIX (even if not the only one) and it is light-years far from MS, both in power and flexibility. People not sufficiently skilled to install Linux from himselves has at least three choices: 1) to buy a pre-installed system; 2) became a more skilled one or 3) stay with their best silly O.S. Everyone is free to get his own way but, please, don't blame others for your incompetence: Linux may be free (or Open Source or whatever you like to call it) but in the real life nobody can have hothing without some personal effort. —Franco Favento, [email protected] I have read Robert's letter on page 94, LJ Jan 1999. I definitely agree with you that we don't need the Intel-Hat, or sth else which is another M$ WinDos alike OS. In the long terms, it maybe comes to the end that there is only one distribution of Linux surviving in the market, but right now, I hope all the exisitng distributions work ahead, as this will bring the benefits to all Linux users. In fact, all Linux distributions have the same problems: or says, the aims - they need easier way to install/maintain, and more the better GUI-based applications for business. Best, —Frederic, [email protected] I am a M$ separatist, hooked now on Linux. The reason I switched is described in the following little poem: Flamed on the GPS newsgroup (? 
by Bill Gates), response in rhyme Somebody said that I lied That's bullshit, I quickly replied An honest mistake Can anyone make I'll explain, and then you decide My ThinkPad and Windows were wed At the factory in the same bed They shipped 95 And to keep it alive An upgrade to 98, I ded 98 was a great deal much wurse I could tell right away it was cursed (some kind of f.king virus) It choked very hard When I tried a sound card The factory defaults I restured The most outragous abuse When Netscape I tried to use Explorer stomped in And GPF'd me like sin My computer I had to rebuse My times not worth much anymore I'm just an old profes*sor* With me its my health Compared to his wealth His geek thugs are what I deplore When at the young age of 23 Keats got a bad case of TB He then wrote some lines That have help me define What a loss Gates handed to me (“When I have thoughts that I may cease to be, Before my pen has gleaned my teeming brain”) Better spent my own time would be Writing for pos-ter-i-te So, I've tried to put it in rhyme Think about this in your spare time And then you will see What my time's worth to me To take it away was a crime. —M David Tilson, [email protected] I have a total of four PC's, two at home and two in my office. I have in both places one PC with an Uncle Bill Gates' system for talking with electrical engineers, and one PC with Linux system for talking with physicists. I have decided to keep these machines separate, since Uncle Bill has constructed a windowing system with an internal search and destroy program. This program is automatically run as soon as another system (e.g. Linux) is detected on the hard disk. Do not bother to tell me how to overcome such programming games, since I so much enjoyed viewing this program on a friends PC. In an unplanned (by the users) civil war program, Windows 95 took out NT. However, I would like to express concern about Linux. After looking over the shoulder of a student depositing RedHat 5.0 on an office box, I easily enough installed it again on a box at home. Later I bought some RedHat “power tools”. I ignored the warning on the box that I would need RedHat 5.2. Then I had to go out and buy RedHat 5.2. My discomfort is not yet my anger at Uncle Bill for changing the file format in Microsoft Word, thus increasing the price of talking with electrical engineers. But really, ... 5.0 ... 5.2? I am perfectly willing to pay a reasonable price for packaged software, without using up my students valuable time searching the overcrowded web sites for the free versions. But I do see some bad handwriting on the wall. Further, when you merely update from 5.0 to 5.2, rather than doing a fresh install, things that used to work can stop working. My postscript printer file thought the printer moved from America to Europe. It changed the internal view of the paper size and forgot to tell me. The menu bars for increasing the console window size no longer show any words, but I can use the menus. I merely have to remember what the menus used to read when they actually had words on them. I am afraid that when Intel buys into Linux, the commercial outlook will lessen the quality of the system. Can you get ruined by success? —Allan Widom, [email protected] Hi Jeff Alami, Regarding your statement in 32bitsonline article “Small Linux Machines” “Which system has the bragging rights of being the smallest Linux computer around? My guess is Schlumberger's CyberFlex Open 16K smart card with the Linux kernel. 
The computer chip on card stores card holder information and even biometric information for secure authentication. Now there's a small Linux machine.” I think you are mistaken here. According to the documentation on Schlumberger's CyberFlex web site, this is a smartcard that runs Java programs on a Java Virtual Machine under control of a small operating system called GPOS (General Purpose Operating System). The notion that the card runs Linux seems to have started with Marjorie Richardson's claim to that effect in the January 1999 issue of Linux Journal when granting the 1998 Editor's Choice Award to the Cyberflex. I don't know whether she has issued a correction to her statement since I haven't seen the February 1999 issue of LJ. I'll copy her on this message in case others haven't pointed this out to her. So it looks like the smallest Linux machine is the one developed from off the shelf parts by Stanford University Wearables Lab. Now _there's_ a small machine that's actually running Linux. Regards, —Ronald L Fox, [email protected] See the MUSCLE web site for more about Linux and smart cards. In an 8/98 letter Mr. M. Leo Cooper states that one can Symbolically link the Netscape cookies file to the NULL device, thus preventing Heinous WebBots access to this info. ( a Most Laudable Goal! ) Mr. Cooper's soultion is thus: ln -s ~/.netscape/cookies /dev/nullMr. Cooper has a Great Idea, on My System; however it Fails, for some Subtle reasons. ( ln ) will Fail IF: The Second File EXISTS ! In other words, as Printed, IF /dev/null Exists, then the Link will Fail ! Important: On Most Unix / Linux systems /dev/null, and /dev/zero are Required for Proper Operation, therefore, Likely to have been Created and Exist. All is Not Lost though, here is how I was able to Accomplish the job: 1) Delete the ~/.netscape/cookies File, copy it to a Backup if you Want the Info, such as: cookies. Sav or cookies. Bak, Then delete cookies! 2) Use this form for the link command: ( ln -s /dev/null ~/.netscape/cookies )This procedure worked Flawlessly for Me, /dev/null is Preserved, and a New @cookies Link is Created under ~/.netscape ! To Test my theory, I logged into a server that I knew was Particularly Nasty about setting cookies. After the session I viewed cookies using the editor in Midnight Commander ( a Favorite ), and cookies displayed Absolutely Nothing, the Void that we Want that Uninvited WebBot to See ! This procedure is a little more involved than just piping something like: ls -l /home/cookies > /dev/nullBut Well Worth the effort as “Big Brother” is snooping Relentlessly, and people have a Right to be Concerned for their Privacy! Now, what do we do about [ Pentium III (c) Intel Inc. ] so called “Hardware Cookies” ? As an engineering student and systems programmer I, and some colleagues are discussing it! Sincerely, —Jim Boedicker, [email protected] On page 82 of your March 1999 issue, in the “Red Hat LINUX Secrets, Second Edition” review, you have 3 spelling errors, or mistyping. In the first column, around the 4th line down, it says “Linux kernel 2.2.35”, i believe that that is mistyped. I think it should be 2.0.35. Also, in the second column, 17th line down, it says “(2.2.35)” again, and 5 words later, “(2.2.32-34)”. I'm pretty sure Duane Hellums, the author, didnt mean to type these wrong versions. I just thought i would bring it to your attention. 
—scott miga [email protected] I just read the review of “Red Hat LINUX Secrets, Second Edition” on page 82 of the March 1999 Linux Journal, and I'd like to point out a few problems with the article: (i) Every time it mentions the kernel, it incorrectly refers to the versions as 2.2.x, when they should be 2.0.x. (ii) It says, “Also helpful would be a loadable module for sound card support to avoid having to manually configure and rebuild the kernel...” I haven't purchased the book, but if it indeed comes with Red Hat 5.1, as the article claims, then the kernel is already preconfigured for sound card use and installs all the modules that Red Hat supports. Red Hat also provides a nice tool, 'sndconfig', to configure your sound card and modify the /etc/conf.modules and /etc/isapnp.conf file (if needed). (iii) It says, “...Linux is in dire need of an intuitive, commercial-quality, freeware, GUI-based word processor...” While technically not freeware, both Corel's WordPerfect and Star Division's StarOffice are available for free for individual, non-commercial use. KDE also comes with a text editor that provides approximately the same functionality as Wordpad. —Jeff Bastian, [email protected] Being in the position in my company to “affect the purchase decisions”, I have encountered some very disturbing trends in the way corporate world sees and uses IT equipment. As many of you out there I have been a victim of the “main-stream” systems which means for example that a $3,500,000 installation by Sun known at the company as “mega-server” every once in a while gives messages saying “No more users allowed to log in”. You know - the licensing thing. The database on which the livelyhood of the company depends is something of a joke. In fact that's a bunch of databases designed by different vendors that handle various aspects of production. They can neither be modifyed nor discarded due to the nature of contracts with their vendors and the sheer amount of money invested in them. So I am currently involved in a project of designing a super-database that would join them all in one working mechanism and would provide a usable interface to the whole system. You would probably say what can be easier all the tools are avilable for any platform one would choose. Not so fast. The company went shopping for yet another vendor to build the database. The final choice was between (no-no, there was no mentioning Linux!) a company that would charge us $20,000 for the whole project and a company that would charge $500,000. (There's no typos in the zeroes, I am talking twenty thousand vs. half a million dollars.) Now try to guess which company won the gig. The half-a-million dollar one. Of course. And I voted for it too. Why? Because the bigger budget means more “petty-cash”, more restaurant invitations, more business trips etc. Besides some people in the company are dependent on the commissions... What kind of commissions do you get off a Linux-based job? $500, at most. All of this makes me very depressed. There is no way in the world that Linux can make its way into corporations like this one. But something happened recently that gives hope. I saw a demo by Silicon Grafics of their new Intel-based workstations. They come in two flavors: NT and Linux. And SGI offers full industry-standard support for these machines. Which is great, because they are not cheap (forget about free). What I am trying to say is, to make it into the corporate world we may need not just commercial, but extremely expensive systems. 
Like $3,500,000 servers. Regarding Chuck Jackson's letter in the March 1999 issue (in response to Mr. Havlik's letter), arguing that advertising can easily be skipped, will this may be true, skipping the ads always interrupts the flow of reading. The same holds true for letters and articles which are “ ... continued on page ...”. I don't see any reason why e.g. the letters to the editor aren't on two consecutive pages in the magazine, but instead are separated by almost ninety pages (in the March 1999 issue). I hope LJ will try to avoid this in the future. I know that most magazines can't do without advertising, however I would think that the main goal of LJ is to convey information about new trends to the linux community - not every ad in LJ matches this criterion. Personally, I subscribed to LJ because of the informative articles, not because I like the advertising so much, but perhaps there's a way out for all of us: I'd like to see a separate section packed with ads, e.g. in the second half of the magazine - just like the german c't magazine - which reduces the number of ads in the first half. This has advantages for both groups of readers, the ones who read the articles and the ones who are interested in the ads. Articles would be mainly in the first half and would contain less intrusive advertising (e.g. ads would only precede or follow the articles, not interrupt them). —Lars Michael, [email protected] Let me briefly state that I have been a loyal subscriber since around issue 5 and thoroughly enjoy LJ. I'm a bit disturbed by your comments to Reilly Burke who wrote the “Red Hat Phenomenon” letter. “ . . . However, Red Hat does seem to be the most popular distribution available, so they must be doing something right.” This sounds exactly like the answer I get when I try to convince users of Evil Empire software that there are alternatives which are more robust and feature rich. Reading between the lines, I get the impression that any distribution which gains market share is a good thing and if one dominates, it must be the most technically sound. By this logic, we should all dump Linux and run one of the flavors of Windows. We should also dump Emacs and LaTex and only use MS Office tools. Clearly they own the largest market share and are therefore are doing the most thing right. A year or two ago I probably would have agreed but we have seen an astounding increase in Linux coverage in general media. Not just technical publications devoted to Unix but a broad range of publications from Byte Magazine and Computer Shopper (both slanted towards MS) to Dr. Dobb's Journal to mainstream non-technical publications. This is mainly due to the announcements by major vendors to support Linux (Sybase, Informix, Corel, etc), and of course Netscape's decision to open the source to their browser. I personally use Red Hat and have been happy with it. I will say that I would not rule out switching to a different distribution if it seemed to have advantages. I am also a bit confused as to what Reilly means by “conventional Unix methods”. I have been using Unix for a number of years and have always thought that there were two main Unix flavors; SysV and BSD. Vendor specific Unix implementations are typically based upon one of these base OSs. I hope this does not sound too harsh, it is not meant to be. I think you and all the staff at LJ have been doing a great job. 
—John Basso, [email protected] Reilly Burke's letter (March 1999) criticized Red Hat and you responded that Red Hat is the most popular distribution so they must be doing something right. Well, Microsoft must be doing one hell of a lot of things right in that case, but that does not imply that what they produce is of high quality. Actually, I did not understand Reilly Burke's criticism of Red Hat as 'difficult to install'. Red Hat 5.2 installed for me with zero hassles. Nor his remark about Red Hat having a 'strange implementation' (unclear what that means). As a UNIX user since version 6 in 1975 and a Linux user since 1995, first using Slackware, now using Red Hat, I have always found all UNIXes to be arcane but very sound. Usually, when someone is criticizing a given UNIX or given text editor, or a given whatever, it is because they are used to something else and are not making room for the learning curve. That being said, we all know that we have a task ahead of us in developing easier install shells both for the Linux OS and for Linux applications, as well as continued work to agree upon a File Hierarchy Standard and Control Panel for GUI-based System Administration. regards, —Dennis G. Allard, [email protected] I the March 1999 issue your reply to Reilly Burke is quite assinine: “Sorry, you will have to ask Red Hat about their policies. I am not in their confidence. However, Red Hat does seem to be the most popular dis- tribution available, so they must be doing something right. —Editor” Sorry, why do you think people read your journal but to get information. For you to put someone off the way you did Reilly Burke is about the most pitiful reply I have ever seen. It makes you seem too lazy to ask Red Hat about it (what your subscribers expect you to do) and like an ass for answering in a flippant manner. WindowsXXX seems to be the most popular distribution, actually, they must be doing something right, you betcha, they market very well, however the software they release seems to me and large numbers of others to be nearly unusable because of stability and frustration caused by difficulty of use. The man has a legitimate question about Red Hat, so that leads me to conclude that you are (as most journals are) only interested in the advertising revenue. —Gene Imes, [email protected]
http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/issues/060/lte60more.html
CC-MAIN-2016-30
refinedweb
5,929
70.84
Newton Series Release Notes¶ 14.1.0¶ The fix for OSSA-2017-005 (CVE-2017-16239) was too far-reaching in that rebuilds can now fail based on scheduling filters that should not apply to rebuild. For example, a rebuild of an instance on a disabled compute host could fail whereas it would not before the fix for CVE-2017-16239. Similarly, rebuilding an instance on a host that is at capacity for vcpu, memory or disk could fail since the scheduler filters would treat it as a new build request even though the rebuild is not claiming new resources. Therefore this release contains a fix for those regressions in scheduling behavior on rebuild while maintaining the original fix for CVE-2017-16239. Note The fix relies on a RUN_ON_REBUILDvariable which is checked for all scheduler filters during a rebuild. The reasoning behind the value for that variable depends on each filter. If you have out-of-tree scheduler filters, you will likely need to assess whether or not they need to override the default value (False) for the new variable. This release includes a fix for bug 1733886 which was a regression introduced in the 2.36 API microversion where the forceparameter was missing from the PUT /os-quota-sets/{tenant_id}API request schema so users could not force quota updates with microversion 2.36 or later. The bug is now fixed so that the forceparameter can once again be specified during quota updates. There is no new microversion for this change since it is an admin-only API. 14.0.10¶ Security Issues¶ OSSA-2017-005: Nova Filter Scheduler bypass through rebuild action By rebuilding an instance, an authenticated user may be able to circumvent the FilterScheduler bypassing imposed filters (for example, the ImagePropertiesFilter or the IsolatedHostsFilter). All setups using the FilterScheduler (or CachingScheduler) are affected. The fix is in the nova-api and nova-conductor services. 14.0. 14.0.5¶ Known Issues¶ recommend setting the value to 0, which means do not trigger a timeout. (This has been made the default in Ocata and Pike.) To modify when a live-migration will fail with a timeout error, please now look at [libvirt]/live_migration_completion_timeoutand [libvirt]/live_migration_downtime. Security Issues¶ [CVE-2017-7214] Failed notification payload is dumped in logs with auth secrets 14.0.4¶ Known Issues¶ Bug Fixes¶ Fixes bug 1662699 which was a regression in the v2.1 API from the block_device_mapping_v2.boot_indexvalidation that was performed in the legacy v2 API. With this fix, requests to create a server with boot_index=Nonewill be treated as if boot_indexwas not specified, which defaults to meaning a non-bootable block device. 14.0.2¶ Prelude¶ A new database schema migration is included in this release to fix bug 1635446. Known Issues¶ Use of the newly introduced optional placement RESTful API in Newton requires WebOb>=1.6.0. This requirement was not reflected prior to the release of Newton in requirements.txt with the lower limit being set to WebOb>=1.2.3. 14.0.1¶ Prelude¶ Nova 13.0.0 (Mitaka) to 14.0.0 (Newton). That said, a few major changes are worth to notice here. This is not an exhaustive list of things to notice, rather just important things you need to know : Latest API microversion supported for Newton is v2.38 Nova now provides a new placement RESTful API endpoint that is for the moment optional where Nova compute nodes use it for providing resources. For the moment, the nova-scheduler is not using it but we plan to check the placement resources for Ocata. 
In case you plan to rolling-upgrade the compute nodes between Newton and Ocata, please look in the notes below how to use the new placement API. Cells V2 now supports booting instances for one cell v2 only. We plan to add a multi-cell support for Ocata. You can prepare for Ocata now by creating a cellv2 now using the nova-manage related commands, but configuring Cells V2 is still fully optional for this cycle. Nova is now using Glance v2 API for getting image resources. API microversions 2.36 and above now deprecate the REST resources in Nova used to proxy calls to other service type APIs (eg. /os-volumes). We’ll still supporting those until we raise our minimum API version to 2.36 which is not planned yet (we’re supporting v2.1 as of now) but you’re encouraged to stop using those resources and rather calling the other services that provide those natively. 14.0.0¶ New Features¶ Add perf event support for libvirt driver. This can be done by adding new configure option ‘enabled_perf_events’ in libvirt section of nova.conf. This feature requires libvirt>=2.0.0. Starting from REST API microversion 2.34 pre-live-migration checks are performed asynchronously. instance-actionsshould be used for getting information about the checks results. New approach allows to reduce rpc timeouts amount, as previous workflow was fully blocking and checks before live-migration make blocking rpc request to both source and destination compute node. New configuration option live_migration_permit_auto_converge has been added to allow hypervisor to throttle down CPU of an instance during live migration in case of a slow progress due to high ratio of dirty pages. Requires libvirt>=1.2.3 and QEMU>=1.6.0. New configuration option live_migration_permit_post_copy has been added to start live migrations in a way that allows nova to switch an on-going live migration to post-copy mode. Requires libvirt>=1.3.3 and QEMU>=2.5.0. If post copy is permitted and version requirements are met it also changes behaviour of ‘live_migration_force_complete’, so that it switches on-going live migration to post-copy mode instead of pausing an instance during live migration. Fix os-console-auth-tokens API to return connection info for all types of tokens, not just RDP. Hyper-V RemoteFX feature. Microsoft RemoteFX enhances the visual experience in RDP connections, including providing access to virtualized instances of a physical GPU to multiple guests running on Hyper-V. In order to use RemoteFX in Hyper-V 2012 R2, one or more DirectX 11 capable display adapters must be present and the RDS-Virtualization server feature must be installed. To enable this feature, the following config option must be set in the Hyper-V compute node’s ‘nova.conf’ file: [hyperv] enable_remotefx = True To create instances with RemoteFX capabilities, the following flavor extra specs must be used: os:resolution. Guest VM screen resolution size. Acceptable values: 1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160 ‘3840x2160’ is only available on Windows / Hyper-V Server 2016. os:monitors. Guest VM number of monitors. Acceptable values: [1, 4] - Windows / Hyper-V Server 2012 R2 [1, 8] - Windows / Hyper-V Server 2016 os:vram. Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. Acceptable values: 64, 128, 256, 512, 1024 There are a few considerations that needs to be kept in mind: Not all guests support RemoteFX capabilities. Windows / Hyper-V Server 2012 R2 does not support Generation 2 VMs with RemoteFX capabilities. 
Per resolution, there is a maximum amount of monitors that can be added. The limits are as follows: For Windows / Hyper-V Server 2012 R2: 1024x768: 4 1280x1024: 4 1600x1200: 3 1920x1200: 2 2560x1600: 1 For Windows / Hyper-V Server 2016: 1024x768: 8 1280x1024: 8 1600x1200: 4 1920x1200: 4 2560x1600: 2 3840x2160: 1 Microversion v2.26 allows to create/update/delete simple string tags. They can be used for filtering servers by these tags. Added microversion v2.35 that adds pagination support for keypairs with the help of new optional parameters ‘limit’ and ‘marker’ which were added to GET /os-keypairs request. Added microversion v2.28 from which hypervisor’s ‘cpu_info’ field returned as JSON object by sending GET /v2.1/os-hypervisors/{hypervisor_id} request. Virtuozzo Storage is available as a volume backend in libvirt virtualization driver. Note Only qcow2/raw volume format supported, but not ploop. Virtuozzo ploop disks can be resized now during “nova resize”. Virtuozzo instances with ploop disks now support the rescue operation A new nova-manage command has been added to discover any new hosts that are added to a cell. If a deployment has migrated to cellsv2 using either the simple_cell_setup or the map_cell0/map_cell_and_hosts/map_instances combo then anytime a new host is added to a cell this new “nova-manage cell_v2 discover_hosts” needs to be run before instances can be booted on that host. If multiple hosts are added at one time the command only needs to be run one time to discover all of them. This command should be run from an API host, or a host that is configured to use the nova_api database. Please note that adding a host to a cell and not running this command could lead to build failures/reschedules if that host is selected by the scheduler. The discover_hosts command is necessary to route requests to the host but is not necessary in order for the scheduler to be aware of the host. It is advised that nova-compute hosts are configured with “enable_new_services=False” in order to avoid failures before the hosts have been discovered. On evacuate evacuate operation to the destination without verifying the scheduler. On live-migrate live-migrate operation to the destination without verifying the scheduler. The 2.37 microversion adds support for automatic allocation of network resources for a project when networks: autois specified in a server create request. If the project does not have any networks available to it and the auto-allocated-topologyAPI is available in the Neutron networking service, Nova will call that API to allocate resources for the project. There is some setup required in the deployment for the auto-allocated-topologyAPI to work in Neutron. See the Additional features section of the OpenStack Networking Guide for more details for setting up this feature in Neutron. Note The API does not default to ‘auto’. However, python-novaclient will default to passing ‘auto’ for this microversion if no specific network values are provided to the CLI. Note This feature is not available until all of the compute services in the deployment are running Newton code. This is to avoid sending a server create request to a Mitaka compute that can not understand a network ID of ‘auto’ or ‘none’. If this is the case, the API will treat the request as if networkswas not in the server create request body. Once all computes are upgraded to Newton, a restart of the nova-api service will be required to use this new feature. 
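As a rough illustration of the new behavior, a server create request at microversion 2.37 that asks Nova to auto-allocate networking carries the new value directly in the networks entry; the name, flavor, and image references below are placeholders only:
POST /v2.1/servers
OpenStack-API-Version: compute 2.37
{
    "server": {
        "name": "auto-net-vm",
        "flavorRef": "1",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
        "networks": "auto"
    }
}
Requesting "networks": "none" instead tells Nova to allocate no networking for the server.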
Nova now defaults to using the glance version 2 protocol for all backend operations for all virt drivers. A use_glance_v1config option exists to revert to glance version 1 protocol if issues are seen, however that will be removed early in Ocata, and only glance version 2 protocol will be used going forward. Adds a new feature to the ironic virt driver, which allows multiple nova-compute services to be run simultaneously. This uses consistent hashing to divide the ironic nodes between the nova-compute services, with the hash ring being refreshed each time the resource tracker runs. Note that instances will still be owned by the same nova-compute service for the entire life of the instance, and so the ironic node that instance is on will also be managed by the same nova-compute service until the node is deleted. This also means that removing a nova-compute service will leave instances managed by that service orphaned, and as such most instance actions will not work until a nova-compute service with the same hostname is brought (back) online. When nova-compute services are brought up or down, the ring will eventually re-balance (when the resource tracker runs on each compute). This may result in duplicate compute_node entries for ironic nodes while the nova-compute service pool is re-balancing. However, because any nova-compute service running the ironic virt driver can manage any ironic node, if a build request goes to the compute service not currently managing the node the build request is for, it will still succeed. There is no configuration to do to enable this feature; it is always enabled. There are no major changes when only one compute service is running. If more compute services are brought online, the bigger changes come into play. Note that this is tested when running with only one nova-compute service, but not more than one. As such, this should be used with caution for multiple compute hosts until it is properly tested in CI. Multitenant networking for the ironic compute driver is now supported. To enable this feature, ironic nodes must be using the ‘neutron’ network_interface. The Libvirt driver now uses os-vif plugins for handling plug/unplug actions for the Linux Bridge and OpenVSwitch VIF types. Each os-vif plugin will have its own group in nova.conf for configuration parameters it needs. These plugins will be installed by default as part of the os-vif module installation so no special action is required. Added hugepage support for POWER architectures. Microversions may now (with microversion 2.27) be requested with the “OpenStack-API-Version: compute 2.27” header, in alignment with OpenStack-wide standards. The original format, “X-OpenStack-Nova-API-Version: 2.27”, may still be used. Nova has been enabled for mutable config. Certain options may be reloaded by sending SIGHUP to the correct process. Live migration options will apply to live migrations currently in progress. Please refer to the configuration manual. DEFAULT.debug libvirt.live_migration_completion_timeout libvirt.live_migration_progress_timeout The following legacy notifications have been been transformed to a new versioned payload: instance.delete instance.pause instance.power_on instance.shelve instance.suspend instance.restore instance.resize instance.update compute.exception Every versioned notification has a sample file stored under doc/notification_samples directory. Consult for more information. Nova is now configured to work with two oslo.policy CLI scripts that have been added. 
The first of these can be called like “oslopolicy-list-redundant –namespace nova” and will output a list of policy rules in policy.[json|yaml] that match the project defaults. These rules can be removed from the policy file as they have no effect there. The second script can be called like “oslopolicy-policy-generator –namespace nova –output-file policy-merged.yaml” and will populate the policy-merged.yaml file with the effective policy. This is the merged results of project defaults and config file overrides. Added microversion v2.33 which adds paging support for hypervisors, the admin is able to perform paginate query by using limit and marker to get a list of hypervisors. The result will be sorted by hypervisor id. The nova-compute worker now communicates with the new placement API service. Nova determines the placement API service by querying the OpenStack service catalog for the service with a service type of ‘placement’. If there is no placement entry in the service catalog, nova-compute will log a warning and no longer try to reconnect to the placement API until the nova-worker process is restarted. A new [placement] section is added to the nova.conf configuration file for configuration options affecting how Nova interacts with the new placement API service. This contains the usual keystone auth and session options. The pointer_model configuration option and hw_pointer_model image property was added to specify different pointer models for input devices. This replaces the now deprecated use_usb_tablet option. The nova-policy command line is implemented as a tool to experience the under-development feature policy discovery. User can input the credentials information and the instance info, the tool will return a list of API which can be allowed to invoke. There isn’t any contract for the interface of the tool due to the feature still under-development. Add a nova-manage command to refresh the quota usages for a project or user. This can be used when the usages in the quota-usages database table are out-of-sync with the actual usages. For example, if a resource usage is at the limit in the quota_usages table, but the actual usage is less, then nova will not allow VMs to be created for that project or user. The nova-manage command can be used to re-sync the quota_usages table with the actual usage. Libvirt driver will attempt to update the time of a suspended and/or a migrated guest in order to keep the guest clock in sync. This operation will require the guest agent to be configured and running in order to be able to run. However, this operation will not be disruptive. This release includes a new implementation of the vendordata metadata system. Please see the blueprint at for a detailed description. There is also documentation in the Nova source tree in vendordata.rst. The 2.32 microversion adds support for virtual device role tagging. Device role tagging is an answer to the question ‘Which device is which,. The 2.32 microversion also adds the 2016-06-30 version to the metadata API. Starting with 2016-06-30, the metadata contains a ‘devices’ sections which lists any devices that are tagged as described in the previous paragraph, along with their hardware metadata. Known Issues¶ If a deployer has updated their deployment to using cellsv2 using either the simple_cell_setup or the map_cell0/map_cell_and_hosts/map_instances combo and they add a new host into the cell it may cause build failures or reschedules until they run the “nova-manage cell_v2 discover_hosts” command. 
This is because the scheduler will quickly become aware of the host but nova-api will not know how to route the request to that host until it has been “discovered”. In order to avoid that it is advised that new computes are disabled until the discover command has been run.. When running Nova Compute and Cinder Volume or Backup services on the same host they must use a shared lock directory to avoid rare race conditions that can cause volume operation failures (primarily attach/detach of volumes). This is done by setting the “lock_path” to the same directory in the “oslo_concurrency” section of nova.conf and cinder.conf. This issue affects all previous releases utilizing os-brick and shared operations on hosts between Nova Compute and Cinder data services. When using virtual device role tagging, the metadata on the config drive lags behind the metadata obtained from the metadata API. For example, if a tagged virtual network interface is detached from the instance, its tag remains in the metadata on the config drive. This is due to the nature of the config drive, which, once written, cannot be easily updated by Nova. Upgrade Notes¶ All cloudpipe configuration options have been added to the ‘cloudpipe’ group. They should no longer be included in the ‘DEFAULT’ group. All crypto configuration options have been added to the ‘crypto’ group. They should no longer be included in the ‘DEFAULT’ group. All WSGI configuration options have been added to the ‘wsgi’ group. They should no longer be included in the ‘DEFAULT’ group. Aggregates are being moved to the API database for CellsV2. In this release, the online data migrations will move any aggregates you have in your main database to the API database, retaining all attributes. Until this is complete, new attempts to create aggregates will return an HTTP 409 to avoid creating aggregates in one place that may conflict with aggregates you already have and are yet to be migrated. Note that aggregates can no longer be soft-deleted as the API database does not replicate the legacy soft-delete functionality from the main database. As such, deleted aggregates are not migrated and the behavior users will experience will be the same as if a purge of deleted records was performed. The nova-manage db online_data_migrations command will now migrate server groups to the API database. New server groups will be automatically created in the API database but existing server groups must be manually migrated using the nova-manage command. The get_metrics API has been replaced by populate_metrics in nova.compute.monitors.base module. This change is introduced to allow each monitor plugin to have the flexibility of setting it’s own metric value types. The in-tree metrics plugins are modified as a part of this change. However, the out-of-tree plugins would have to adapt to the new API in order to work with nova. For the Virtuozzo Storage driver to work with os-brick <1.4.0, you need to allow “pstorage-mount” in rootwrap filters for nova-compute. You must update the rootwrap configuration for the compute service if you use ploop images, so that “ploop grow” filter is changed to “prl_disk_tool resize”. The recordconfiguration option for the console proxy services (like VNC, serial, spice) is changed from boolean to string. It specifies the filename that will be used for recording websocket frames. ‘nova-manage db sync’ can now sync the cell0 database. The cell0 db is required to store instances that cannot be scheduled to any cell. 
Before the ‘db sync’ command is called a cell mapping for cell0 must have been created using ‘nova-manage cell_v2 map_cell0’. This command only needs to be called when upgrading to CellsV2. A new nova-manage command has been added which will upgrade a deployment to cells v2. Running the command will setup a single cell containing the existing hosts and instances. No data or instances will be moved during this operation, but new data will be added to the nova_api database. New instances booted after this point will be placed into the cell. Please note that this does not mean that cells v2 is fully functional at this time, but this is a significant part of the effort to get there. The new command is “nova-manage cell_v2 simple_cell_setup –transport_url <transport_url>” where transport_url is the connection information for the current message queue used by Nova. deprecated configuration option client_log_levelof the section [ironic]has been deleted. Please use the config options log_config_appendor default_log_levelsof the [DEFAULT]section. A new nova-manage command ‘nova-manage cell_v2 map_cell0’ is now available. Creates a cell mapping for cell0, which is used for storing instances that cannot be scheduled to any cell. This command only needs to be called when upgrading to CellsV2. The default value of the pointer_modelconfiguration option has been set to ‘usbtablet’. The following policy enforcement points have been removed as part of the restructuring of the Nova API code. The attributes that could have been hidden with these policy points will now always be shown / accepted. os_compute_api:os-disk-config- show / accept OS-DCF:diskConfigparameter on servers os-access-ips- show / accept accessIPv4and accessIPv6parameters on servers The following entry points have been removed nova.api.v21.extensions.server.resize- allowed accepting additional parameters on server resize requests. nova.api.v21.extensions.server.update- allowed accepting additional parameters on server update requests. nova.api.v21.extensions.server.rebuild- allowed accepting additional parameters on server rebuild requests. Flavors are being moved to the API database for CellsV2. In this release, the online data migrations will move any flavors you have in your main database to the API database, retaining all attributes. Until this is complete, new attempts to create flavors will return an HTTP 409 to avoid creating flavors in one place that may conflict with flavors you already have and are yet to be migrated. Note that flavors can no longer be soft-deleted as the API database does not replicate the legacy soft-delete functionality from the main database. As such, deleted flavors are not migrated and the behavior users will experience will be the same as if a purge of deleted records was performed. The 2.37 microversion enforces the following: networksis required in the server create request body for the API. Specifying networks: autois similar to not requesting specific networks when creating a server before 2.37. The uuidfield in the networksobject of a server create request is now required to be in UUID format, it cannot be a random string. More specifically, the API used to support a nic uuid with a “br-” prefix but that is a legacy artifact which is no longer supported. It is now required that the glance environment used by Nova exposes the version 2 REST API. This API has been available for many years, but previously Nova only used the version 1 API. 
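For the common single-cell case, the cells v2 commands mentioned above reduce to a short sequence run from an API host; the message queue URL below is a placeholder and should match the transport actually used by your deployment:
nova-manage cell_v2 simple_cell_setup --transport_url rabbit://user:password@rabbit-host:5672/
and, whenever compute hosts are added to the cell later on:
nova-manage cell_v2 discover_hosts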
imageRef input to the REST API is now restricted to be UUID or an empty string only. imageRef input while create, rebuild and rescue server etc must be a valid UUID now. Previously, a random image ref url containing image UUID was accepted. But now all the reference of imageRef must be a valid UUID (with below exception) otherwise API will return 400. Exception- In case boot server from volume. Previously empty string was allowed in imageRef and which is ok in case of boot from volume. Nova will keep the same behavior and allow empty string in case of boot from volume only and 400 in all other case. Prior to Grizzly release default instance directory names were based on instance.id field, for example directory for instance could be named instance-00000008. In Grizzly this mechanism was changed, instance.uuid is used as an instance directory name, e.g. path to instance: /opt/stack/data/nova/instances/34198248-5541-4d52-a0b4-a6635a7802dd/. In Newton backward compatibility is dropped. For instances that haven’t been restarted since Folsom and earlier maintanance should be scheduled before upgrade(stop, rename directory to instance.uuid, then start) so Nova will start using new paths for instances. The ironic driver now requires python-ironicclient>=1.5.0 (previously >=1.1.0), and requires the ironic service to support API version 1.20 or higher. As usual, ironic should be upgraded before nova for a smooth upgrade process. The ironic driver now requires python-ironicclient>=1.6.0, and requires the ironic service to support API version 1.21. Keypairs have been moved to the API database, using an online data migration. During the first phase of the migration, instances will be given local storage of their key, after which keypairs will be moved to the API database. Default value of live_migration_tunnelled config option in libvirt section has been changed to False. After upgrading nova to Newton all live migrations will be non-tunnelled unless live_migration_tunnelled is explicitly set to True. It means that, by default, the migration traffic will not go through libvirt and therefore will no longer be encrypted. With the introduction of os-vif, some networking related configuration options have moved, and users will need to update their nova.conf. For OpenVSwitch users the following options have moved from [DEFAULT]to [vif_plug_ovs]- network_device_mtu - ovs_vsctl_timeout For Linux Bridge users the following options have moved from [DEFAULT]to [vif_plug_linux_bridge]- use_ipv6 - iptables_top_regex - iptables_bottom_regex - iptables_drop_action - forward_bridge_interface - vlan_interface - flat_interface - network_device_mtu For backwards compatibility, and ease of upgrade, these options will continue to work from [DEFAULT]during the Newton release. However they will not in future releases. The minimum required version of libvirt has been increased to 1.2.1 The minimum required QEMU version is now checked and has been set to 1.5.3 The network_api_class option was deprecated in Mitaka and is removed in Newton. The use_neutron option replaces this functionality. The newton release has a lot of online migrations that must be performed before you will be able to upgrade to ocata. Please take extra note of this fact and budget time to run these online migrations before you plan to upgrade to ocata. These migrations can be run without downtime with nova-manage db online_data_migrations. 
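The os-vif related moves above are a plain relocation of existing settings rather than a rename; an illustrative nova.conf fragment (the values are examples only) changes from
[DEFAULT]
network_device_mtu = 9000
ovs_vsctl_timeout = 120
to
[vif_plug_ovs]
network_device_mtu = 9000
ovs_vsctl_timeout = 120
and the Linux Bridge options (flat_interface, vlan_interface, and so on) move into a [vif_plug_linux_bridge] section in the same way.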
The notify_on_state_changeconfiguration option was StrOpt, which would accept any string or None in the previous release. Starting in the Newton release, it allows only three values: None, vm_state, vm_and_task_state. The default value is None. The deprecated auth parameter admin_auth_token was removed from the [ironic] config option group. The use of admin_auth_token is insecure compared to the use of a proper username/password. The previously deprecated config option listen```of the group ``serial_consolehas been removed, as it was never used in the code. The ‘manager’ option in [cells] group was deprecated in Mitaka and now it is removed completely in newton. There is no impact. The following deprecated configuration options have been removed from the cindersection of nova.conf: ca_certificates_file api_insecure http_timeout The ‘destroy_after_evacuate’ workaround option has been removed as the workaround is no longer necessary. The config options ‘osapi_compute_ext_list’ and ‘osapi_compute_extension’ were deprecated in mitaka. Hence these options were completely removed in newton, as v2 API is removed and v2.1 API doesn’t provide the option of configuring extensions. The deprecated config option remove_unused_kernelshas been removed from the [libvirt]config section. No replacement is required, as this behaviour is no longer relevant. The extensible resource tracker was deprecated in the 13.0.0 release and has now been removed. Custom resources in the nova.compute.resources namespace selected by the compute_resources configuration parameter will not be loaded. The legacy v2 API code was deprecated since Liberty release. The legacy v2 API code was removed in Newton release. We suggest that users should move to v2.1 API which compatible v2 API with more restrict input validation and microversions support. If users are still looking for v2 compatible API before switch to v2.1 API, users can use v2.1 API code as v2 API compatible mode. That compatible mode is closer to v2 API behaviour which is v2 API compatible without restrict input validation and microversions support. So if using openstack_compute_api_legacy_v2 in /etc/nova/api-paste.ini for the API endpoint /v2, users need to switch the endpoint to openstack_compute_api_v21_legacy_v2_compatible instead. The ‘live_migration_flag’ and ‘block_migration_flag’ options in libvirt section that were deprecated in Mitaka have been completely removed in Newton, because nova automatically sets correct migration flags. New config options has been added to retain possibility to turn tunnelling, auto-converge and post-copy on/off, respectively named live_migration_tunnelled, live_migration_permit_auto_converge and live_migration_permit_post_copy. The ‘memcached_server’ option in DEFAULT section which was deprecated in Mitaka has been completely removed in Newton. This has been replaced by options from oslo cache section. The service subcommand of nova-manage was deprecated in 13.0. Now in 14.0 the service subcommand is removed. Use service-* commands from python-novaclient or the os-services REST resource instead. The network_device_mtu option in Nova is deprecated for removal in 13.0.0 since network MTU should be specified when creating the network. Legacy v2 API code is already removed. A set of policy rules in the policy.json, which are only used by legacy v2 API, are removed. Both v2.1 API and v2.1 compatible mode API are using same set of new policy rules which are with prefix os_compute_api. 
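For deployments still serving /v2 through openstack_compute_api_legacy_v2, the switch described above is a small edit to /etc/nova/api-paste.ini; a sketch of the relevant composite section (the surrounding lines depend on your existing paste configuration) is:
[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v2: openstack_compute_api_v21_legacy_v2_compatible
/v2.1: openstack_compute_api_v21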
Removed the security_group_apiconfiguration option that was deprecated in Mitaka. The correct security_group_api option will be chosen based on the value of use_neutronwhich provides a more coherent user experience. The deprecated volume_api_classconfig option has been removed. We only have one sensible backend for it, so don’t need it anymore. The libvirt option ‘iscsi_use_multipath’ has been renamed to ‘volume_use_multipath’. The ‘wsgi_default_pool_size’ and ‘wsgi_keep_alive’ options have been renamed to ‘default_pool_size’ and ‘keep_alive’ respectively. The following deprecated configuration options have been removed from the neutronsection of nova.conf: ca_certificates_file api_insecure url_timeout The ability to load a custom scheduler host manager via the scheduler_host_managerconfiguration option was deprecated in the 13.0.0 Mitaka release and is now removed in the 14.0.0 Newton release. DB2 database support was removed from tree. This is a non open source database that had no 3rd party CI, and a set of constraints that meant we had to keep special casing it in code. It also made the online data migrations needed for cells v2 and placement engine much more difficult. With 0% of OpenStack survey users reporting usage we decided it was time to remove this to focus on features needed by the larger community. Delete the deprecated glance.host, glance.port, glance.protocolconfiguration options. glance.api_serversmust be set to have a working config. There is currently no default for this config option, so a value must be set. Only virt drivers in the nova.virt namespace may be loaded. This has been the case according to nova docs for several releases, but a quirk in some library code meant that loading things outside the namespace continued to work unintentionally. That has been fixed, which means “compute_driver = nova.virt.foo” is invalid (and now enforced as such), and should be “compute_driver = foo” instead. The default policy for updating volume attachments, commonly referred to as swap volume, has been changed from rule:admin_or_ownerto rule:admin_api. This is because it is called from the volume service when migrating volumes, which is an admin-only operation by default, and requires calling an admin-only API in the volume service upon completion. So by default it would not work for non-admins. The deprecated osapi_v21.enabled config option has been removed. This previously allowed you a way to disable the v2.1 API. That is no longer something we support, v2.1 is mandatory. Now VMwareVCDriver will set disk.EnableUUID=True by default in all guest VM configuration file. To enable udev to generate /dev/disk/by-id Deprecation Notes¶ All barbican config options in Nova are now deprecated and may be removed as early as 15.0.0 release. All of these options are moved to the Castellan library. The cells.driver configuration option is now deprecated and will be removed at Ocata cycle. The feature to download Glance images via file transfer instead of HTTP is now deprecated and may be removed as early as the 15.0.0 release. The config options filesystemsin the section image_file_urlare affected as well as the derived sections image_file_url:<list entry name>and their config options idand mountpoint. As mentioned in the release notes of the Mitaka release (version 13.0.0), the EC2API support was fully removed. The s3 image service related config options were still there but weren’t used anywhere in the code since Mitaka. 
These are now deprecated and may be removed as early as the 15.0.0 release. This affects image_decryption_dir, s3_host, s3_port, s3_access_key, s3_secret_key, s3_use_ssl, s3_affix_tenant. The default_flavorconfig option is now deprecated and may be removed as early as the 15.0.0 release. It is an option which was only relevant for the deprecated EC2 API and is not used in the Nova API. The fatal_exception_format_errorsconfig option is now deprecated and may be removed as early as the 15.0.0 release. It is an option which was only relevant for Nova internal testing purposes to ensure that errors in formatted exception messages got detected. The image_info_filename_pattern, checksum_base_images, and checksum_interval_secondsoptions have been deprecated in the [libvirt]config section. They are no longer used. Any value given will be ignored. The following nova-manage commands are deprecated for removal in the Nova 15.0.0 Ocata release: nova-maange account scrub nova-manage fixed * nova-manage floating * nova-manage network * nova-manage project scrub nova-manage vpn * These commands only work with nova-network which is itself deprecated in favor of Neutron. The nova-manage vm listcommand is deprecated and will be removed in the 15.0.0 Ocata release. Use the nova listcommand from python-novaclient instead.). The config option snapshot_name_templatein the DEFAULTgroup is now deprecated and may be removed as early as the 15.0.0 release. The code which used this option isn’t used anymore since late 2012. The nova-allbinary is deprecated. This was an all in one binary for nova services used for testing in the early days of OpenStack, but was never intended for real use. Nova network is now deprecated. Based on the results of the current OpenStack User Survey less than 10% of our users remain on Nova network. This is the signal that it is time migrate to Neutron. No new features will be added to Nova network, and bugs will only be fixed on a case by case basis. The /os-certificatesAPI is deprecated, as well as the nova-certservice which powers it. The related config option cert_topicis also now marked for deprecation and may be removed as early as 15.0.0 Ocata release. This is a vestigial part of the Nova API that existed only for EC2 support, which is now maintained out of tree. It does not interact with any of the rest of nova, and should not just be used as a certificates as a service, which is all it is currently good for. All the APIs which proxy to other services were deprecated in this API version. Those APIs will return 404 on Microversion 2.36 or higher. The API user should use native API as instead of using those pure proxy for other REST APIs. The quotas and limits related to network resources ‘fixed_ips’, ‘floating ips’, ‘security_groups’, ‘security_group_rules’, ‘networks’ are filtered out of os-quotas and limit APIs respectively and those quotas should be managed through OpenStack network service. For using nova-network, you only can use API and manage quotas under Microversion ‘2.36’. The ‘os-fping’ API was deprecated also, this API is only related to nova-network and depend on the deployment. The deprecated APIs are as below: /images /os-networks /os-fixed-ips /os-floating-ips /os-floating-ips-bulk /os-floating-ip-pools /os-floating-ip-dns /os-security-groups /os-security-group-rules /os-security-group-default-rules /os-volumes /os-snapshots /os-baremetal-nodes /os-fping Nova option ‘use_usb_tablet’ will be deprecated in favor of the global ‘pointer_model’. 
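In configuration terms that last deprecation is a straight swap; assuming use_usb_tablet currently sits in the [libvirt] section of nova.conf, the old and new forms look roughly like:
[libvirt]
# deprecated
use_usb_tablet = True
[DEFAULT]
# replacement
pointer_model = usbtablet
The hw_pointer_model image property mentioned earlier can still override the pointer model per image.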
The quota_driver configuration option is now deprecated and will be removed in a subsequent release. Corrected response for the case where an invalid status value is passed as a filter to the list servers API call. As there are sufficient statuses defined already, any invalid status should not be accepted. As of microversion 2.38, the API will return 400 HTTPBadRequest if an invalid status is passed to list servers API for both admin as well as non admin user. Fixed bug #1579706: “Listing nova instances with invalid status raises 500 InternalServerError for admin user”. Now passing an invalid status as a filter will return an empty list. A subsequent patch will then correct this to raise a 400 Bad Request when an invalid status is received. When instantiating an instance based on an image with the metadata hw_vif_multiqueue_enabled=true, if flavor.vcpus is less than the limit of the number of queues on a tap interface in the kernel, nova uses flavor.vcpus as the number of queues. if not, nova uses the limit. The limits are as follows: kernels prior to 3.0: 1 kernels 3.x: 8 kernels 4.x: 256 The API policy defaults are now defined in code like configuration options. Because of this, the sample policy.json file that is shipped with Nova is empty and should only be necessary if you want to override the API policy from the defaults in the code. To generate the policy file you can run: oslopolicy-sample-generator --config-file=etc/nova/nova-policy-generator.conf network_allocate_retries config param now allows only positive integer values or 0. The api_rate_limitconfiguration option has been removed. The option was disabled by default back in the Havana release since it’s effectively broken for more than one API worker. It has been removed because the legacy v2 API code that was using it has also been removed. The default flavors that nova has previously had are no longer created as part of the first database migration. New deployments will need to create appropriate flavors before first use. The network configuration option ‘fake_call’ has been removed. It hasn’t been used for several cycles, and has no effect on any code, so there should be no impact. The XenServer configuration option ‘iqn_prefix’ has been removed. It was not used anywhere and has no effect on any code, so there should be no impact. Virt drivers are no longer loaded with the import_object_ns function, which means that only virt drivers in the nova.virt namespace can be loaded. New configuration option sync_power_state_pool_size has been added to set the number of greenthreads available for use to sync power states. Default value (1000) matches the previous implicit default value provided by Greenpool. This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons.
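For that last option, tuning is a one-line change on the compute nodes; a minimal sketch, assuming the option lives in the [DEFAULT] section and using an arbitrary example value:
[DEFAULT]
# cap the number of greenthreads used to sync instance power states,
# reducing concurrent requests against the hypervisor
sync_power_state_pool_size = 100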
https://docs.openstack.org/releasenotes/nova/newton.html
CC-MAIN-2021-17
refinedweb
6,777
56.35
What is vanilla?
vanilla is a Python wrapper around the macOS native Cocoa user interface framework, letting you build interfaces in code rather than with Apple's own Interface Builder. vanilla was developed by Tal Leming, inspired by the classic W, a Python wrapper around the native UI layer of Classic MacOS and the library used in RoboFog, a fork of Fontographer with a built-in Python interpreter. A wide array of tools are created with vanilla, from small scripts to extensions and applications. RoboFont itself is built using vanilla.
Using vanilla
vanilla is embedded in RoboFont, so you can start using it right away in your scripts:
from vanilla import *
vanilla has very complete documentation which is available directly from your scripting environment:
from vanilla import Window
help(Window)
The above snippet will return the provided documentation for the Window object, including some sample code:
from vanilla import *
class WindowDemo:
The screenshots below give an overview of the UI elements included in vanilla. These images are produced using vanilla's own test files, which are included in the module.
from vanilla.test.testAll import Test
Test()
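A complete version of such a WindowDemo is only a few lines; in this sketch the window size, button geometry, and callback name are arbitrary illustrative choices:
from vanilla import Window, Button

class WindowDemo:

    def __init__(self):
        # a 200 x 70 point window with a title
        self.w = Window((200, 70), "Window Demo")
        # a button stretching across the window; negative values in the
        # position tuple are measured from the right and bottom edges
        self.w.myButton = Button((10, 10, -10, 20), "My Button",
                                 callback=self.myButtonCallback)
        self.w.open()

    def myButtonCallback(self, sender):
        print("button hit!")

WindowDemo()
Running this from the scripting window opens the window immediately, since the constructor calls open() on it.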
https://doc.robofont.com/documentation/topics/vanilla/
CC-MAIN-2021-39
refinedweb
172
52.6
Hi, You say you incorporated, but are not a C-Corp ... this means you must be an S-CORP? (or are you referring to setting up an LLC)? If you are an LLC and only one member the LLC IS a disregarded entity If, however, you actually incorporated as an S-Corp, the S-Corp will have to file an 1120_s Everything is mentioned in my question and I think you have no expertise in this field Either Way, if have income, you will have to file (either as the individual that does a schedule C to report the income from his disregarded entity OR as an s-corp shareholder, by taking the K-1 form that the S-Corp must issue and transferring that to the person return as well NO you said INCORPORATED, but then used the word disregarded entity the two are mutually exclusive ONly sole proprietors, LLSs care disregarded entities See this: Are These: You said you incorporated Disregarded entity DOES NOT mean that you don't have to file ... it just means that the income is reported from the individual rather than the single member LLC or the Sole proprietorship --- meaning that the ENTITY is disregarded, as a separate taxpayer Going back the original question ... if you mean to say that you SET UP a single member LLC and did not form a corporation (again a singlemember LLC is a hybrid between partnership and corporation) but NOT an actual corporation so you DID NOT incorporate ... then yes, the LLC WOULD BE a disregarded entity when you say it was recently incorporated, that implies that you have ALREADY either filed and 8832 or a 2553 to elect corporate tax treatment ... as eithe a or S, respectively It is not a corporation It is a LLC THen as a single member LLS it IS a disregarded entity sorry "LLC" THis DOES NOT, however, mean tat no retirn is to be file ... it just means that the LLC itself doesn't file Yes I know that but what about non resident aliens?Foreign: Is it true Delware LLC owned by non resident aliens can't be a disregarded entity YOu, as a non-resident alien, will have ECI ... income effectively connected to a business in the US, which means that you will need to file, as outlined above Let me check on that one ... going to both tax code and Del bus code now .... were you given any sort of citation on that (I DO NOT know that to be trus) Have to wonder if they were just saying that you will have to file If I opt to file the f8832 to elect it as C-Corp immediately after forming the LLC or at the time of filing returns. That will simply mean that (the C-Corp) will file it's own tax form 1120, and if you want to leave the money INSIDE the corp you can ... BUT if you either take salary or dividends, we're back t your having to file as a non-re alien having ECI what I own is not a C corp if you file the 8832 it will be we have 2 different questons goin on here (1) can non-res alien own a disregarded entity (and I think they can ... with check that for you) and (2) the C-corp questions you just asked about ON the first question... YES you can own a disregarded entity (single member llc) See THIS: Owning the Property as an Individual or Single Member LLC (SMLLC): An SMLLC is a limited liability company with only one member, the nonresident alien for purposes of this article. The SMLLC is beneficial because it provides a layer of liability protection for the nonresident alien from lawsuits from tenants. Additionally, we generally believe it is good practice to keep the bank account and assets related to the investment property separate from personal bank accounts and assets. 
The single member LLC approach allows for a clean and clear separation. This approach may require a little more upfront documentation, but does not complicate tax matters, because the owner may elect to complete his income tax return in the same manner as if the LLC did not exist. Although the SMLLC keeps the individual separate from the entity for legal purposes, the SMLLC is a 'disregarded entity' for tax purposes, unless the owners elects to be treated as a corporation. As a disregarded entity, the IRS looks through to the owner of the LLC to determine tax treatment of payments of income, withholding, etc. Any income received by the LLC will be reported by the owner, on his annual individual income tax return. If someone owns a Delware LLC the tax is passed on to the owners but for non resident alien doesn't have SSN or ITIN(in my case) I see you're still typing I'll wait... :) (or if you want a response here let me know) so how is it possible to file the returns ? getting a ITIN is not a viable option for me Then you are truly stuck here ... IRS requires the filing of a return (if there is a profit), because you will have ECI (Effectively Connected Income) to a US based corp... you CAN SHOW DOCUMENTS and use form W-& without being resident NOW... if you have a way of getting to the money... and you are outside a place where IRS has information sharing or tax treaties... then they may neve be able to enfore BUT I wouldn't be doing my job if I didn't tell you that THEY (IRS) say you'll need to file, IF you have that income Here's the IRS guidance on that: Depending on the NATURE of your business, if you are distributing or working through some other US company they MAY simply WITHHOLD taxes from the money they send you THIS would get it done (although you may end up paying more that way, because after filing you would probably find that a base level of that income is below taxation threshhold AND that you would be able to deduct business expenses) IF you can find a way to file W-7 and gete ITIN, you'll only have to pay tax on PROFITS let me rephrase it If I opt to file the f8832 to elect it as C-Corp, should I file f8832 immediately after forming the LLC or at the time of filing returns. I think the effect would be the same AS long as you file the 8832 DURING the first tax year of operation (so that by year-end you have a c-Corp) ... and actually this would be before filing of returns (which would be APR of the following year if an individual - through disregarded entity - or March 15, if a corporation ... BUT let me look at the form ... juuust a second OK... see this, (the deadline date is not there, bit I'll get that from the instructions in a minute) ... the problem I see here is what's required on the n8832 itself ... see this: LLC Taxed as Corporation An LLC can also elect to pay income tax as what the IRS calls "an association taxable as a corporation." This election is made on IRS Form 8832. Here are some things you need to know about this election: That last bullet point there gets you again, doesn't it? Ahh, and here is the WHEN to FILE 8832 part: When to FileThe. Have you look at the W-7 info? I'm not so sure that you cannot get an ITIN (although I don't know all of your situation) ...the w-7 would not need to be sent in until it's time to send in that first 1040NR individual tax return (which would be the situation is you go the disregarded entity route) ... 
and only then when there is enough profit to put you above the filing thresshold sorry "IF" you go the disregarded entity route... ... I'd like to continue to work with you on this .. There SHOULD be a way to get this done. Let me know … Lane If this HAS helped, I would appreciate a feedback rating of 3 (OK) or better … That's the only way they will pay us here. HOWEVER, if you need more on this, PLEASE COME BACK here, so you won't be charged for another question. I am a non resident alien andI don't have any visa. I am not residing in US and I don't have any visa but I have formed a LLC in Delaware. Though the type of business is really doesn't matter here I am including the answers for your questions. Incorporating in Delaware is easier and has various advantages. Being a US company it will be lot more easier to get a merchant account and US paypal account which has lower fee and it can be enrolled to BBB. Business is Web Designing( internet based) These are not relevant to my questions. I have thoroughly researched other factors before forming the LLC. Since you are single member LLC, and you are a foreign person for US tax purposes, then the income earned is treated as earned directly by the foreign person. Such income is not subject to U.S. taxation so long as the foreign person did not perform his services or sales activity in the U.S. himself or thru an employee or.
http://www.justanswer.com/tax/80r2a-own-delaware-llc-sole-member-recently.html
CC-MAIN-2015-18
refinedweb
1,590
62.72
At some point while building your project you may wish to allow users to store some extra data. If you use Django's built-in authentication system you are in luck - much of the code has already been written and only a little wiring is needed. If you are not familiar with Django's authentication framework, please refer to Django's documentation on the topic.
For convenience, richtemplates comes with a basic UserProfile class. You may use it directly by adding the following line to your settings:
AUTH_PROFILE_MODULE = 'richtemplates.UserProfile'
This model provides only the most basic fields. More probably, however, you would like to extend this class - simply follow the guidelines described at Subclassing profile class.
If needed (well, most probably it is needed in one's project), UserProfile can easily be subclassed. Let's say we have a main application where we define our user profile model and its app label is core. We need to add an address field to the profile model. The models code could look as follows:
from django.db import models
from richtemplates.models import UserProfile as RichUserProfile

class UserProfile(RichUserProfile):
    address = models.CharField(max_length=128, null=True, blank=True)
Then, in our project's settings file we need to point at this class:
AUTH_PROFILE_MODULE = 'core.UserProfile'
If we create a pluggable application and want to make our user profile class abstract until AUTH_PROFILE_MODULE is pointed at our model, we can add a simple check within the Meta class of our model:
from django.conf import settings
from django.db import models
from richtemplates.models import UserProfile as RichUserProfile

class UserProfile(RichUserProfile):
    address = models.CharField(max_length=128, null=True, blank=True)

    class Meta:
        abstract = getattr(settings, 'AUTH_PROFILE_MODULE', '') != \
            'core.UserProfile'
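With AUTH_PROFILE_MODULE pointing at the subclass, the profile is reachable from any authenticated user object. A small usage sketch, assuming the core.UserProfile model above and the get_profile() helper that Django couples to AUTH_PROFILE_MODULE (the view and template names here are made up):
from django.contrib.auth.decorators import login_required
from django.shortcuts import render

@login_required
def edit_address(request):
    # get_profile() looks up AUTH_PROFILE_MODULE and returns our UserProfile instance
    profile = request.user.get_profile()
    if request.method == 'POST':
        profile.address = request.POST.get('address', profile.address)
        profile.save()
    return render(request, 'core/edit_address.html', {'profile': profile})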
https://pythonhosted.org/django-richtemplates/userprofiles.html
CC-MAIN-2016-44
refinedweb
269
56.05
GraphQL has been gaining wide adoption as a way of building and consuming Web APIs. GraphQL is a specification that defines a type system, query language, and schema language for your Web API, and an execution algorithm for how a GraphQL service (or engine) should validate and execute queries against the GraphQL schema. It’s upon this specification that the tools and libraries for building GraphQL applications are built. In this article, I'll introduce you to some GraphQL concepts with a focus on GraphQL schema, resolver, and the query language. If you’d like to follow along, you need some basic understanding of C# and ASP.NET Core. Why Use GraphQL? GraphQL was developed to make reusing the same API into a flexible and efficient process. GraphQL works for API clients with varying requirements without the server needing to change implementation as new clients get added or without the client needing to change how they use the API when new things get added. It solves many of the inefficiencies that you may experience when working with a REST API. Some of the reasons you should use GraphQL when building APIs are: - GraphQL APIs have a strongly typed schema - No more over- or under-fetching - Analytics on API use and affected data Let’s take a look. A Strongly Typed Schema as a Contract Between the Server and Client The GraphQL schema, which can be written using the GraphQL Schema Definition Language (SDL), clearly defines what operations can be performed by the API and the types available. It’s this schema that the server’s validation engine uses to validate requests from clients to determine if they can be executed. No More Over-or Under-Fetching GraphQL has a declarative way of requesting data using the GraphQL query language syntax. This way, the client can request any shape of data they want, as long as those types and its fields are defined in the schema. This is analogous to REST APIs where the endpoints return predefined and fixed data structures. This declarative way of requesting data solves two commonly encountered problems in RESTful APIs: Over-fetching Under-fetching Over-fetching happens when a client calls an endpoint to request data, and the API returns the data the client needs as well as extra fields that are irrelevant to the client. An example to consider is an endpoint /users/idwhich returns a user’s data. It returns basic information, such as (in this example, an online school’s database will be used) name and department, as well as extra information, such as address, billing information, or other pertinent information, such as courses they’re enrolled in, purchasing history, etc. For some clients or specific pages, this extra information can be irrelevant. A client may only need the name and some identifying information, like social security number or the courses they’re enrolled in, making the extra data such as address and billing information irrelevant. This is where over-fetching happens, affecting performance. It can also consume more of users’ Internet data plan. Under-fetching happens when an API call doesn’t return enough data, forcing the client to make additional calls to the server to retrieve the information it needs. If the API endpoint /users/id only returns data that includes the user’s name and one other bit of identifying data, clients needing all of the user’s information (billing details, address, courses completed, purchasing history, etc.) will have to request each piece of that data with separate API calls. 
This affects performance for these types of clients, especially if they’re on a slow connection. This problem isn’t encountered in GraphQL applications because the client can request exactly the bits of data they need from the server. If the client requirement changes, the server need not change its implementation but rather the client is updated to reflect the new data requirement by adding the extra field(s) it needs when querying the server. You will learn more about this and the declarative query language in GraphQL in the upcoming sections. Analytics on clients’ usage GraphQL uses resolver functions (which I’ll talk about later) to determine the data that the fields and types in the schema returns. Because clients can choose which fields the server should return with the response, it’s possible to track how those fields are used and evolve the API to deprecate fields that are no longer requested by clients. Setting Up the Project You’ll be building a basic GraphQL API that returns data from an in-memory collection. Although GraphQL is independent of the transport layer, you want this API to be accessed over HTTP, so you’ll create an ASP.NET Core project. Create a new ASP.NET Core project and install the dependencies shown in Figure 1. The first package you installed is the GraphQL package for .NET. It provides classes that allow you to define a GraphQL schema and also a GraphQL engine to execute GraphQL queries. The second package provides an ASP.NET Core middleware that exposes the GraphQL API over HTTP. The third package is referred to as the GraphQL Playground, which works in a similar way to Postman for REST APIs. It gives you an editor in the browser where you can write GraphQL queries against your server and see how it responds. It gives you IntelliSense and you can view the GraphQL schema from it. The GraphQL Schema The GraphQL schema is at the center of every GraphQL server. It defines the server's API, allowing clients to know which operations can be performed by the server. The schema is written using the GraphQL schema language (also called schema definition language, SDL). With it, you can define object types and fields to represent data that can be retrieved from the API as well as root types that define the group of operations that the API allows. The root types are the Query type, Mutation type, and Subscription type, which are the three types of operations that you can run on a GraphQL server. The query type is compulsory for any GraphQL schema, and the other two are optional. Although you can define custom types in the schema, the GraphQL specification also defines a set of built-in scalar types. They are Int, Float, Boolean, String, and ID. There are two ways of building GraphQL server applications. There’s the schema-first approach where the GraphQL schema is designed up front. The other approach is the code-first approach where the GraphQL is constructed programmatically. The code-first approach is common when building a GraphQL server using a typed language like C#. You’re going to use the code-first approach here and later look at the generated schema. Let’s get started with the schema. Create a new folder called GraphQL and add a new file Book.cs with the content in the following snippet: public class Book{ public int Id { get; set; } public string Title { get; set; } public int? Pages { get; set; } public int? Chapters { get; set; }} Add another class BookType.cs and paste the content from the next snippet into it. 
using GraphQL.Types;

public class BookType : ObjectGraphType<Book>
{
    public BookType()
    {
        Field(x => x.Id);
        Field(x => x.Title);
        Field(x => x.Pages, nullable: true);
        Field(x => x.Chapters, nullable: true);
    }
}

The code in the last snippet represents a GraphQL object type in the schema. It'll have fields that match the properties in the Book class. You set the Pages and Chapters fields to be nullable in the schema. If not set, by default, the GraphQL .NET library sets them as non-nullable. The application you're building only allows querying for all the books and querying for a book based on its ID. The book type is defined, so go ahead and define the root query type. Add a new file RootQuery.cs in the GraphQL folder, then copy and paste the code from Listing 1 into it. The RootQuery class will be used to generate the root operation query type in the schema. It has two fields, book and books. The books field returns a list of Book objects, and the book field returns a Book type based on the ID passed as an argument to the book query. The type for this argument is defined using the IdGraphType, which translates to the built-in ID scalar type in GraphQL. Every field in a GraphQL type can have zero or more arguments. You'll also notice that you're passing in a function to the Resolve parameter when declaring the fields. This function is called a Resolver function, and every field in GraphQL has a corresponding Resolver function used to determine the data for that field. Remember that I mentioned that GraphQL has an execution algorithm? The implementation of this execution algorithm is what transforms the query from the client into actual results by moving through every field in the schema and executing its Resolver function to determine its result. The books resolver calls the GetBooks() static function to return a list of Book objects. You'll notice that it's returning a list of Book objects and not BookType, which is the type tied to the schema. The GraphQL for .NET library takes care of this conversion for you. The book resolver calls context.GetArgument with id as the name of the argument to retrieve. This argument is then used to filter the list of books and return a matching record.
The last step needed to finish the schema is to create a class that represents the schema and defines the operations allowed by the API. Add a new file GraphSchema.cs with the content in the following snippet:

using GraphQL;
using GraphQL.Types;

public class GraphSchema : Schema
{
    public GraphSchema(IDependencyResolver resolver) : base(resolver)
    {
        Query = resolver.Resolve<RootQuery>();
    }
}

In that bit of code, you created the schema that has the Query property mapped to the RootQuery defined in Listing 1. It uses dependency injection to resolve this type. The IDependencyResolver is an abstraction over whichever dependency injection container you use, which, in this case, is the one provided by ASP.NET.
Configuring the GraphQL Middleware
Now that you have the GraphQL schema defined, you need to configure the GraphQL middleware so it can respond to GraphQL queries. You'll do this inside the Startup.cs file. Open that file and add the following using statements:

using GraphQL;
using GraphQL.Server;
using GraphQL.Server.Ui.Playground;

Go to the ConfigureServices method and add the code snippet you see below to it.
services.AddScoped<IDependencyResolver>(
    s => new FuncDependencyResolver(s.GetRequiredService));
services.AddScoped<GraphSchema>();
services.AddGraphQL()
    .AddGraphTypes(ServiceLifetime.Scoped);

The code in that snippet configures the dependency injection container so that when something requests an IDependencyResolver, it gets back a FuncDependencyResolver. In the lambda, you call GetRequiredService to hook it up with the internal dependency injection in ASP.NET. Then you added the GraphQL schema to the dependency injection container and used the services.AddGraphQL extension method to register all of the types that GraphQL.NET uses, and also called AddGraphTypes to scan the assembly and register all graph types, such as the RootQuery and BookType types. Let's move on to the Configure method to add code that sets up the GraphQL server and also the GraphQL playground that is used to test the GraphQL API. Add the code snippet below to the Configure method in Startup.cs.

app.UseGraphQL<GraphSchema>();
app.UseGraphQLPlayground(new GraphQLPlaygroundOptions());

The GraphQL Query Language
So far, you've defined the GraphQL schema and resolvers and have also set up the GraphQL middleware for ASP.NET, which runs the GraphQL server. You now need to start the server and test the API. Start your ASP.NET application by pressing F5 or running the command dotnet run. This opens your browser with the URL pointing to your application. Edit the URL and add /ui/playground at the end of the URL in order to open the GraphQL playground, as you see in Figure 2.
You'll notice that the query is structured similarly to the schema language. The books field is one of the root fields defined in the query type. Inside the curly braces, you have the selection set on the books field. Because this field returns a list of the Book type, you specify the fields of the Book type that you want to retrieve. You omitted the pages field; therefore it isn't returned by the query. You can test the book(id) query to retrieve a book by its ID. Look at Figure 4 and run the query you see there to retrieve a book.
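A request along the lines of the one in that figure, combining the books field with the book field and its id argument, looks roughly like this (the selected fields are an illustrative choice):
{
  books {
    id
    title
  }
  book(id: 3) {
    id
    title
    chapters
  }
}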
https://www.codemag.com/article/1909061
CC-MAIN-2019-47
refinedweb
2,306
72.76
Hi, I am making a demo for my research work on crowd simulation. The people may walk and stop during the animation. The position and direction (velocity) of the crowd have been recorded in a text file, and I have written a Python script that successfully reads the data file at each keyframe. I use TP particles to animate the crowd, and I substitute each particle with a Man model through XPresso Pshape tags, as shown below. When a person's position does not change, the model's motion should stop (stay at the present keyframe) until he moves again. The motion of the model is pre-rendered like a looping movie. The keyframes of the Model are shown below. What should I do to hold the present keyframe? Sorry about my poor question description! My Python script is as follows:
# Boids for Py4D by smart-page.net
import c4d
import math

# particles' params
boids_number = 1000
currentframe = None

# used for reading user data frame by frame
frame_step = boids_number+1
frame_total = 0

def main():
    global tp
    global doc
    global currentframe
    currentframe = doc.GetTime().GetFrame(doc.GetFps())

    # particles born at 0 frame
    if currentframe == 0:
        tp.FreeAllParticles()
        tp.AllocParticles(boids_number)

    # life time for particles
    lt = c4d.BaseTime(1000)

    # user data for particles
    filename = op[c4d.ID_USERDATA, 1]

    # open the user file. First, read a frame of data from the user file.
    # Then, read lines one by one to feed the particles.
    with open(filename, 'r') as fn:
        # read all lines of the user data
        lines = fn.readlines()
        # compute how many frames of data are in the file
        frame_total = int(len(lines) / frame_step)
        # frame = 1
        i=0
        # read a frame of data according to the scene keyframe
        for frame in range(frame_total):
            if frame == currentframe:
                t_lines = lines[frame * frame_step:frame * frame_step + frame_step - 1]
                # parse lines of the read data
                for line in t_lines:
                    if line == t_lines[0]:
                        # filter the first line of each frame in the text file, because it is just flag words
                        print(line)
                    else:
                        # split position (x,y,z) and direction (dx,dy,dz)
                        x, y, z, dx, dy, dz = line.split()
                        pos = c4d.Vector(float(x), float(y), float(z))
                        vol = c4d.Vector(float(dx), float(dy), float(dz))
                        temp=(pos-tp.Position(i)).GetLength()
                        # some code wanted here
                        if temp==0.0:
                            # the motion of the man should stop.
                            # should I edit the keyframe of the walking Man model?
                            # when temp==0, the walking Man should stay at the present keyframe
                            # and not go ahead along with the scene keyframe.
                            pass  # placeholder - nothing here yet
                        # align to velocity direction
                        vel = vol.GetNormalized()
                        side = c4d.Vector(c4d.Vector(0, 1, 0).Cross(vel)).GetNormalized()
                        up = vel.Cross(side)
                        m = c4d.Matrix(c4d.Vector(0), side, up, vel)
                        tp.SetAlignment(i, m)
                        # set position
                        tp.SetPosition(i,pos)
                        # set life time for particle i
                        tp.SetLife(i, lt)
                        i=i+1
    c4d.EventAdd()

if __name__=='__main__':
    main()
My user data format is as follows.
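One way to express the "hold the pose" idea, independent of any particular Cinema 4D feature, is to give every particle its own animation clock that only advances while that particle is actually moving. The sketch below is plain Python, not C4D API code; the names LocalClocks, advance and the EPSILON threshold are invented for the illustration. In the scene you would then drive the Man clip's playback frame from that per-particle value instead of from the document time.

# Per-particle "local clock" bookkeeping: a particle's clip frame only advances
# while the particle moves. Pure Python, so it can be tested outside Cinema 4D.
EPSILON = 1e-4  # assumed threshold for "did not move" (raw == 0.0 checks are fragile)

class LocalClocks:
    def __init__(self, count, clip_length):
        self.frames = [0] * count          # current clip frame per particle
        self.clip_length = clip_length     # length of the walk cycle in frames

    def advance(self, index, distance_moved):
        """Advance particle `index` by one clip frame only if it moved this tick."""
        if distance_moved > EPSILON:
            self.frames[index] = (self.frames[index] + 1) % self.clip_length
        return self.frames[index]          # unchanged value == frozen pose

# Tiny demo with fake per-frame displacements for three particles.
if __name__ == "__main__":
    clocks = LocalClocks(count=3, clip_length=30)
    moves = [1.0, 0.0, 0.5]                # particle 1 stands still this tick
    for i, d in enumerate(moves):
        print(i, clocks.advance(i, d))     # particle 1 keeps its previous frame

In the script above, the temp value that is already computed is exactly the distance_moved argument; while it stays below the threshold the returned frame does not change, which gives the frozen pose.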
https://plugincafe.maxon.net/user/happygrass_cn
CC-MAIN-2022-27
refinedweb
525
68.97
Google Colab: import data from google drive as pandas dataframe Carvia Tech | August 24, 2019 | 3 min read | 73 views In this article, we will learn how to read file from drive in Google Colab: Load data from Google Drive in Jupyter using pydrive Import data as pandas dataframe using read_csv Here, we will be assuming that you are familiar with the Jupyter Notebook. In case, you are not then you can follow this article. Introduction to Google Colab Notebook Google youtube video link to get overview of Google Colaboratory. Introduction to PyDrive PyDrive is a wrapper library of google-api-python-client that simplifies many common Google Drive API tasks. Here, we will be using Pydrive for authenticating and then read data directly from google drive itself. Step 1 : Importing libraries & Google Authentication Import necessary libraries to read data import pandas as pd (1) from pydrive.auth import GoogleAuth (2)) After running cell containing above code, google will prompt you to visit a link. After clicking that link, You will have to choose google account of which it will be accessing google drive. Copy the verification code and paste it into notebook. Press enter after pasting verification code. import logging logging.getLogger('googleapiclient.discovery_cache').setLevel(logging.ERROR) Step 2: Loading data from google drive Now, you need to upload data to google drive and copy the id from there. You can see the id of folder in the link to file like in screenshot. so here id is 1stjtV19iKK1BdrPasHYqDNOk98-MEdsR This id is folder id which we can use to get id of files in this folder and then load those files. Step 3: Get Id of data file and load data in Google Colab file_list = drive.ListFile({'q': "'1stjtV19iKK1BdrPasHYqDNOk98-MEdsR' in parents and trashed=false"}).GetList() for file1 in file_list: print('title: %s, id: %s' % (file1['title'], file1['id'])) After running the above code, you will see all the files in the folder listed down with their IDs. title: Untitled1.ipynb, id: 1j6iOyUA0NGSmwI6EuBX9mXKNIUIMqEnq title: stack-overflow-data.csv, id: 1sCIPWY2yVYh3hwREVLwVlloEH1-WbUT4 title: Untitled0.ipynb, id: 1PwWlRHgIT2iRpaf_cMXAnTFmVmlZ_blr Step 4: Import data as Pandas DataFrame with read_csv Now, we will be getting content of file by using id. You can see that we have copied code from above and used here in drive.CreateFile data_downloaded = drive.CreateFile({'id': '1sCIPWY2yVYh3hwREVLwVlloEH1-WbUT4'}) data_downloaded.GetContentFile('stack-overflow-data.csv') Now, we can access the data with same file name and load it as pandas dataframe. data = pd.read_csv('stack-overflow-data.csv',low_memory=False, lineterminator='\n') Now, you are good to go and work on the data. Thanks for reading this guide. Top articles in this category: - Google Data Scientist interview questions with answers - Top 100 interview questions on Data Science & Machine Learning - Python coding challenges for interviews - Python Flask Interview Questions - Why use feature selection in machine learning - Creating custom Keras callbacks in python - Imbalanced classes in classification problem in deep learning with keras.
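Returning to the notebook itself: if you also need to write results back to the same folder, PyDrive can upload in a similar way. The snippet below is a small extension of the article's code, not part of it; it reuses the authenticated drive object and the folder ID shown above, and the output file name and row selection are just placeholders. The 'parents' metadata follows the common PyDrive pattern for targeting a folder.

# Continues from the article: `drive` is the authenticated GoogleDrive object
# and `data` is the DataFrame loaded with read_csv.
folder_id = '1stjtV19iKK1BdrPasHYqDNOk98-MEdsR'  # same folder ID as above

# Write a local copy first, then push it into the Drive folder.
data.head(100).to_csv('sample_rows.csv', index=False)   # placeholder output

uploaded = drive.CreateFile({
    'title': 'sample_rows.csv',
    'parents': [{'id': folder_id}],
})
uploaded.SetContentFile('sample_rows.csv')
uploaded.Upload()
print('Uploaded file with ID:', uploaded['id'])

Re-running the listing cell from Step 3 should then show the new file alongside the original dataset.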
https://www.javacodemonk.com/google-colab-import-data-from-google-drive-as-pandas-dataframe-079a3609
CC-MAIN-2019-47
refinedweb
492
54.52
I know it's been a while since I wrote part 1. You have probably read lots of resources about ASP.NET MVC by now. Thus for now I will skip the how to do part and concentrate more on why. Also as we go along this series, I will demonstrate some exciting new features that are available to ASP.NET MVC. Please remember wherever in this post I mention MVC, it refers to MVC 4. In this post, we will will limit our discussion to Model. Why model is used, what approach to take while working with models, and validating them. Sounds a bit old and an out of scope topic, but even now I see lots of discussion on how web form developers are planning to migrate to MVC. It's been a hot topic, hence I like to share some insights in this regard. For new developers MVC is a charmer, but for those who are considering to migrate from webform to MVC, the primary concern is familiarity with features that they are used to over the years. Most of webform developers' concerns is the ASP.NET webform framework has a lot to offer in the context of: Now the common question is does ASP.NET MVC fulfill these needs? Does it? Humm, let's look at what ASP.NET MVC has to offer. Even though their architectural approaches are quite different, ASP.NET MVC and Web Forms actually have a lot in common. Bingo, as you can see a Web Forms developer looking to learn ASP.NET MVC already is further ahead than he thinks. Productivity wise, ASP.NET web forms and MVC do not differ much, we can obviously raise the issue of RAD controls. I agree RAD controls speed up development but think how UX designers can play with views without the developer's involvement. Developers do not have to give their effort to implement cool UX utilizing RAD controls. Combining these final development efforts, the difference is very thin between webforms and MVC. Moreover there are a lot of good frameworks like angularJs/backbone js available these days to ease development effort. Productivity is the one of the major concerns for developers who wish to move from web forms to mvc. This is no surprise like Web Forms, ASP.NET MVC is built on top of the ASP.NET platform. So both frameworks rely on .NET languages C# and Visual Basic .NET to interact with the .NET Framework, and can access compiled assemblies developed with any other language that the .NET Framework supports. Even interesting is while developing with MVC you will see the very familiar web.config and Global.asax. If you Google, you will probably get thousands of resources on these comparisons. Now you can ask the obvious question, DB, Model or Code first? What are all these and how are they related to MVC? Fair enough, but to answer that let's step a back for a while to Part 1 which discusses the model part of MVC. Let's refresh our memory: Models are those parts of the MVC application that play with the application's data domain and retrieve/store model state in a database. So model is the heart of your application (from a business perspective) that deals with the single most important element: "data". Model represents your data structure and will contain functions that will help you to do CRUD (Create/Retrieve/Update/Delete) operations. Now depending upon the architecture, application type/scope, risk analysis, we need different approaches to implement the model entity. This is where EF (Entity Framework) comes in play. With EF we have the flexibility to work with models as we want to. Entity Framework provides three ways to define the model of your entities. 
Using the database first workflow, with the model first workflow, and last but not least, code first approach. Code first approach of designing the model is becoming a popular choice due to complete control over the classes being written or implemented. But that doesn't mean DB first or model first are not cool to be used. The trick is to know when to use what, if you can analyze your application scenario/scope/risks and choose the one prefect for your scenario, in my opinion that is cool. Let's dive into our main topics. Consider a scenario where an existing enterprise database tends to have many tables, each with many columns. They are not normalized as you prefer but can't complain as it's an existing database. No one will allow you to break and restructure the DB when there is a business in stack. In this situation the entity model of your application must be compatible to the existing database; you can’t bend the database to suit your preferred object model, in other words your hands are tied. Database First is popular in a scenario like this, also called a “brown field” scenario, with developers who build applications that access existing production databases holding large numbers of tables and columns. When to adopt DB first approach is a different opinion over forums, groups, user meet-ups. Without reinventing the wheel I like to summarize those and here is the list; Legacy systems built upon existing DBs designed by DBAs, developed separately, or if you have an existing DB. Now the question is, is it bad to use DB first? Many experts have their own view, my opinion is despite the non-comforting object model if the application is serving as it is supposed to be, then there is no reason to be unhappy about it. Every approach we are discussing here has its own pros and cons. Though DB first approach was the successor, changed development scenarios demanded Domain Driven Development, and model first brings the essence of DDD but not truly. Model first design came into prevalence where the Database Schema was based upon the Model. Though conceptually database and model first approaches are different, from an implementation perspective there in not much difference. Both end up with generating lots of auto generated code. Just like in DB first, when to adopt this approach has varied answers, and my summarized version is, In the Code First approach, you avoid working with the Visual Model Designer (EDMX) completely. You write your POCO classes first and then create the database from these POCO classes. This is an ideal candidate for Domain-Driven Design (DDD). Code First uses classes and properties to identify tables and columns, respectively.. I am a fan of Code First so I guess my opinion on disadvantages is biased. There is no hard and fast rule on when to use Code First, but in general you can use Code First because, As we discussed in the above three approaches, model objects deal with data and perform business logic on that data. Models are application specific and hence the ASP.NET framework imposes no restrictions on the construction of model objects. But we can impose custom validation on models. The most common and convenient approach to add such validation is Code First because this approach ensures complete control over the model and two popular methods of validation are DataAnnotation and FluentValidation. DataAnnotation and FluentValidation both serve a similar purpose. 
They ensure that the values that have been assigned to the object properties satisfy the business rules that are supposed to be applied. Does that mean Database First or Model First approach does not need any model validation? While using the Database First or Model First, the EDMX file has the same purpose - it contains all the details and mapping definitions of your model. Whatever restriction (or constraint) you define, your database/database model is translated by EDMX mapping definitions, so you do not have to put extra effort to define your own. Fluent API or data annotations and conventions replace the EDMX file only when using Code-First. One of the major focus of mvc is always been testing. Thus this post is incomplete to with out discussing a bit how to conduct unit testing on models. Just a heads up I will try to show positive and negative testing both. Hope to do discuss in details future.For example purpose lets take a very simple class with DataAnnotation, and only do the property set ruls testing. FYI, many experts recommends to test properties that are derived instead of testing each properties. public class User { public string FName { private get; set; } public string LName { private get; set; } [Required] [StringLength(10)] public string Name { get { return string.Format("{0} {1}", this.FName, this.LName); } } } Now, finally for the testing, for the sack of simplicity I am just going to test the Name rule, I have written two tests. one positive and one negative . Positive test: is to make sure name length satisfy allowed length [TestMethod] public void Name_Should_have_valid_length() { User _userToTest = new User( ); _userToTest.LName = "Sh."; //3 character _userToTest.FName = "Iqbal"; //5 character Assert.IsTrue(_userToTest.Name.Length <= 10); } [TestMethod] public void Name_Should_Fail_for_invalid_length() { User _userToTest = new User(); _userToTest.LName = "Shahriar"; //8 character _userToTest.FName = "Iqbal"; //5 character Assert.IsTrue(_userToTest.Name.Length <=.
http://www.codeproject.com/Articles/611156/Why-s-and-How-s-of-ASP-NET-MVC-Part-2?fid=1836865&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Quick&spc=Relaxed&fr=11
CC-MAIN-2014-41
refinedweb
1,529
64.71
Up to [DragonFly] / src / sys / netinet Request diff between arbitrary revisions Keyword substitution: kv Default branch: MAIN>> Remove spl*() calls from netinet, replacing them with critical sections. A slight rearrangement of COMMON_START() in tcp_usrreq.c was necessary to ensure that the inp is loaded after entering the critical section. encap_getarg() was not properly loading the pointer argument associated with the m_tag, returning garbage which would then panic the box. The code path is primarily used by the GIF interface. Crash-Reported-by: Peter Avalos <[email protected]> Reviewed-by: Jeffrey Hsu Cosmetic cleanups. Remove struct ipprotosw. It's identical to protosw, so use the generic version directly. Correct type-o in last commit. oops.. if ipv6 doesnt need oldstyle prototypes maybe its time we took them out of ipv4's code Kernel Police: - Fix Mbuf/Malloc flag misuse. - Remove redundant memset() redifiniton (ng_l2tp.c) Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections. import from FreeBSD RELENG_4 1.1.2.5
http://www.dragonflybsd.org/cvsweb/src/sys/netinet/ip_encap.c?f=h
CC-MAIN-2015-22
refinedweb
179
52.66
C code for a project that reads a dat file and sorts the info Budget $10-30 USD The data you will use is a list of taxonomy information provided by The Society for the Study of Amphibians and Reptiles (SSAR): Species, Subspecies, Type, and Common Name of an amphibian or reptile. You will be provided code that reads and stores this data in an array of structs. Your program will repeatedly ask the user if they would like to do one of the following actions: List the common name of all frogs. List the common name of all lizards. List the common name of all turtles. List the common name of all snakes. List the common name of all crocodilians. List the common name of all salamanders. Enter a targeted mode which allows the user to input a common name and the program then identifies its full Genus, Species, and Subspecies. Or exit the program. To read an entire string, including spaces, from the keyboard use scanf(" %[^\n]s",nameInput); The starter program is below:
#include <stdio.h>
#include <stdlib.h>
//put function headers here
int main(){
    FILE *herp = fopen("[url removed, login to view]", "r");
    if(herp == NULL){
        printf("Cannot find dataset.\nSave in the same place as the program.\n");
        return 1;
    }
    int numRecords;
    fscanf(herp,"%d",&numRecords);
    char **genus = (char **)calloc(numRecords, sizeof(char*));
    char **species = (char **)calloc(numRecords, sizeof(char*));
    char **sub = (char **)calloc(numRecords, sizeof(char*));
    char **common = (char **)calloc(numRecords, sizeof(char*));
    char **type = (char **)calloc(numRecords, sizeof(char*));
    int i;
    for(i = 0; i < numRecords; i++){
        genus[i] = (char *) calloc(25, sizeof(char));
        species[i] = (char *) calloc(25, sizeof(char));
        sub[i] = (char *) calloc(25, sizeof(char));
        type[i] = (char *) calloc(25, sizeof(char));
        common[i] = (char *) calloc(100, sizeof(char));
    }
    //read the titles
    char ignore[40];
    fscanf(herp,"%s%s%s%s%s",ignore,ignore,ignore,ignore,ignore);
    //read the data
    for(i = 0; i < numRecords; i++)
        fscanf(herp, "%s%s%s%s %[^\n]s", genus[i],species[i],sub[i],type[i],common[i]);
    fclose(herp);
    //add your code here
    //for debugging purposes
    //for(i = 0; i < numRecords; i++)
    //    printf("%s,%s,%s,%s,%s\n", genus[i],species[i],sub[i],type[i],common[i]);
    //do not remove the following lines of code
    for(i = 0; i < numRecords; i++){
        free(genus[i]);
        free(species[i]);
        free(sub[i]);
        free(type[i]);
        free(common[i]);
    }
    free(genus);
    free(species);
    free(sub);
    free(type);
    free(common);
    return 0;
}
//add functions here
Awarded to: 3 freelancers bid an average of $28 for this job I work in C for my daily job, so it would be easy for me. Relevant Skills and Experience: C, C++, Java, embedded C
https://www.dk.freelancer.com/projects/c-programming/code-for-project-that-read/
CC-MAIN-2018-43
refinedweb
454
55.88
22 September 2011 11:08 [Source: ICIS news] SINGAPORE (ICIS)--January LLDPE, the most actively traded contract on the Dalian Commodity Exchange (DCE), closed at yuan (CNY) 9,645/tonne ($1,509/tonne) on Thursday, down CNY410/tonne, or 4.1%, from Wednesday's settlement price. About 3.12m tonnes of LLDPE, or 1.25m contracts, were traded for January delivery, according to DCE data. NYMEX WTI crude futures were hovering at $83/bbl (€61/bbl) levels late Thursday afternoon. "The physical market sentiment has become more bearish as the futures fell beyond the psychological line of the CNY10,000/tonne mark, which is the second time within this week," a Zhejiang-based futures broker said. "The fear of the eurozone crisis is intensifying, and also, many people are agreeing that the US is already in [a] double-dip recession. The gloomy economic climate is depressing the plastics industry," he added. ($1 = €0.73)
http://www.icis.com/Articles/2011/09/22/9494226/china-lldpe-futures-fall-4-as-crude-slumps.html
CC-MAIN-2014-15
refinedweb
156
54.12
Essentials All Articles What is LAMP? Linux Commands ONLamp Subjects Linux Apache MySQL Perl PHP Python BSD ONLamp Topics App Development Database Programming Sys Admin Libtap is a library for testing C code. It implements the Test Anything Protocol, which is emerging from Perl's established test framework. One of the ideas behind Extreme Programming (XP) is to "design for today, code for tomorrow." Rather than making your design cover all eventualities, you should write code that is simple to change should it become necessary. Having a good regression test suite is a key part of this strategy. It lets you make modifications that change large parts of the internals with the confidence that you have not broken your API. A good test suite can also be a way to document how you intend people to use your software. Having worked where people thought that writing tests was a waste of time, I can't tell you how much time I wasted trying to fix bugs that had emerged as a result of bugs being fixed or new features added. If we'd had a proper regression test suite, we could have found those immediately, and I would have lots of extra time to write new features. Taking the time to produce good tests (and actually running them) actually ends up saving a lot of time, not wasting it. Perl distributions normally ship with a test suite written using Test::Simple, Test::More, or the older (and now best avoided) Test module. These modules contain functions to produce plain-text output according to the Test Anything Protocol (TAP) based on the success or failure of the tests. The output from a TAP test program might look something like this: Test::Simple Test::More Test 1..4 ok 1 - the WHAM is overheating ok 2 - the overheating is detected not ok 3 - the WHAM is cooled not ok 4 - Eddie is saved by the skin of his teeth Related Reading C in a Nutshell By Peter Prinz, Tony Crawford The 1..4 line indicates that the file expects to run four tests. This can help you detect a situation where your test script dies before it has run all the intended tests. The remaining lines consist of a test success flag, ok or not ok, and a test number, followed by the test's "name" or short description. Obviously, the second and third lines indicate a successful test, while the last two indicate test failures. 1..4 ok not ok Perl modules usually invoke the tests either by running the prove program or by invoking make test or ./Build test (depending on whether you're using ExtUtils::MakeMaker or Module::Build). All three approaches use the Test::Harness module to analyze the output from TAP tests. If all else fails, you can also run the tests directly and inspect the output manually. prove make test ./Build test ExtUtils::MakeMaker Module::Build Test::Harness If Test::Harness is given a list of tests programs to run, it will run each one individually and summarize the result. Tests can run in quiet and verbose modes. In the quiet mode, the harness prints only the name of the test script (or scripts) and a result summary. Verbose mode prints the test "name" for each individual test. Besides Perl, helper libraries for producing TAP output are available for many languages including C, Javascript, and PHP (see the Links & Resources section). Suppose that you want to write tests for the module Foo, which provides the mul(), mul_str(), and answer() functions. The first two perform multiplication of numbers and strings, while the third provides the answer to life, the universe, and everything. 
Here is an extremely simple Perl test script for this module: Foo mul() mul_str() answer() use Test::More tests => 3; use Foo; ok(mul(2,3) == 6, '2 x 3 == 6'); is(mul_str('two', 'three'), 'six', 'expected: six'); ok(answer() == 42, 'got the answer to everything'); The tests => 3 part tells Test::More how many tests it intends to run (referred to as planning). Doing this allows the framework to detect whether you exit the test script without actually running all the tests. It is possible to write test scripts without planning, but many people consider this a bad habit. tests => 3 Hey! Isn't this article supposed to be about testing C? It is. Libtap is a C implementation of the Test Anything Protocol. It is to C what Test::More is to Perl, though using it doesn't tie you into using Perl. However, for convenience you probably want to use the prove program to interpret the output of your tests. Libtap implements a convenient way for your C and C++ programs to speak the TAP protocol. This allows you to easily declare how many tests you intend to run, skip tests (some apply only on specific operating systems, for example), and mark tests for unimplemented features as TODO. It also provides the convenient exit_status() function for indicating whether any of the tests failed through the program's return code. exit_status() How would you would write the test for the Foo module in C, using libtap? The #include <foo.h> line is analogous to the use Foo; of the Perl version. However, as this is C, you also need to link with the libfoo library (assuming this implements the functions declared in foo.h). #include <foo.h> use Foo; libfoo For this test, I will show the full source of the test program, including any #include lines; I will show only shorter fragments below. Notice again the difference in the number passed to the plan_tests() function and the number of actual tests that actually run: #include plan_tests() #include <tap.h> #include <string.h> #include <foo.h> int main(void) { plan_tests(3); ok1(mul(2, 3) == 6); ok(!strcmp(mul_str("two", "three"), "six"), "expected: 6"); ok(answer() == 42, "got the answer to everything"); return exit_status(); } The exit_status() function returns 0 if the correct number of tests ran and if they all succeeded; it returns nonzero otherwise. In the Perl version the test framework makes magic happen behind the scenes so that you don't have to twiddle the exit status by hand. One notable difference between the Perl version and the C version is the ok1() macro, a wrapper around the ok() call. Instead of having to call ok() with a test condition as the first parameter and diagnostic as the second (and any subsequent) parameter, this macro stringifies its argument and uses that for the diagnostic message. This can be very convenient for simple tests. ok1() ok() Both the Perl and C tests above, when run, print something along the lines of: 1..3 not ok 1 - mul(2, 3) == 6 # Failed test (basic.c:main() at line 12) ok 2 - expected: 6 ok 3 - got the answer to everything The line starting with # is a diagnostic message; libtap prints these occasionally to help you find which test is failing. In this case, it identifies the line in the test file that contained the failing test. # Pages: 1, 2, 3 Next Page Sponsored by: © 2017, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.linuxdevcenter.com/pub/a/onlamp/2006/01/19/libtap.html
CC-MAIN-2017-04
refinedweb
1,218
61.16
About Spamhaus | Press Office | FAQs Tweet Follow @spamhaus Spamhaus Botnet Threat Update: Q1-2019 2019-04-25 17:00:00 UTC | by Spamhaus Malware Labs | Category: malware , cybercrime , botnets. Meanwhile ‘.com’ & ‘.UK’ are leading the way when it comes to the top-level domains (TLDs) that are associated with botnet C&Cs. However, there’s no change when it comes to most abused hosting provider: Cloudflare. Spotlight This quarter we’re putting the spotlight on a single bulletproof hosting outfit. Since January, we have seen an upswing in the number of fraudulent domain name registrations in the ccTLD spaces ‘.UG’ (Uganda) and ‘.NG’ (Nigeria). While both ccTLDs have had an increase in the number of fraudulent domain name registrations, ‘.UG’ has gone through the roof. In February 2019, 35% of all domain names within ‘.UG’ that Spamhaus Malware Labs observed were registered for the sole purpose of hosting a botnet controller (C&C). Who is responsible for this massive increase of fraudulent domain name registrations in the African domain namespace? During our investigation, we discovered that a single bulletproof hosting outfit is connected to these domain registrations which is selling its services on underground sites and the dark web. The setup is simple: They register a ‘.UG’ domain name for their customer with the operator ‘i3c.co.ug’ and use a Chinese based DNS provider ‘DNSPod’ (Tencent). From a cybercriminal’s perspective, this has a big advantage: Both i3c.co.ug and DNSPod are exceptionally slow to investigate abuse reports, that’s if they are investigated at all. This makes a cybercriminal’s botnet C&C infrastructure almost 100% bulletproof to takedown requests. Spamhaus is trying to work together with both i3c.co.uh and DNSPod to resolve this issue. While communication between these operators can be challenging these efforts are starting to pay off, with the percentage of fraudulent domain registrations within ccTLD ‘.UG’ reducing from 35% to 29%. Looking for the path of least resistance Once Spamhaus identifies a botnet C&C, in addition to listing the entity across our range of services, we typically send takedown requests to the relevant domain registry, registrar and network owner. Needless to say, this has a substantial negative impact on a cybercriminal’s operations. Therefore, it is no surprise that spammers, phishers and botnet operators alike, are constantly looking for new ways to increase the uptime of their botnet C&C infrastructure to ensure that their operation is running smoothly. Number of botnet C&Cs observed in 2019 When we look at the number of newly detected botnet Command & Controllers (C&C), as a result of fraudulent sign-ups, it is evident that the upward trend detected in 2018 is continuing into 2019. In 2018 the number of botnet C&Cs identified from fraudulent sign-ups lifted 176% from 276 per month in January to 762 per month in December. The monthly average across 2018 was 530 botnet controller listings (BCL) per month. In this quarter we have observed another significant step-up in numbers across the first three months of this year. The number of newly detected botnet C&Cs reached 1,281 in March 2019, an additional 519 botnet C&Cs compared to December 2018’s figures. Meanwhile, the monthly average in 2019 has increased by 110% to 1,113 per month. What is a ‘fraudulent sign-up’? 
This is where a miscreant is using a fake, or stolen identity, to sign-up for a service, usually a VPS or a dedicated server, for the sole purpose of using it for hosting a botnet C&C. Geolocation of botnet C&Cs in Q1 2019 There has been no change in the location of botnet C&C traffic. The number one geolocation for botnet C&Cs remains the United States, followed by Russia and the Netherlands: Malware associated with botnet C&Cs, Q1 2019 We have identified substantial changes in the malware that is associated with botnet C&Cs across the first quarter of 2019. Most pertinent is the increase in the popularity of crimeware kits, which enable individuals with no previous coding experience to create, customize and distribute malware. AZORult: Throughout 2018 a total of 915 botnet C&Cs associated with AZORult were identified and blocked by our researchers. This averages out at 76 botnet C&C per month. In the first three months of this year, 1,155 botnet C&Cs have been identified. This takes 2019’s monthly average to 385 botnet C&Cs, which is a whopping 407% increase. Crimeware kits: Lokibot (#1) and AZORult (#2) botnet C&Cs account for 64% of all botnet traffic in Q1. There is a growth in popularity of crimeware kits, indicating that cybercrime is becoming increasingly ‘commoditized’. This commoditization wouldn’t occur without the right kind of demand and platform. Does this point to the cybercrime market growing in sophistication, driven by the increasing opportunities the dark web presents? JBifrost: Numbers associated with this Remote Access Tool (RAT) in 2018 proliferated, seeing it take the #2 spot, however in Q1 2019 it has moved down 4 places to #6. AZORult AZORult is a credential stealer ‘crimeware kit’ sold on underground hacker sites. It not only attempts to harvest and exfiltrate credentials from various applications such as web browsers but additionally tries to steal address books from email clients. JBifrost JBifrost, also known as Adwind, is a Remote Access Tool (RAT) based on Java. Java is a cross-platform environment which allows JBifrost to not only run on computers running the Windows operating system but also macOS and Android. Most abused top-level domains, Q1 2019 In addition to the issues that featured in our ‘Spotlight’, there has been lots of change for the top-level domains (TLDs) that are being used to host botnet C&Cs. Perhaps most interesting is that the top 2 entries are for well-known TLDs: ‘.com’ & ‘.uk’, particularly ‘.uk’ which has also seen a significant amount of shoeshow spamming activity in the past few months, as featured here. How cybercriminals protect against takedowns To make botnet C&Cs more resilient against takedowns cybercriminals usually register a dedicated domain name for hosting the botnet C&C. When the hosting provider takes down the botnet C&C, the botnet operator can easily change the DNS A record to a different server. Most abused domain registrars, Q1 2019 Namecheap has been knocked off its top spot by Register.com, which has moved up the ranks to #1 from #10 this quarter. Namecheap accounted for 65% of all domain registrations for botnet C&Cs in 2018 and now only accounts for 15%. We hope this is due to instigating a more vigorous vetting process, and not just a result of Namecheap not running any promotions in the past quarter. Register.com have 22% of the total domains used for botnet C&Cs registered through them in Q1 2019, compared to 0.55% across 2018. 
Poor processes leave operators open to abuse To register a domain name, a botnet operator must choose a domain registrar. Domain registrars play a crucial role in fighting abuse in the domain landscape: They not only vet the domain registrant (customer) but also have the ability to suspend or delete domain names. Unfortunately, many domain registrars do not have a robust customer vetting process, leaving their service open to abuse. ISPs hosting botnet C&Cs, Q1 2019 What hasn’t changed in Q1 2019 compared to 2018 is the preferred place that miscreants choose to host their botnet C&Cs: The US-based CDN provider Cloudflare. Cloudflare is followed some way behind by three Russian based hosting providers called Stajazk, Timeweb and Reg.ru. Cloudflare While Cloudflare does not directly host any content, it provides services to botnet operators, masking the actual location of the botnet controller and protecting it from DDoS attacks. We look forward to seeing you in July when we’ll be providing you with Quarter 2’s update. Spamhaus Information Press Office Spamhaus News Index Spamhaus in the media About Spamhaus Spamhaus Official Statements Article Information Permanent link to this news article: Spamhaus Botnet Threat Update: Q1-2019 Spamhaus News Quotes Permission to quote from or reproduce Spamhaus News articles is granted automatically providing you state the source as Spamhaus and link to the news record. Legal | Privacy
https://www.spamhaus.org/news/article/784/spamhaus-botnet-threat-update-q1-2019
CC-MAIN-2021-31
refinedweb
1,388
51.78
Connects the holes of pwh with its outer boundary. This is done by locating the topmost vertex in each hole in the polygon with holes pwh, and connecting it by a vertical segment to the polygon feature located directly above it (a vertex or an edge of the outer boundary, or of another hole). The function produces an output sequence of points, which corresponds to the traversal of the vertices of the input polygon; this traversal starts from the outer boundary and moves to the holes using the auxiliary vertical segments that were added to connect the polygon with its holes. The value-type of oi is Kernel::Point_2. Precondition: pwh is bounded (namely, it has a valid outer boundary). #include <CGAL/connect_holes.h>
https://doc.cgal.org/4.7/Boolean_set_operations_2/group__boolean__connect__holes.html
CC-MAIN-2019-30
refinedweb
122
55.98
There. you need to add AJAX request that will run once on document-ready to get rating from PHP page and update the rating $(document).ready( function(){ $('#productvote-20').load('Rating.php?productId=1'); }) And in that PHP file, you need to output this.. but don't cache that page <input type='checkbox' name='vote' value='1' /> <input type='checkbox' name='vote' value='2' /> <input type='checkbox' name='vote' value='3' /> <input type='checkbox' name='vote' value='4' /> <input type='checkbox' name='vote' value='5' /> After adding one control you have a form with runat="server" tag: <form id="Form1" runat="server"> <asp:Menu You already mentioned you have 16 usercontrol in one page. So you have 16 forms with runat="server" tag. Which is not allowed. Solution: As Chris Lively suggested, strip out the the form tag from your wsercontrols. Add just one form tag in the page. You should be ok. Here's how should all your controls should look like: <asp:Menu <Items> <asp:MenuItem <asp:MenuItem <asp:MenuItem <asp:MenuItem <asp:MenuItem </Ite WebForms are not ideal for asynchronous operations. Add SignalR to your project and use a Hub to push status data back to your page to update the current state of the process you are running Asynchronously. An example of a technique to perform this type of asynchronous notification is covered in my blog post titled "A Guide to using ASP.Net SignalR with RadNotifications" Client side javascript can know about the query string on a URL. In javascript, it can be accessed through location.search. You could have your javscript show and hide different sections of the page based on the information in the query string. Yes, there is. It's called dependency injection : Create this interface public interface ISocket{ void close(): OutputStream getOutputStream(); } And make sure your socket class herits from it. Your constructor becomes private void sendToRemoteServer(){ ISocket client; When you want to test locally, just pass as argument any implementation of ISocket that do whatever you want with the close method and the getOutputStream methods. please use insert or replase raw query instead this look this. SQLite "INSERT OR REPLACE INTO" vs. "UPDATE ... WHERE" String query = INSERT OR REPLACE INTO DATABASE_TABLE_PROJ (CATEGORY_COLUMN_ID,CATEGORY_COLUMN_TITLE,CATEGORY_COLUMN_CONTENT,CATEGORY_COLUMN_COUNT) VALUES ('1', 'Muhammad','xyz','2'); db.execSQL(query); Here is library which provides useful file uploading functions you are looking for from your android device. See this link also (it may be helpful to your requirements). Your understanding is correct. PHP needs to keep running, and in PHP you will need a loop, and you'll quickly run out of free Apache threads. If you need to handle lots of connections you need to use event-based server like Node.js or Tornado that can handle lots of open connections. If you'd rather use PHP, then a partial solution is to close connection after few seconds. The browser will reconnect, so you'll get a hybrid of polling and SSE. In PHP you can check sys_getloadavg() to decide whether you can keep connection open or you're running short on free processes. have you looked at SignalR. Its a server Push Framework which uses HTML5 WWeb Socket beneath. I have created chat server here using SignalR. Hard to tell but by the sounds of it you have a submit button and the onclick needs to update this label, something like this should work. 
I'm using viewstate but session would work here, as would a redirect to the same page with a querystring parameter. Not sure if I've understood your question correctly though. protected void Page_Load(object sender, EventArgs e) { if(!IsPostBack) { if(Viewstate["updateLabel"] == "true") { lblYourLabel.Text = "I'm updated now!"; Viewstate["updateLabel"] = ""; } } } protected void btnYourButton_Click(Object sender, Eventargs e) { ViewState["updateLabel"] = "true"; //Do other stuff here if you want } The Session would be the best place to store small data that you need across PostBacks but can't sent to the client. You could put a Dictionary indexed by row identifier into session and use that to fetch your secret key. Note that sessions are maintained across pages, so unless you are ok with the information being in memory till the user signs out(gets logged out eventually) you will have to delete it when done. By the extension I guess you would use Classic ASP. Then something like this should work: <!--#include file="header.asp"--> You can put this in each file you want to have a header. Of couse, you should create that "header.asp" page first ;) For highligthing the tab of the page you're in, there're several methods. IMHO, I suggest a clientside script to do that. JS or jQuery of course. You could check the file name of the URL you are in and give the proper class to the tab so it will be highligthed. Example ( jQuery needed ): var currentPage = window.location.pathname.substring(url.lastIndexOf('/')+1); if(currentPage == 'default.asp') $('li.homepage a').addClass('current'); This simple code retrives the file name and, by it, add a class to the corresponding element in your na Try this: <asp:UpdatePanel <ContentTemplate> <div id="MyElement" runat="server"> </div> </ContentTemplate> </asp:UpdatePanel> in code behind protected void Page_Load(object sender, EventArgs e) { if(IsPostBack) { var c = MyElement; } } in javascript var MyElement=document.getElementById('<%= MyElement.ClientID'); Post-update hooks doesn't exist in Subversion, but they aren't needed in your case: There aren't reasons to require update WC in the case of missing intersection between files in transaction and changed by earlier commits, not existing in WC If such intersection exist, commit will be blocked automatically and developer must update and merge foreign changes before own commit As already stated in the comments, you may extract only the logic pieces into the external file, while preserving a small inline script for passing the variables. As a very simplified example, you may extract from: <script> alert(${helloWorld}); </script> the following part into a separate file: function sayHello(message) { alert(message); } And keep the variables in an inline script: <script> var helloWorld = ${helloWorld}; sayHello(helloWorld); </script> This also encourages some separation of concerns which is a good design recommendation anyway. I am not diving into any other good practices such as the module pattern in this place. The example code is shown below: HTML CODE: <!-- Hidden Field --> <asp:HiddenField <asp:ModalPopupExtender </asp:ModalPopupExtender> <!-- Panel --> <asp:Panel <center> <h2 class="style2"> Information</h2> <p> <h3> <asp:Label </asp:Label></h3> </p> <!-- Label in the Panel to turn off the popup --> <asp Basically, you are doing it right. The watcher watches the watched object, so the best thing you can do is to tell it to close one eye for next watch. 
You can use $timeout to set a temporary flag that gets cleaned up ASAP _skipWatch = false rollBackLocally = (newVal, oldVal) -> _skipWatch = true angular.copy oldVal, newVal # schedule flag reset just at the end of the current $digest cycle $timeout (-> _skipWatch = false), 0 $scope.$watch 'obj.value', (newVal, oldVal) -> return if _skipWatch MyService.doSomething().then ((response) -> $scope.results.push(response)), (-> rollBackLocally newVal, oldVal) Are you using GET or POST to submit your form data? If you're using POST, you can check the request method: if($_SERVER['REQUEST_METHOD'] == 'POST') { // Validate form data } else { // Display form } If you're using GET you should check if any of the data you need is set. For example: if(isset($_GET["arg1"]) || isset($_GET["arg2"]) || isset($_GET["arg3"]) || ...) { // Validate data } else { // Display form } Inorder to pass the user Geolocation from client side to server side as soon as the user allows for his Geolocation to be used, is by Assigning user position variable (pos) to a HTML element (location) Then, auto submitting a HTML form that contains a POST method. Both the above codes have to be written within Google Geolocation API, as shown below in the excerpt. var infowindow = new google.maps.InfoWindow({ map: map, position: pos, content: 'Location found using HTML5.' }); // the below line has been inserted to assign a JS variable to HTML input field called 'geolocation' document.getElementById('location').value = pos map.setCenter(pos); // the below line has been inserted to autosubmit the form. document.getElementById("geolocation").submit(); Below is th <%= %> is an equivalent of Response.Write which is not intended to work with AJAX (which what UpdatePanel is). If you need to display something in an UpdatePanel - assign it to a property of a control (it could be some standard ASP.NET Control like Label and its Text property or even a SPAN with runat="server" and its innerHTML property) You do it the other way around. You write HTML that works with server side code, then layer JavaScript over the top that stops the HTTP request that would trigger the server side code. The specifics depend on exactly what you want to do, but will usually involve calling the Event.preventDefault() method having bound an event listener. For example, given a form: function calc(evt) { var form = this; alert(+this.elements.first.value + +this.elements.second.value); evt.preventDefault() } var form = document.querySelector('form'); form.addEventListener('submit', calc); See also: Progressive Enhancement and Unobtrusive JavaScript Try this: btnSubmitPhaseBackward.Attributes.Add("disabled", "disabled"); UPDATE: So it turns out that the disabled attribute does not really do what you think it should, to truly disable an HtmlAnchor you have to remove the href attribute, like this: btnSubmitPhaseBackward.Attributes.Remove("href"); To re-enable the HtmlAnchor you would need to add the href attribute back, like this: btnSubmitPhaseBackward.Attributes.Add("href", ""); When I was testing my server-side I didn't find any other way rather than connecting via wifi, this way it was working just fine (in your code you need to use ipv4, not localhost, just as reminder). I'm not good with network stuff, but I don't think it's possible using usb to create some kind of LAN. Have u tried from emulator? How to connect to my web server from Android Emulator in Eclipse Please find the link below to integrate Sandbox Paypal Integration You could use a two button approach. 
Use the delete button you have now to call the JavaScript. <asp:Button ID="ButtonDelete" runat="server" Text="Delete" OnClientClick='<%# Eval("DataBoundGuidField","javascript:RunJavaScript({0});") %>' /> A secondary button will actually post the page back. <asp:Button Use Javascript to set your hidden filed and click a hidden button to do server side processing function RunJavaScript(id){ $('#<%=TargetField.ClientID %>').value = id; //Call the hidden delete button for server side $('#<%=HdnDeleteButton.ClientID %>').click(); } You can use ASPxGridView.InitNewRow to catch when the new insert is initiated. You can then use ASPxGridView.CancelRowEditing to catch a cancellation. You have to get the attribute from the ServletContext instead of from the HttpServletRequest Try this inside your RPC method: public class MyServiceImpl extends RemoteServiceServlet implements MyService { @Override public void MyMethod() { this.getThreadLocalRequest() .getSession().getServletContext() .getAttribute("config"); } } In cases like these, I like to take advantage of HTML5 data attribute () This allows you to embed a target for your checkbox in its markup. The following hasn't been tested, and I took a guess about your checkbox markup, hopefully it gives you the idea. function loadImage(array, index) { for (i = 0; i < array.length; i++) { var imageId = 'img_"+i+"'; $("#parentDiv").append("<span style='padding-left: 10px; margin-top: 5px;' id='span_" + i + "'></span>"); $("#span_" + i + "").append("<img style='width: 100px; height: 90px;' id='" + imgeId + "' src='" + array[index] + "' />"); $("#parentDiv").append('<input type="che You are missing an implicit conversion from the java.util.List returned and the scala List that you want. Try adding the following import: import scala.collection.JavaConversions._ And tweak this line: val notifications:List[PushedNotification] = javapns.Push.payload(payload, keystoreFile, keystorePassword, false, devices) To this: val notifications:List[PushedNotification] = javapns.Push.payload(payload, keystoreFile, keystorePassword, false, devices).toList Also, it looks like you will be pushing two times to each device here as the calls to payload and alert both push notifications to the devices. If you only really wanted to send the complex payload that you build, then your code should probably be: val results = javapns.Push.payload(payload, keystoreFile, keystorePassword, A quicksearch learnes us the following: You could be using this: Step 1 . download the phpqrcode library and put it in a folder on your server Step 2. Use the following script: if (!isset($_GET['text'])) die('No text given'); include('../lib/full/qrlib.php'); include('config.php'); QRcode::png($_GET['text']); Step 3. Save it and enjoy it! It may be that the server doesn't respect the iDisplayLength value. In this case you must also update the server side. It can be done using XMLHttpRequest, as long as both HTML files will be stored in the same location (to avoid CORS constraints). Have a look at this MDN article for examples on how to use XMLHttpRequest. The high-level view would be: // live webpage var xhr = new XMLHttpRequest(); xhr.onload = function () { /* parse xhr.repsonseText */ }; xhr.onerror = function () { /* ... */ }; xhr.open('get', 'server-updated-page.html', true); xhr.send(); Your instinct is correct, it's probably not the way to do it. Session data should only be ephemeral information that is not too troublesome to lose and recreate. 
For example, the user will just have to login again to restore it. Configuration data or anything else that's necessary on the server and that must survive a logout is not part of the session and should be stored in a DB. Now, if you really need to easily keep this information client-side and it's not too much of a problem if it's lost, then use a session cookie for logged in/out state and a permanent cookie with a long lifespan for the rest of the configuration information. If the information it too much size-wise, then the only option I can think of is to store the data other than the logged in/out state in a DB. I think you have several options here. Get them to use node.js so they can use their JavaScript skills. Get them to use Ruby on Rails, they will probably appreciate using a new technology, and you get a great framework to boot. But probably more importantly, you should ask them what they think. You might be surprised. I stumbled on how to solve this problem by trial and error. Instead of var IMEI = $find("<%=cbIMEI.ClientID %>"); I used var IMEIID = '<%= cbIMEI.ClientID %>'; var IMEI = document.getElementById(IMEIID); Doing this, IMEI.disabled = false; works perfectly. I don't at all understand why, but there it is. In order for your dynamically created controls to interact with viewstate, events and other stages in the page life-cycle, they need to be created in the Init event. Creating these controls later in the life-cycle will exclude them from taking part in post value binding, viewstate binding, etc. Also note that you must recreate the controls on EVERY postback. protected void Page_Init(object sender, EventArgs e) { // Do any dynamic control re/creation here. }
http://www.w3hello.com/questions/-Update-server-side-asp-variable-in-page-behind-code-
CC-MAIN-2018-17
refinedweb
2,510
56.25
Which of the following operators should be preferred to overload as a global function rather than a member method? (A) Postfix ++ (B) Comparison Operator (C) Insertion Operator << (D) Prefix++ Answer: (C) Explanation:. Following is an example. #include <iostream> using namespace std; class Complex { private: int real; int imag; public: Complex(int r = 0, int i =0) { real = r; imag = i; } friend ostream & operator << (ostream &out, const Complex &c); }; ostream & operator << (ostream &out, const Complex &c) { out << c.real; out << "+i" << c.imag; return out; } int main() { Complex c1(10, 15); cout << c1; return 0; } Attention reader! Don’t stop learning now. Get hold of all the important C++ Foundation and STL concepts with the C++ Foundation and STL courses at a student-friendly price and become industry ready.
https://www.geeksforgeeks.org/c-operator-overloading-question-4/?ref=rp
CC-MAIN-2021-10
refinedweb
128
55.95
Intro This is seminar #4, of a series of free seminars given this semester at NKU on Papervision3D and CS4. It demonstrates how to create a molfile molecule viewer in CS4 (Gumbo). Click the image below to see a demo. YouTube Discussion This is now my second version of a 3D molecule viewer. The first version was written in Papervision3D and was about 1400 lines of code and required that you rewrite the molfile into xml, and didn’t show bonds. This version is in CS4 (Gumbo or Flex 4) and is only about 350 lines of code, has bonds (single, double, triple, aromatic) and parses the molfile directly. It took me a day to write this application(from 4 to 11), and that was with breaks: playing with the kids, gym, Shakespeare club, an ice cream float party, and several episodes of Hawaii-5-0 and Enterprise. Yeah, this is the life! Molfile Parser Parsing the Molefile was much easier than I thought it was going to be. Here is how you do it: - You just bring the molfile into Flex 4 using http services command (just as you would an XML file but with no e4x format) <mx:HTTPService id=”molService” url=”assets/0420.mol” result=”initMoleFile(event)” /> - Then you convert it to a string, take out the special characters, split the contents, stuff it into an array, and remove all empty array elements. // Convert to string myString=molService.lastResult.toString(); //Remove special characters myString = myString.split(“\n”).join(” “); myString = myString.split(“\r”).join(” “); //Stuff into an Array myArray = myString.split(/ /); //Filter out bland array elements myArray = myArray.filter(filterMe); function filterMe(element:String, index:int, array:Array):Boolean{ return (element != “” ); } - After you’ve stuffed your molefile into an array, you need to know where to go in that array so you can start counting off the data elements.Where do you go? Go to the version number of the molfile. So run a forEach and grab it’s index value. myArray.forEach(myStart); function myStart(element:String, index:int, array:Array):void{ if(element == “V2000″||element == “V3000″){ myStartVert=index;} The index value of the version number is the heart of your molfile parser. Above we allow for versions V2000 or V3000. This index value will take you to any piece of data that you need in order to display your molecule. From this point on, its just a big counting game, everything is referenced to your version index value. Atom File To display your model you need an atom shape that can change size, color, and fill based upon the atom type. The atom color, type, and radius were previously handled in an earlier version of the molecule viewer, created in Papervision3D (video, source). And the atom shape and fill were handled in a previous post in this blog on 3D Lines in CS4 using drawPath. By combining these two programs you can come up with a suitable atom. To do this: - Place the atom switch case from the PV3D Viewer into the Ball class created in the 3D Lines post, then change the ball class to Atom along with its constructor and class names. - In the switch case change the myColor value from grabbing a jpg to equaling a color value for all possible atoms: case “H”: //myColor = “colors/white.jpg”; myColor = 0xffffff; radius = 25; - In the Constructor function change it to receive only one parameter type, which is the letter of your atom: H, C, O… public function init(type:String):void{… - Finally, change the fill method to gradient fill so your atoms looks cool. 
graphics.beginGradientFill(myType, [myColor, 0x000022], [1,1], [1, 40]); Now that your atom subclass is completed, it's time to start building molecules. Creating Molecules and Bonds There are two parts to creating molecules: placing atoms in 3D, and creating their bonds. The first part of a molfile gives you the atom position values, and the second part gives the bond relationships (which atoms are connected to which) and types. Placing Atoms Essentially all you have to do is replace the random placement of circles from the post on drawPaths with the atomic positions found in your molfile. atom.xpos = myArray[myStartVert+1 +j*16] * myScale - offsetx; atom.ypos = myArray[myStartVert+2 +j*16] * myScale - offsety; atom.zpos = myArray[myStartVert+3 +j*16] * myScale - offsetz; myHolder.addChild(atom); You get to those positions by counting 1 position forward for x, 2 for y, and 3 for z, from the version number index, as shown above. The offset is just an average of the molecular positions and gives you the ability to spin your molecule around its center. Everything else pretty much follows what was done in the drawPaths post. Creating Bonds Creating bonds is easy as well, with one big conceptual change from the drawPaths post. Double processing is eliminated by just duplicating the atom and placing it into a marks array. The lines follow the duplicated atom in the marks array, and the other copy, which exists in the atoms array, is used for sorting. atoms.push(atom); marks.push(atom); The big problem (in creating bonds) was figuring out how to create double, triple, and aromatic bonds. And it turns out to be just a big counting game, coupled with offsetting lines. It starts with figuring out what type of bond you have and using that information in a switch case. The information for what type of bond you have is carried in the second part of your molfile, which starts at startBondArray = myStartVert + myNumAtoms*16 + 1 Adding 2 to this number gives you the bond type location (OK, once again it's a big counting game – check out the molfile description under the read more button to see the molfile structure – I know it well). So, each time you create a double, triple, or aromatic bond you have to keep track of where all the data is in your array. This was accomplished by adding the following counting tracker to your command and data arrays: commands[2*k+2*dB+4*tB+2*aB] mydata[4*k+4*dB+8*tB+4*aB] which are needed for the drawPath command shown below myHolder.graphics.drawPath(commands, mydata); The variables dB, tB, and aB are iterated (by one) each time you create a double, triple, or aromatic bond respectively. These values are then zeroed after each molecular drawing iteration and the process is restarted on each onEnterFrame tick. Creating the bond offsets was not very sophisticated, as shown below: mydata[4*k+4*dB+8*tB+4*aB] = marks[myArray[startBondArray+7*k]-1].x-bond2Sep; mydata[4*k+1+4*dB+8*tB+4*aB] = marks[myArray[startBondArray+7*k]-1].y-bond2Sep; mydata[4*k+4+4*dB+8*tB+4*aB] = marks[myArray[startBondArray+7*k]-1].x+bond2Sep; mydata[4*k+5+4*dB+8*tB+4*aB] = marks[myArray[startBondArray+7*k]-1].y+bond2Sep; You just subtract or add an offset value (bond2Sep), as shown above for the double bond case. Which Way Should We Go! In going through this, I found some molfiles on the web that were not well formed and would not work with this parser. That's always the problem with parsers. The errors were easy to fix and many times just meant adding or subtracting an extra zero.
But your end user can't do that… I really think XML is the best way to go. That way you can target nodes and forget about counting. You can go even a step farther with XML and include chemical bonding information which would enable you to run chemical simulations. Wouldn't that be cool! To see all the code, download it from the source link above.
http://professionalpapervision.wordpress.com/2009/02/24/
CC-MAIN-2014-35
refinedweb
1,310
62.48
Hello everyone. I'm working on some exercises and I'm having some problems with a method. I want to create a method to calculate the factorial of an int number. I already wrote code that asks the user to input an int number and calculates the factorial, and it works fine (i.e. if I input 5 it outputs 5! = 120, as it should). Here's the code: import java.util.Scanner; public class Factorial1 { public static void main(String[] args) { Scanner input = new Scanner(System.in); int number; int total = 1; System.out.print("Enter positive int number: "); number = input.nextInt(); while(number < 0) { System.out.print("Enter positive int number: "); number = input.nextInt(); } for(int i = number; i > 0; i--) total *= i; System.out.printf("%d! = %d", number, total); } } Now I want to make a method to re-use this code in other programs and I wrote this program: But when I run this program it outputs 0 instead of 120. I don't have the slightest idea of what is wrong with this code as it compiles just fine but doesn't work as intended. Any help will be very much appreciated, and thanks in advance! Ricardo
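Since the second program isn't quoted above, it's hard to say exactly what went wrong; a classic cause of getting 0 is initializing the running total to 0 instead of 1, or never using the value the method returns. As a rough sketch (class and method names here are illustrative, not taken from the missing post), the loop pulled into a reusable method could look like this:

public class FactorialUtil {

    // Returns n! for n >= 0; using long delays (but does not prevent) overflow.
    public static long factorial(int n) {
        long total = 1;               // must start at 1, not 0
        for (int i = n; i > 0; i--) {
            total *= i;
        }
        return total;
    }

    public static void main(String[] args) {
        int number = 5;
        System.out.printf("%d! = %d%n", number, factorial(number)); // prints 5! = 120
    }
}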
http://www.javaprogrammingforums.com/whats-wrong-my-code/36108-factorial-program.html
CC-MAIN-2014-42
refinedweb
207
67.65
Question by Debananda. Answered on Sep 13th, 2009 by jv500.
/* Program to print all combinations (permutations) of an input string.
   Note: globalPtr must point at the start of the input string before
   displayComb() is called, so the whole permuted string can be printed. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *globalPtr;

void swap (char *a, char *b)
{
    char t = *a;
    *a = *b;
    *b = t;
}

void displayComb (char *s, size_t len)
{
    int i;
    if (s == NULL) {
        return;
    }
    if (len == 1) {
        printf ("%s\n", globalPtr);
        return;
    }
    displayComb (s + 1, len - 1);
    for (i = 1; i < len; i++) {
        swap (s, s + i);
        displayComb (s + 1, len - 1);
        swap (s + i, s);
    }
    return;
}
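For comparison, here is a rough Java sketch of the same swap-and-recurse idea (my own translation, not part of the original answer; names are illustrative). It avoids the global pointer by carrying the char array through the recursion:

public class Permutations {

    public static void main(String[] args) {
        permute("abc".toCharArray(), 0);
    }

    // Prints every permutation of chars, keeping chars[0..pos-1] fixed.
    static void permute(char[] chars, int pos) {
        if (pos == chars.length - 1) {
            System.out.println(new String(chars));
            return;
        }
        for (int i = pos; i < chars.length; i++) {
            swap(chars, pos, i);      // choose chars[i] for position pos
            permute(chars, pos + 1);  // permute the remainder
            swap(chars, pos, i);      // undo the choice
        }
    }

    static void swap(char[] a, int i, int j) {
        char t = a[i];
        a[i] = a[j];
        a[j] = t;
    }
}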
http://www.geekinterview.com/question_details/55258
CC-MAIN-2013-20
refinedweb
206
79.19
By Eduardo Berrocal Garcia De Carellan, Published: 01/02/2018, Last Updated: 01/02/2018 In this article, I show how to transform a simple C++ program—in this case a simplified version of the famous UNIX command-line utility grep—in order to take advantage of persistent memory (PMEM). The article starts with a description of what the volatile version of the grep program does with a detailed look at the code. I then discuss how you can improve grep by adding to it a persistent caching of search results. Caching improves grep by (1) adding fault tolerance (FT) capabilities and (2) speeding up queries for already-seen search patterns. The article goes on to describe the persistent version of grep using the C++ bindings of libpmemobj, which is a core library of the Persistent Memory Developer Kit (PMDK) collection. Finally, parallelism (at file granularity) is added using threads and PMEM-aware synchronization. If you are familiar with any UNIX*-like operating system, such as GNU/Linux*, you are probably also familiar with the command-line utility grep (which stands for globally search a regular expression and print). In essence, grep takes two arguments (the rest are options): a pattern, taking the form of a regular expression, and some input file(s) (including standard input). The goal of grep is to scan the input, line by line, and then output those lines matching the provided pattern. You can learn more by reading the grep manual page (type man grep on the terminal or view the Linux man page for grep online). For my simplified version of grep, only the two aforementioned arguments are used (pattern and input), where input should be either a single file or a directory. If a directory is provided, its contents are scanned to look for input files (subdirectories are always scanned recursively). To see how this works in practice, let's run it using its own source code as input and "int" as pattern. The code can be downloaded from GitHub*. To compile the code from the root of the pmdk-examples repository, type make simple-grep. libpmemobj must be installed in your system, as well as a C++ compiler. For the sake of compatibility with the Windows* operating system, the code does not make any calls to Linux-specific functions. Instead, the Boost C++ library collection is used (basically to handle filesystem input/output). If you use Linux, Boost C++ is probably already nicely packaged for your favorite distribution. For example, in Ubuntu* 16.04, you can install these libraries by doing: # sudo apt-get install libboost-all-dev If the program compiles correctly, we can run it like this: $ ./grep int grep.cpp FILE = grep.cpp 44: int 54: int ret = 0; 77: int 100: int 115: int 135: int 136: main (int argc, char *argv[]) As you can see, grep finds seven lines containing the word "int" (lines 44, 54, 77, 100, 115, 135, and 136). As a sanity check, we can run the same query using the system-provided grep: $ grep int -n grep.cpp 44:int 54: int ret = 0; 77:int 100:int 115:int 135:int 136:main (int argc, char *argv[]) So far, we have the desired output.
The following listing shows the code (note: line numbers on the snippets above do not match the following listing because code formatting differs from the original source file): #include <boost/filesystem.hpp> #include <boost/foreach.hpp> #include <fstream> #include <iostream> #include <regex> #include <string.h> #include <string> #include <vector> using namespace std; using namespace boost::filesystem; /* auxiliary functions */ int process_reg_file (const char *pattern, const char *filename) { ifstream fd (filename); string line; string patternstr ("(.*)("); patternstr += string (pattern) + string (")(.*)"); regex exp (patternstr); int ret = 0; if (fd.is_open ()) { size_t linenum = 0; bool first_line = true; while (getline (fd, line)) { ++linenum; if (regex_match (line, exp)) { if (first_line) { cout << "FILE = " << string (filename); cout << endl << flush; first_line = false; } cout << linenum << ": " << line << endl; cout << flush; } } } else { cout << "unable to open file " + string (filename) << endl; ret = -1; } return ret; } int process_directory_recursive (const char *dirname, vector<string> &files) { path dir_path (dirname); directory_iterator it (dir_path), eod; BOOST_FOREACH (path const &pa, make_pair (it, eod)) { /* full path name */ string fpname = pa.string (); if (is_regular_file (pa)) { files.push_back (fpname); } else if (is_directory (pa) && pa.filename () != "." && pa.filename () != "..") { if (process_directory_recursive (fpname.c_str (), files) < 0) return -1; } } return 0; } int process_directory (const char *pattern, const char *dirname) { vector<string> files; if (process_directory_recursive (dirname, files) < 0) return -1; for (vector<string>::iterator it = files.begin (); it != files.end (); ++it) { if (process_reg_file (pattern, it->c_str ()) < 0) cout << "problems processing file " << *it << endl; } return 0; } int process_input (const char *pattern, const char *input) { /* check input type */ path pa (input); if (is_regular_file (pa)) return process_reg_file (pattern, input); else if (is_directory (pa)) return process_directory (pattern, input); else { cout << string (input); cout << " is not a valid input" << endl; } return -1; } /* MAIN */ int main (int argc, char *argv[]) { /* reading params */ if (argc < 3) { cout << "USE " << string (argv[0]) << " pattern input "; cout << endl << flush; return 1; } return process_input (argv[1], argv[2]); } I know the code is long but, trust me, it is not difficult to follow. All I do here is to check whether the input is a file or a directory in process_input(). In the case of the former, the file is directly processed in process_reg_file(). Otherwise, the directory is scanned for files in process_directory_recursive(), and then all scanned files are processed one by one in process_directory() by calling process_reg_file() on each one. When processing a file, each line is checked to see whether it matches the pattern or not. If it does, the line is printed to standard output. Now that we have a working grep, let’s see how we can improve it. The first thing we see is that grep does not keep any state at all. Once the input is analyzed and the output generated, the program simply ends. Let say, for instance, that we are interested in scanning a large directory (with hundreds of thousands of files) for a particular pattern of interest, every week. And let’s say that the files in this directory could, potentially, change over time (although not likely all at the same time), or new files could be added. 
If we use the classic grep for this task, we would be potentially scanning some of the files over and over, wasting precious CPU cycles. This limitation can be overcome with the addition of a cache: If it turns out that a file has already been scanned for a particular pattern (and its contents have not changed since it was last scanned), grep can simply return the cached results instead of re-scanning the file. This cache can be implemented in multiple ways. One way, for example, is to create a specific data base (DB) to store the results of each scanned file and pattern (adding also a timestamp to detect file modifications). Although this surely works, a solution not involving the need to install and run a DB engine would be preferable, not to mention the need to perform DB queries every time files are analyzed (which may involve network and input/output overhead). Another way to do it would be to store this cache as a regular file (or files), loading it into volatile memory at the beginning, and updating it either at the end of the execution or every time a new file is analyzed. Although this seems to be a better approach, it still forces us to create two data models, one for volatile RAM, and another for secondary persistent storage (files), and write code to translate back and forth between the two. It would be nice to avoid this extra coding effort. Making any code PMEM-aware using libpmemobj always involves, as a first step, designing the types of data objects that will be persisted. The first type that needs to be defined is that of the root object. This object is mandatory and used to anchor all the other objects created in the PMEM pool (think of a pool as a file inside a PMEM device). For my grep sample, the following persistent data structure is used: Figure 1. Data structure for PMEM-aware grep. Cache data is organized by creating a linked list of patterns hanging from the root class. Every time a new pattern is searched, a new object of class pattern is created. If the pattern currently searched for has been searched before, no object creation is necessary (the pattern string is stored in patternstr). From the class pattern we hang a linked list of files scanned. The file is composed of a name (which, in this case, is the same as the file system path), modification time (used to check whether the file has been modified), and a vector of lines matching the pattern. We only create new objects of class file for files not scanned before. The first thing to notice here are the special classes p<> (for basic types) and persistent_ptr<> (for pointers to complex types). These classes are used to tell the library to pay attention to those memory regions during transactions (changes done to those objects are logged and rolled back in the event of a failure). Also, and due to the nature of virtual memory, persistent_ptr<> should always be used for pointers residing in PMEM. When a pool is opened by a process and mapped to its virtual memory address space, the location of the pool could be different than previous locations used by the same process (or other’s processes accessing the same pool). In the case of PMDK, persistent pointers are implemented as fat pointers; that is, they consist of a pool ID (used to access current pool virtual address from a translation table) + offset (from the beginning of the pool). For more information about pointers in PMDK you can read Type safety macros in libpmemobj, and also C++ bindings for libpmemobj (part 2) – persistent smart pointer. 
You may wonder why, then, is the vector ( std::vector) of lines not declared as a persistent pointer. The reason is that we do not need to. The object representing the vector, lines, does not change once it is created (during construction of an object of class file), and hence there is no need to keep track of the object during transactions. Still, the vector itself does allocate (and delete) objects internally. For this reason, we cannot rely only on the default allocator from std::vector (which only knows about volatile memory, and allocates all objects in the heap); we need to pass a customized one—provided by libpmemobj—that knows about PMEM. This allocator is pmem::obj::allocator<line>. Once we have declared the vector that way, we can use it as we would in any normal volatile code. In fact, you can use any of the standard container classes this way. Now, let’s jump to the code. In order to avoid repetition, only new code is listed (the full code is available in pmemgrep/pmemgrep.cpp). I start with definitions (new headers, macros, namespaces, global variables, and classes): ... #include <libpmemobj++/allocator.hpp> #include <libpmemobj++/make_persistent.hpp> #include <libpmemobj++/make_persistent_array.hpp> #include <libpmemobj++/persistent_ptr.hpp> #include <libpmemobj++/transaction.hpp> ... #define POOLSIZE ((size_t) (1024 * 1024 * 256)) /* 256 MB */ ... using namespace pmem; using namespace pmem::obj; /* globals */ class root; pool<root> pop; /* persistent data structures */ struct line { persistent_ptr<char[]> linestr; p<size_t> linenum; }; class file { private: persistent_ptr<file> next; persistent_ptr<char[]> name; p<time_t> mtime; vector<line, pmem::obj::allocator<line>> lines; public: file (const char *filename) { name = make_persistent<char[]> (strlen (filename) + 1); strcpy (name.get (), filename); mtime = 0; } char * get_name (void) { return name.get (); } size_t get_nlines (void) { return lines.size (); /* nlines; */ } struct line * get_line (size_t index) { return &(lines[index]); } persistent_ptr<file> get_next (void) { return next; } void set_next (persistent_ptr<file> n) { next = n; } time_t get_mtime (void) { return mtime; } void set_mtime (time_t mt) { mtime = mt; } void create_new_line (string linestr, size_t linenum) { transaction::exec_tx (pop, [&] { struct line new_line; /* creating new line */ new_line.linestr = make_persistent<char[]> (linestr.length () + 1); strcpy (new_line.linestr.get (), linestr.c_str ()); new_line.linenum = linenum; lines.insert (lines.cbegin (), new_line); }); } int process_pattern (const char *str) { ifstream fd (name.get ()); string line; string patternstr ("(.*)("); patternstr += string (str) + string (")(.*)"); regex exp (patternstr); int ret = 0; transaction::exec_tx ( pop, [&] { /* dont leave a file processed half way through */ if (fd.is_open ()) { size_t linenum = 0; while (getline (fd, line)) { ++linenum; if (regex_match (line, exp)) /* adding this line... 
*/ create_new_line (line, linenum); } } else { cout << "unable to open file " + string (name.get ()) << endl; ret = -1; } }); return ret; } void remove_lines () { lines.clear (); } }; class pattern { private: persistent_ptr<pattern> next; persistent_ptr<char[]> patternstr; persistent_ptr<file> files; p<size_t> nfiles; public: pattern (const char *str) { patternstr = make_persistent<char[]> (strlen (str) + 1); strcpy (patternstr.get (), str); files = nullptr; nfiles = 0; } file * get_file (size_t index) { persistent_ptr<file> ptr = files; size_t i = 0; while (i < index && ptr != nullptr) { ptr = ptr->get_next (); i++; } return ptr.get (); } persistent_ptr<pattern> get_next (void) { return next; } void set_next (persistent_ptr<pattern> n) { next = n; } char * get_str (void) { return patternstr.get (); } file * find_file (const char *filename) { persistent_ptr<file> ptr = files; while (ptr != nullptr) { if (strcmp (filename, ptr->get_name ()) == 0) return ptr.get (); ptr = ptr->get_next (); } return nullptr; } file * create_new_file (const char *filename) { file *new_file; transaction::exec_tx (pop, [&] { /* allocating new files head */ persistent_ptr<file> new_files = make_persistent<file> (filename); /* making the new allocation the actual head */ new_files->set_next (files); files = new_files; nfiles = nfiles + 1; new_file = files.get (); }); return new_file; } void print (void) { cout << "PATTERN = " << patternstr.get () << endl; cout << "\tpattern present in " << nfiles; cout << " files" << endl; for (size_t i = 0; i < nfiles; i++) { file *f = get_file (i); cout << "###############" << endl; cout << "FILE = " << f->get_name () << endl; cout << "###############" << endl; cout << "*** pattern present in " << f->get_nlines (); cout << " lines ***" << endl; for (size_t j = f->get_nlines (); j > 0; j--) { cout << f->get_line (j - 1)->linenum << ": "; cout << string (f->get_line (j - 1)->linestr.get ()); cout << endl; } } } }; class root { private: p<size_t> npatterns; persistent_ptr<pattern> patterns; public: pattern * get_pattern (size_t index) { persistent_ptr<pattern> ptr = patterns; size_t i = 0; while (i < index && ptr != nullptr) { ptr = ptr->get_next (); i++; } return ptr.get (); } pattern * find_pattern (const char *patternstr) { persistent_ptr<pattern> ptr = patterns; while (ptr != nullptr) { if (strcmp (patternstr, ptr->get_str ()) == 0) return ptr.get (); ptr = ptr->get_next (); } return nullptr; } pattern * create_new_pattern (const char *patternstr) { pattern *new_pattern; transaction::exec_tx (pop, [&] { /* allocating new patterns arrray */ persistent_ptr<pattern> new_patterns = make_persistent<pattern> (patternstr); /* making the new allocation the actual head */ new_patterns->set_next (patterns); patterns = new_patterns; npatterns = npatterns + 1; new_pattern = patterns.get (); }); return new_pattern; } void print_patterns (void) { cout << npatterns << " PATTERNS PROCESSED" << endl; for (size_t i = 0; i < npatterns; i++) cout << string (get_pattern (i)->get_str ()) << endl; } } ... Shown here is the C++ code for the diagram in Figure 1. You can also see the headers for libpmemobj, a macro (POOLSIZE) defining the size of the pool, and a global variable (pop) to store an open pool (you can think of pop as a special file descriptor). Notice how all modifications to the data structure—in root::create_new_pattern(), pattern::create_new_file(), and file::create_new_line()—are protected using transactions. 
In the C++ bindings of libpmemobj, transactions are conveniently implemented using lambda functions (you need a compiler compatible with at least C++11 to use lambdas). If you do not like lambdas for some reason, there is another way. Notice also how all the memory allocation is done through make_persistent<>() instead of the regular malloc() or the C++ `new` construct. The functionality of the old process_reg_file() is moved to the method file::process_pattern(). The new process_reg_file() implements the logic to check whether the current file has already been scanned for the pattern (if the file exists under the current pattern and it has not been modified since last time): int process_reg_file (pattern *p, const char *filename, const time_t mtime) { file *f = p->find_file (filename); if (f != nullptr && difftime (mtime, f->get_mtime ()) == 0) /* file exists */ return 0; if (f == nullptr) /* file does not exist */ f = p->create_new_file (filename); else /* file exists but it has an old timestamp (modification) */ f->remove_lines (); if (f->process_pattern (p->get_str ()) < 0) { cout << "problems processing file " << filename << endl; return -1; } f->set_mtime (mtime); return 0; } The only change to the other functions is the addition of the modification time. For example, process_directory_recursive() now returns a vector of tuple<string, time_t> (instead of just vector<string>): int process_directory_recursive (const char *dirname, vector<tuple<string, time_t>> &files) { path dir_path (dirname); directory_iterator it (dir_path), eod; BOOST_FOREACH (path const &pa, make_pair (it, eod)) { /* full path name */ string fpname = pa.string (); if (is_regular_file (pa)) { files.push_back ( tuple<string, time_t> (fpname, last_write_time (pa))); } else if (is_directory (pa) && pa.filename () != "." && pa.filename () != "..") { if (process_directory_recursive (fpname.c_str (), files) < 0) return -1; } } return 0; } Sample Run Let’s run this code with two patterns: “int” and “void”. This assumes that a PMEM device (real or emulated using RAM) is mounted at /mnt/mem: $ ./pmemgrep /mnt/mem/grep.pool int pmemgrep.cpp $ ./pmemgrep /mnt/mem/grep.pool void pmemgrep.cpp $ If we run the program without parameters, we get the cached patterns: $ ./pmemgrep /mnt/mem/grep.pool 2 PATTERNS PROCESSED void int When passing a pattern, we get the actual cached results: $ ./pmemgrep /mnt/mem/grep.pool void PATTERN = void 1 file(s) scanned ###############) $ $ ./pmemgrep /mnt/mem/grep.pool int PATTERN = int 1 file(s) scanned ############### FILE = pmemgrep.cpp ############### *** pattern present in 14 lines *** 137: int 147: int ret = 0; 255: print (void) 327: print_patterns (void) 337: int 356: int 381: int 395: int 416: int 417: main (int argc, char *argv[]) 436: if (argc == 2) /* No pattern is provided. Print stored patterns and exit 438: proot->print_patterns (); 444: if (argc == 3) /* No input is provided. Print data and exit */ 445: p->print (); $ Of course, we can keep adding files to existing patterns: $ ./pmemgrep /mnt/mem/grep.pool void Makefile $ ./pmemgrep /mnt/mem/grep.pool void PATTERN = void 2 file(s) scanned ############### FILE = Makefile ############### *** pattern present in 0 lines *** ###############) Now that we have come this far, it would be a pity not to add multithreading support too; especially so, given the small amount of extra code required (the full code is available in pmemgrep_thx/pmemgrep.cpp). 
The first thing we need to do is to add the appropriate header for pthreads and for the persistent mutex (more on this later): ... #include <libpmemobj++/mutex.hpp> ... #include <thread> A new global variable is added to set the number of threads in the program, which now accepts a command-line option to set the number of threads ( -nt=number_of_threads). If -nt is not explicitly set, one thread is used as default: int num_threads = 1; Next, a persistent mutex is added to the pattern class. This mutex is used to synchronize writes to the linked list of files (parallelism is done at the file granularity): class pattern { private: persistent_ptr<pattern> next; persistent_ptr<char[]> patternstr; persistent_ptr<file> files; p<size_t> nfiles; pmem::obj::mutex pmutex; ... You may be wondering why the pmem::obj version of mutex is needed (why not use the C++ standard one). The reason is because the mutex is stored in PMEM, and libpmemobj needs to be able to reset it in the event of a crash. If not recovered correctly, a corrupted mutex could create a permanent deadlock; for more information you can read the following article about synchronization with libpmemobj. Although storing mutexes in PMEM is useful when we want to associate them with particular persisted data objects, it is not mandatory in all situations. In fact, in the case of this example, a single standard mutex variable—residing in volatile memory—would have sufficed (since all threads work on only one pattern at a time). The reason why I am using a persistent mutex is to showcase its existence. Persistent or not, once we have the mutex we can synchronize writes in pattern::create_new_file() by simply passing it to the transaction::exec_tx() (last parameter): transaction::exec_tx (pop, [&] { /* LOCKED TRANSACTION */ /* allocating new files head */ persistent_ptr<file> new_files = make_persistent<file> (filename); /* making the new allocation the * actual head */ new_files->set_next (files); files = new_files; nfiles = nfiles + 1; new_file = files.get (); }, pmutex); /* END LOCKED TRANSACTION */ The last step is to adapt process_directory() to create and join the threads. A new function— process_directory_thread()—is created for the thread logic (which divides work by thread ID): void process_directory_thread (int id, pattern *p, const vector<tuple<string, time_t>> &files) { size_t files_len = files.size (); size_t start = id * (files_len / num_threads); size_t end = start + (files_len / num_threads); if (id == num_threads - 1) end = files_len; for (size_t i = start; i < end; i++) process_reg_file (p, get<0> (files[i]).c_str (), get<1> (files[i])); } int process_directory (pattern *p, const char *dirname) { vector<tuple<string, time_t>> files; if (process_directory_recursive (dirname, files) < 0) return -1; /* start threads to split the work */ thread threads[num_threads]; for (int i = 0; i < num_threads; i++) threads[i] = thread (process_directory_thread, i, p, files); /* join threads */ for (int i = 0; i < num_threads; i++) threads[i].join (); return 0; } In this article, I have shown how to transform a simple C++ program—in this case a simplified version of the famous UNIX command-line utility, grep—in order to take advantage of PMEM. I started the article with a description of what the volatile version of the grep program does with a detailed look at the code. After that, the program is improved by adding a PMEM cache using the C++ bindings of libpmemobj, a core library of PMDK. 
To conclude, parallelism (at file granularity) is added using threads and PMEM-aware synchronization.
https://software.intel.com/content/www/us/en/develop/articles/boost-your-c-applications-with-persistent-memory-a-simple-grep-example.html?cid=em-elq-43168&utm_source=elq&utm_medium=email&utm_campaign=43168&elq_cid=1717881
CC-MAIN-2020-29
refinedweb
3,561
50.87
My requirement is to take the values of the HashMap and store them in a Collection in the order in which they were stored in the HashMap. The HashMap.values() method returns a collection of the values in the hashmap, but a Collection by nature does not assure the order of retrieval. My doubt is how to assure retrieval in the order in which the values were stored in the map? Can anyone help with this? regards, Karthick.J.G I have the values stored in the hashmap. I wanted to retrieve them in the order in which they were stored in the hashmap. Currently I am using the HashMap.values() method. It returns a collection. But retrieving elements from the collection does not assure the order of retrieval. Can you give a code snippet that does this functionality, as I don't understand what you are saying, or I might not have explained the problem space in the forum properly, so that you were not able to answer the question I raised. regards, Karthick. My doubt is: will this code produce the output name1#name2#name3#name4 always? I assume the issue is not in using the HashMap or LinkedHashMap; it is when the values are stored in the collection by the values() method, because the order of retrieval is not assured when it is retrieved from the collection, by its nature. Or do you mean that if LinkedHashMap is used the output will be name1#name2#name3#name4 always? regards, Karthick Or do you mean that if LinkedHashMap is used the output will be name1#name2#name3#name4 always? Reading the JavaDoc should *always* be done first!! This is from the first paragraph of the JavaDoc for LinkedHashMap: public class LinkedHashMap<K,V> extends HashMap<K,V> implements Map<K,V> "Hash table and linked list implementation of the Map interface, with predictable iteration order." Henry
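To make the point concrete, here is a small self-contained example (my own, not posted in the thread): with a LinkedHashMap, the values() view iterates in insertion order, so the output below is always name1#name2#name3#name4.

import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderDemo {
    public static void main(String[] args) {
        // LinkedHashMap remembers the order in which entries were put().
        Map<Integer, String> map = new LinkedHashMap<>();
        map.put(3, "name1");
        map.put(1, "name2");
        map.put(4, "name3");
        map.put(2, "name4");

        StringBuilder sb = new StringBuilder();
        for (String value : map.values()) {   // iterates in insertion order
            if (sb.length() > 0) {
                sb.append('#');
            }
            sb.append(value);
        }
        System.out.println(sb);  // always prints name1#name2#name3#name4
    }
}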
https://coderanch.com/t/406392/java/HashMap
CC-MAIN-2018-22
refinedweb
320
75.71
Changelog for Net-LCDproc 0.104 2015-06-03T23:58:44Z [ Bugfixes ] - has_socket is called during DEMOLISH so make sure it is actually there (Ioan Rogers) - Fix a few metadata bugs in eg/music.pl (Ioan Rogers) [ Other ] - [Closes #3] namespace::sweep is now obsolete, switch to namespace::clean. Thanks ETHER. (Ioan Rogers) - Switch from semantic to basic versioning scheme 0.1.3 2014-03-06T09:15:36Z - Declare minimum required version of Moo (Ioan Rogers) 0.1.2 2014-03-03T13:49:47Z - Switched from Moose to Moo + Type::Tiny (Ioan Rogers) - Updated minimum perl to v5.10.2 (Ioan Rogers)
https://metacpan.org/changes/distribution/Net-LCDproc
CC-MAIN-2017-09
refinedweb
102
68.77
INTJ and Entrepreneurship Am I out of my Comfort Zone? Being an entrepreneur is a natural career choice for the INTJ personality type because they are very independent, strategic thinkers who are able to see the big picture of any given situation. They are not afraid to take risks and they thrive on analyzing the past in order to improve upon the future. They also enjoy being in control and doing things their own way, which means that they would rather create their own business than work for someone else. Are you an INTJ? Introverted — Intuitive — Thinking — Judgemental As an INTJ, you are very independent, and you like to be on your own. You are not interested in people who cannot think for themselves or who do not have their own strong opinions. You need to be with people who challenge you and make you feel intellectually stimulated. Relationships that lack that kind of excitement are not good for you. You need to spend time with people who share your interests and inspire you in some way. Being with people who cannot think for themselves is painful for you because it makes you feel like your energy is being drained away from you. You are a deep person underneath it all, but unlike Feeling types, your emotions are private. You do not want to express your feelings openly because it is not practical and can open yourself up to manipulation or exploitation by others. This is why the ENFJ type seems more emotional than the INTJ type — they wear their heart on their sleeves — but this does not mean that INTJs do not have feelings; rather, they suppress them in order to focus on what is logical and what makes sense in the long-term. You prefer quality over quantity when it comes to relationships, but this doesn’t mean that all of your relationships have to be long-term ones; some can be short-term if they are intense and meaningful while they last. Your ideal relationship would be with someone who challenges you intellectually, someone who stimulates your mind while also caring about how you feel emotionally and supporting you through life’s ups and downs. Your partner would also need to respect your independence while also wanting a serious commitment from you in return (which means that casual dating probably isn’t good enough for an INTP). The ideal relationship would allow both partners enough space for personal growth without feeling like one person was smothering the other or being held back by the other. You need plenty of alone time as well as time spent with others in order to stay centered and balanced emotionally (which is why many INTJs end up becoming loners at some point in their lives). However, even though being alone does refresh and reenergize the INTJ personality type, it doesn’t mean that introverted intuitive types don’t enjoy spending time around other people once in a while; it simply means that there needs to be balance between being around others vs being alone so that the INTJ can recharge their batteries when necessary without feeling restless or agitated (which is why many introverted intuitive types become workaholics). Why I became an Entrepreneur I became an entrepreneur because I wanted to be in control of my own life and to be able to do things the way that I saw fit, without having to follow someone else’s rules or expectations. It has been a very rewarding experience so far, and I am looking forward to seeing where it takes me in the future. 
I also enjoy being my own boss and using my creativity and strategic mind in order to come up with new ways of doing things that are more efficient, effective, and enjoyable for everyone involved. There is no limit to the amount of success that you can experience as an entrepreneur if you are willing to work hard for it. The biggest challenge that I have faced as an entrepreneur has been working from home — which means that I am always around my family and work is never far away from life. It takes discipline, focus, and a clear head in order to balance all of these aspects of life well if you want your business venture to succeed. This is why many entrepreneurs end up working long hours when they first start their business — they are trying their best not to let their personal life impact their professional one too much (but eventually most people realize that it is impossible to completely separate the two). You also need a support system around you, whether it is your spouse or friends who help you stay on top of things at home while you are focusing on your business (plus occasionally giving you advice about how best to manage certain situations). It can be very lonely when you work at home by yourself all day long since there is no one else around who understands what it’s like running a business like this; however, having this kind of freedom means that you get the chance do what you want when you want — even if others see “laziness” or “lack of drive” behind your actions. In reality, most entrepreneurs work very hard; it just looks different than someone who has a traditional job (which is why many people think that they don’t know how lucky they are). But if we didn’t enjoy doing what we were doing then we would have never started our businesses in the first place! Also, since I am responsible for everything in my business there is no one else who will take over my responsibilities when I am sick or burnt out (this comes with the territory though; if you don’t like responsibility then becoming an entrepreneur is probably not a good fit). However, there are plenty of other challenges to being a business owner, but they are all worth it in the end! How I handle Challenges I handle challenges by being persistent, staying focused on the task at hand, and knowing that even when things seem hopeless I will eventually get through them. Sometimes it’s just a matter of getting through one day at a time, so I do my best to focus on what is right in front of me and not worry about anything else. One of my biggest challenges has been dealing with negativity around me — whether this is from close friends or family members who seem to be against my decision to become an entrepreneur or from people who don’t understand why I work from home (or why I don’t want to have the same 9–5 job that everyone else has). This can be frustrating for many INTJs because they tend to prefer quality over quantity when it comes to relationships and they don’t want to spend time with people who do not share their ideals and goals for the future. As an entrepreneur your ideals are constantly changing and evolving as you learn more about yourself and your business, so you need people in your life who are willing to accept you for who you are right now vs judging you based on where you were in the past or where others think you should be going down the road. 
Most INTJs do not like feeling trapped by society’s expectations; instead they prefer finding their own way and being able to live life freely without having other people watching over their shoulders. This kind of freedom can be scary sometimes because there is no one there to help pick up the pieces if something goes wrong; however, this is also what makes entrepreneurship such an exciting challenge for many intuitive types — it allows them the freedom of thought that feels like heaven after spending years caged up in a traditional work environment (which takes so much energy out of people). This freedom also allows them the opportunity to use their minds creatively while also doing something productive with their lives. For example, most entrepreneurs started their business because they saw a problem in society that needed solving; otherwise, they would have just continued working for someone else instead of risking everything by going into business for themselves! This means that entrepreneurs remain passionate about what they do even after years of running their own businesses because every day brings new challenges and opportunities which inspire them to continue moving forward. How I Handle Failure As an entrepreneur failure is something that you have to learn to accept and manage effectively. When we fail we often feel like we are letting everyone down; however, this is not true since there are many other people who have failed before us and succeeded after. We also tend to blame ourselves for everything that goes wrong around us which means that we feel like failures every time something goes wrong in our business (even though it could be completely outside of our control). If you are going to run your own business then you need to learn how to take responsibility for your actions (both good and bad), but also how to let go of the things that are out of your control. This will enable you to move forward without feeling stuck or defeated by the past and instead focus on creating a bright future for yourself. I handle failure by using a combination of positive thinking, meditation / reflection, brainstorming, goal-setting and occasionally reaching out for help from someone who has been through similar experiences in the past (or asking them how they would handle certain problems — this can be very helpful). Most entrepreneurs learn from their mistakes (even though it may not seem like it at the time) but in order for that learning process happen they need time before jumping back into their business again. So if you fail know that most entrepreneurs have failed at some point as well — don’t let this hold you back from pursuing your dreams. How I Deal with Stress I deal with stress by taking time to myself to recharge my batteries. There are times when I need to take a break from my business so that I can spend more time with my family and friends — this is important because I don’t want anyone to feel like they are less important than my business. If you start feeling this way then it’s a sign that you need to take more time off, so that you can get back your passion for running your business and feeling good about what you do. If you fail to protect your personal life then it will eventually take a toll on your professional one since nothing will seem worth doing anymore (and trust me, this is the last thing that you want)! 
Many INTJs find hobbies such as writing, photography or painting very helpful in terms of relieving stress; however, there are also many entrepreneurs who don’t like these kinds of activities and prefer something else (such as playing sports or working out). However, the most important thing is finding something that works for you. For example, if you go running every day and this helps relieve stress then keep doing it; likewise if meditation helps bring some peace into your life then keep doing that too! The key here is finding the best way for you to keep calm, focused and happy! This is how I handle stress as an entrepreneur; however, each person will have their own ideal ways of dealing with their emotions in order to stay calm and productive throughout the day. Sometimes we have too much work on our plate so it makes sense to hire someone else to help us out — but there are also times when we just need some time alone or with loved ones so we can recharge our batteries (this is hard when everyone around us expects us to work all the time). The key here is finding the right balance between being productive and caring for yourself. It’s important not burn out because if we do then we become useless for everyone around us — which means that our business suffers as well! So whatever you decide to do make sure it works well for you. Remember, no one can be productive all the time, even INTJs, and we all need some time to ourselves so that we can come back to our work with more energy and ideas. The Answer: With the right support environment, the right motivations and the right strategies to manage your mental health, an INTJ can be within their comfort zone, while being an Entrepreneur..
https://alexanderlhk.medium.com/intj-and-entrepreneurship-56ad518d5b58
CC-MAIN-2021-31
refinedweb
2,131
56.02
Check it always returns false (I would've wrapped it in a setTimeout and checked every 30ms) ui.WebView().eval_js('document.hidden') Ah perfect! thanks - would importing scene just to use scene.Scene().pause() be overkill? EDIT: Never mind - this only works if I have an instance of a scene - I don't think subclassing scene just for this pause is a good practice. So I can't really use scene.pause It looks like if Scene is run in SceneView... it doesn't receive the pause/resume... import ui, scene class scTest(scene.Scene): def pause(self): print 'pause' def resume(self): print 'resume' if False: scene.run(scTest()) else: v = ui.View() sv = scene.SceneView() sv.scene = scTest() #sv.hidden = True v.add_subview(sv) v.present('panel') Yes. That is documented at: There is no guarantee that your script will continue to run once you launch safari. In fact, there is a guarantee that in a few minutes, the background timer will expire, and your script will be unceremoniously killed. If you just want to do something on a webpage, it is better to do it within a webview since you could make it modal for example, and continue only when the dialog is closed. Someone should really make a webview-based clone of a full browser, with address bar, navigation, reload, stop, etc, in which case you'd never have to launch safari. I am trying to open Google Authenticator for 2FA - I want to send them there and then when they return I would fill a prompt with clipboard.get() - so in this case a webview doesn't really help me. (To keep my script running I could use console.set_idle_timer_disabled(flag) like cclaus has shown.) The idea in NoDoze.py is that you set up a notification to wake yourself back up just before you shut yourself down. An interesting departure point. ;-) @JonB I started working on something like that a while ago. I got history and bookmarks working as well. I don't really know how to get tabs working though. @hyshai It looks like this might do it.... import ui, scene def AmIBack(): global gbGone if gbGone and not sv.paused: gbGone = False print "Back here" else: if sv.paused: gbGone = True ui.delay(AmIBack, 1) gbGone = False v = ui.View() sv = scene.SceneView() sv.hidden = True v.add_subview(sv) v.present('panel') AmIBack() Edit: no need to set the scene... just SceneView will do @sebastian. Post to github, others will contrib. @hyshai. Ok, you originally said you wanted to open safari, not that you wanted to launch the authenticator app. Presumably you want to open a page which has an otpauth:// URI See this. Seems like you could create your own implementation in python, no external app required... After all, google authenticator is open source. You could implement the custom uri handler for this in webview, and even autofill it, or have a little counter on the page showing the code in real time. @JonB ah sorry - just was trying to give a trivial example. But that's a great idea - though it's more work ;) @tony hmm gonna try that - looks good @JonB - I just realized that you were giving me a solution for the opposite problem - I wasn't clear, my bad. I want them to get the 2FA token so that they can log in to their account. So implementing my own Google Authenticator app isn't a great solution because then I would have to store their secret in order to generate the token each time - and that's way too scary for me.
https://forum.omz-software.com/topic/992/check-if-pythonista-is-in-background/1
CC-MAIN-2019-09
refinedweb
601
76.22
Does anybody know what I can do about this? I have sent the software code for photoduino to the Arduino Uno v3, and there it is the same, but I cannot put this lcd.clear() command everywhere in the code. lcd.begin(20,2); lcd.clear(); lcd.setCursor(0,0); lcd.print("Hello, World"); ...void setup() { lcd.begin(20, 2); lcd.print("hello, world!"); lcd.setCursor(0,1); lcd.print("it works!"); } void loop() { } #include <LiquidCrystal.h> // initialize the library with the numbers of the interface pins LiquidCrystal lcd(12, 11, 10, 9, 8, 7); void setup() { // set up the LCD's number of columns and rows: lcd.begin(20, 2); } void loop() { // print from 0 to 9: for (int thisChar = 0; thisChar < 10; thisChar++) { lcd.print(thisChar); delay(500); } // turn off automatic scrolling lcd.noAutoscroll(); // clear screen for the next loop: lcd.clear(); } Maybe it is the type of display. You have this, which does not match the example code or wiring: LiquidCrystal lcd(12, 11, 10, 9, 8, 7); I have put a delay of 10ms in front of an lcd.print, and that does the trick. The first character is now printed. But in the code for photoduino, there are a lot of lcd.print calls. Do I now have to put in the delay in every case? Or can I make 1 code for all?
http://forum.arduino.cc/index.php?topic=84370.msg632238
CC-MAIN-2015-11
refinedweb
260
76.42
Hi, guys. Could use a little more of that high quality help. This is actually a follow-up to my previous post here:... Things seemed to be running OK for a couple days, so I followed up trying dcdiag again. It tells me, among other things, it cannot find the NETLOGON share. When I look, I see that there are no sysvol or netlogon shares. The folders do not have in them anything but a few of the folders that should be there, and none of, for example, the GPO's. Also, the expected "DO_NOT_REMOVE_NtFrs_PreInstall_Directory" is not there. So the explanation of why dcdiag is throwing errors is simple. I could use some help with the fix. Most things I can find in Google assume that I can replicate from another DC, but this is the only one. I have made a test restore (to a different location) from backups of the SYSVOL directory to see what I have. It looks reasonable - the aforesaid "DO_NOT_REMOVE ... " directory is there, and there are policies and scripts in the policies and scripts directories. There is nothing in the SYSVOL\sysvol directory, but I think that normally just holds a link, which the backups probably did not pick up. Also, I can find on Google instructions for a (partial?) manual rebuild, but it is for server 2003 which uses File Replication Service instead of the newer DFS replication used by server 2008. And don't forget, the backup is from the OLD DC, which is now dead, not the one I am currently dealing with. But I am hoping the SYSVOL is supposed to be the same, due to normal replication. My prior backup does have some system state info from the old server. Specifically, it has four "files" labeled "Active Directory," "COM+ Database," "Registry, " and "System Volume." I have a call in to NovaBack tech support (the backup software on this machine) to see what these include, if that is helpful. So, any ideas on how I can rebuild this? I am sure it is not as easy as restoring the SYSVOL directory to the current DC and adding in some links. If anyone has been through this before, I would appreciate any tips and suggestions you have. 14 Replies May 6, 2015 at 8:33 UTC I believe you might be able to simply use that restored version of the Sysvol you have since you do not have a DC to use as a replication partner. Sticky situation. Also was the first DC a 2008 R2 in the domain, if not did you do the migration piece to start using DFS replication? this has some good information as well :... May 6, 2015 at 9:04 UTC Yes, the old and current DC are both 2008 R2. I'll see if I can get anything helpful out of the link. Thanks Follow up: Yep, that is the page I was looking at. It only references the FRS service, while the server I am on appears to have both FRS and DFS services running. (I don't know which is actually being used) Do you know if stopping both those sets of services and copying SYSVOL back into place, then establishing the missing links using mklink (instead of linkd, which is not on this server) would be a good way to go?Edited May 6, 2015 at 9:16 UTC May 6, 2015 at 9:22 UTC Brand Representative for Microsoft ok after reading your previous post I see that you seized the FSMO roles and now have the DC limping along. You still have an orphaned DC in both DNS (Name Servers and domain function pointers) that need to be deleted so that everyone is looking for AD in the right spot. After DNS is purged of all reference to the orphaned DC then you can test the existing DC to be sure it can update its DNS records successfully. 
Check the NIC settings on the new PDC and be sure only its IP4 address is primary and put 127.0.0.1 as secondary. Then test DC DNS registration with this series of steps: On the PDC Open CMD Right click CMD and Run As Administrator type: nltest /dsregdns The command should complete successfully and the time stamps for this server in DNS should be within 7 days of today if not today... Once you get all that lined up then you can either reboot the DC or restart the NetLogon Service and see if you get those SysVol and Netlogon shares to show. Report back any errors in the System or File Replication Service Logs. May 6, 2015 at 9:43 UTC The DNS was correct - and nltest completed successfully. Connection Status = 0. DNS showed all entries today or Monday (the day I started trying to clean all this up) Do I need to copy the old SYSVOL folders back into place before restarting the NetLogon service, or will they generate automatically? If the latter, should I clean up the folders to remove the directories that are there (an incomplete set)? I have numerous errors from the last couple of days. These are mainly the following 2: Event ID 14550 - The DFS namespace service could not initialize cross forest trust info Event ID 1129 - Group Policy failed because of lack of network connectivity to a DC Group policy also threw errors 1030 (failed to retrieve GP settings) and 1058 (failed to read a file "gpt.ini" which is supposed to be in a directory under PHCHSK.local\Policies\... that does not exist on the server) And thanks in advance for the help. :-) May 6, 2015 at 9:52 UTC Brand Representative for Microsoft NIC Card DNS? May 6, 2015 at 10:00 UTC Yes NIC points only to itself and 127.0.0.1 May 6, 2015 at 10:05 UTC Brand Representative for Microsoft ok bounce the Netlogon service and keep your eye on File Replication Logs.. Report back May 6, 2015 at 10:23 UTC After restarting the service, (just the netlogon service) no errors in the file replication service log or system log. I do note that I can no longer access AD Users and Computers, and receive an error that the specified domain does not exist or could not be contacted. I can see the stop / start of netlogon service in the system log, but nothing else there. I do see in the File Replication Service log that it is "scanning the data in the system volume" and cannot become a domain controller until this process is complete. It looks like replication from the old DC never took since we set this one up, which might mean we skipped a step somewhere back then. May 6, 2015 at 10:38 UTC Brand Representative for Microsoft give it a moment... come back in 30 and report. May 6, 2015 at 11:21 UTC since the service restart at about 5:10 (an hour) the system log shows one error, the same 14550 we saw before (DFS namespace could not initialize cross forest trust info ... DFS Replication, Directory Service, and File Replication Service logs show no additional entries. Group Policy log shows a group of 3 errors that repeat every 5 minutes - not new though. These are 7326 (Group Policy failed to discover the DC), 7320 (failed to register for connectivity notification) and 7006 (Periodic policy processing failed). I guess the last is not surprising, since there is no sysvol share, no policies folder under the domain folder, and no policies. I do see these things in the copy of SYSVOL I restored from backup to an alternate location, so not finding them here makes it unsurprising to find problems. 
:-) I have to admit, I keep wanting to copy some directories back over, but I am bugged by the thought that it may cause more harm than good, and would rather wait to see if you have any further ideas. Thanks for the patience in helping out on this. May 7, 2015 at 12:22 UTC Brand Representative for Microsoft Run netdom query fsmo what is returned? May 7, 2015 at 1:01 UTC OK, now this is odd, even for me. Recall, I mentioned that I could not access AD any more. And the result from running the netdom command was: "The specified domain either does not exist or could not be contacted. The command failed to complete successfully." Then I thought of something - yesterday I ran a vbs script that was supplied on a Microsoft site (don't remember just where, but I could probably track it down) that was supposed to be one way to remove the old domain controller from the domain. When I ran it, I was after that able to access AD services, at least some of them. So I ran it again just now. I can now access Users and Computers again. And when I ran the netdom command, the result this time was: schema master, domain naming master, pdc, rid pool manager, and infrastructure master are all "popetmp2008.phchsk.local", which is correct. And it completed successfully. It is like restarting the netlogon service undid something that the vbs script did during its execution. I seem to recall that the same thing happened after a restart a couple of days ago, but I cannot recall the details. weird symptoms. May 7, 2015 at 1:36 UTC Wish I would have found this post about 8 months ago. May 7, 2015 at 8:37 UTC If I was unclear in my message posted last night at 8:00, the problem is not resolved. I can access Users and computers again, but I still do NOT have a NETLOGON or SYSVOL share. Any more suggestions before I just try to restore the old SYSVOL from the dead server? This discussion has been inactive for over a year. You may get a better answer to your question by starting a new discussion.
https://community.spiceworks.com/topic/937242-ad-netlogon-and-sysvol-shares-missing
CC-MAIN-2017-39
refinedweb
1,668
69.11
-1to get unblocked if it really does work. Hi, I'm unable to load the projects in VS 2019. I'm getting this error in the Solution output: The expression "[System.IO.Path]::GetDirectoryName('')" cannot be evaluated. The path is not of a legal form. C:\Users\Shimmy.nuget\packages\msbuild.sdk.extras\2.0.54\Sdk\Sdk.props Please mention my name if you reply so I get notified. Thank you! User32.EnumDisplaySettings, and maybe another few. DISPLAY_DEVICEstructure also belongs there. It's used by the EnumDisplayDevicefunction that was merged. I can move these structures to Gdi32, but we'll have to add a project dependency. SafeDCHandleis defined in User32 because ReleaseDCis defined there. wingdi.hin Gdi32, and only move them into the User32assembly if User32needs access to these types. Windows.Corewe should define them as top-level types in the PInvokenamespace instead of as a nested type. That would make 'promoting' them later much easier. How do you deal with fns that have moved around from one apiset dll to another. For e.g., kernel32!FlushProcessWriteBuffers: "Introduced into api-ms-win-core-processthreads-l1-1-2.dll in 10.0.10240. Moved into api-ms-win-core-processthreads-l1-1-3.dll in 10.0.10586. Moved into api-ms-win-core-processthreads-l1-1-2.dll in 10.0.14393. Moved into api-ms-win-core-processthreads-l1-1-0.dll in 10.0.16299." When I looked at the exports, l1-1-0 had the export, but l1-1-2, l1-1-1 did not have the export (this was on recent Windows Insider Dev Branch build; so I guess the exports aren't preserved out of compat consideration etc.). For reference, Win8/8.1 had this in api-ms-win-core-processthreads-l1-1-1.dll. Hi, I don't know what I screwed up, but I'm unable to open any projects in VS2019. I get the following error when running Init: ❯ .\init -installlocality machine The netcore Credential Provider is already in C:\Users\Shimmy.nuget\plugins Downloading .NET Core SDK 3.1.100... Installing .NET Core SDK 3.1.100... Downloading .NET Core 2.1... Installing .NET Core 2.1... Downloading .NET Core 2.1... Installing .NET Core 2.1... Restoring NuGet packages D:\Users\Shimmy\Source\Repos\weitzhandler\pinvoke\src\BCrypt\BCrypt.csproj : warning MSB4242: The SDK resolver "NuGetSdkResolver" failed to run. Unable to find fallback package folder 'C:\Microsoft\Xamarin\NuGet\'. Write-Error: Failure while restoring packages. Any ideas? DEVMODEstructure (in Windows.Core). Explicitlayout in order for .NET to actually overlap fields over the same memory the way union structs do in native code. dmPosition. In C, there can't really be multiple identical fields because there would be no way to distinguish them. So while a single field may appear in multiple options in a union, it must be placed at exactly the same location so that it doesn't matter which one the user is thinking of when they access it. Span<T>and ReadOnlySpan<T>friendly overloads where any native pointer was that represented an array. The friendly overload even removes the "length" parameter as it takes it from the span. ReadOnlySpan<char>, but on .NET Framework it's still something of a mild pain.
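To make the DEVMODE union point above concrete, the overlap ends up looking roughly like the C# fragment below. This is a trimmed, hypothetical sketch to show the mechanism only, not the library's actual definition; the field subset and offsets are illustrative.

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
public struct DevModeUnionSketch
{
    // Printer-oriented view of the union
    [FieldOffset(0)] public short dmOrientation;
    [FieldOffset(2)] public short dmPaperSize;

    // Display-oriented view of the same bytes: dmPosition overlaps the
    // printer fields, so both "options" of the union share one location.
    [FieldOffset(0)] public POINTL dmPosition;
}

[StructLayout(LayoutKind.Sequential)]
public struct POINTL
{
    public int x;
    public int y;
}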
https://gitter.im/dotnet/pinvoke?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
CC-MAIN-2021-25
refinedweb
547
62.44
Details Description. Issue Links Activity - All - Work Log - History - Activity - Transitions. I have certainly seen some weirdness with my cluster where a stop-all seemed to think it had succeeded but there were still datanode and tasktracker instances running. Locking at the directory level seems like a good hedge against that sort of problem. As a bonus - it'll prevent blocks from getting removed if someone mistakenly starts up a second set of datanodes against a different namenode, as long as the datanode daemon for the competing DFS instance is running on the machine. Keeping track of only one copy of a block for a given IP seems like a good start, but might be too simple. What if, by some unfortunate process, a lot of duplicates get stored on a node. If there's no way to detect this, it could be that a bunch of copies of a block end up being duplicated on a node unnecessarily. One way out of that would be to notice if a host is reporting more than one copy of a block, and kick off a knee-jerk fix: 1) Extra-replicate the block 2) Ask the node with dups to remove its copies of the block. Of course, in the case where two datanode instances are somehow servicing from the same directory, the knee-jerk reaction would kick off for all blocks that get stored into that space - so locking would definitely be necessary to make this work. This seems like something we should fix in the .2 release, no? Changing the summary to better describe what we intend to fix. I just talked with Owen, and we came up with the following plan: (0) store a node id in each dfs.data.dir; (1) pass the node id to the name node in all DataNodeProtocol calls; (2) the namenode tracks datanodes by <id,host:port> pairs, only talking to one id from a given host:port at a time. requests from an unknown host:port return a status that causes the datanode to exit and not restart itself. (3) add a hello() method to DataNodeProtocol, called at datanode startup only. this erases any entries for a datanode id, replacing them with the new entry. Thus when a second datanode is started it causes any existing datanode running on that host to be forgotten and to exit when it next contacts the namenode. There is a problem with shutting down the old datanode when the new one starts. The new datanode must be sure the old one is gone before taking over. I think a datanode should merely keep a FileLock in the dfs.data.dir while it is running. In this case the new node will not be able to start with the same data directory. Is that the problem we are trying to solve here? I believe we have three problems to address: 1) The namenode needs to know to purge old entries identical entries when a new datanode registers. Else we get rot. See doug's suggestions above. 2) You could have 2 or more datanodes on one server. They need to always be unique. We should asign a unique ID to each datanode home directory and make sure the datanode is started with a valid home directory as well. I like the idea of assigning a uniqueID to each datanode home. 3) You concern that two daemons might run on the same datadir. We should address all these concerns. Your suggestion of a startup lock and a uniqueID, plus (hello method) should together handle all of these. I see exactly 2 problems now. 1) Two datanodes can be started sharing the same data directory, and reporting the same blocks twice. I'm planning to solve this by placing a lock in the data dir. 2) Starting a replacement datanode. 
If a datanode fails, or intentionally shut down, the new datanode server can be started on the same node and port or on a different node or port, but connected to the same data directory. In current implementation if the new datanode starts before the namenode removed the old datanode from its list identical block will be considered belonging to both nodes, and consequently over-replicated. To solve this second problem the datanode should store a unique id per data directory as proposed. And the namenode should identify block reports by those ids (only!), rather than by hostname:port. Each datanode reporting the same id (no matter which hostname or port it has) should be considered a replacement to the old node, and the old datanode at this point should be removed from the namenode the same way it is removed when its hearbeats are not reaching the namenode for 10 minutes. +1 Here is an algorithm that imo solves the problem in the most general case. This is the registration part only, since the rest is rather straightforward. I'm trying to cover two issues. 1) Data nodes should be banned from reporting the same block copies multiple times if they are intentionally or unintentionally started to serve the same data storage. That is why data nodes need to register, and need to keep a persistent storageID. 2) Name node should be able to recognize registered data nodes, even if it is restarted or replaced by a spare name node serving the same name space. That is why name nodes need to keep a persistent namespaceID. This is the patch that fixes the problem. DFS_CURRENT_VERSION has been changed to -2, since internal file layouts have changed. I created a new package for exceptions.). Eric Baldeschwieler wrote: > why not store the cluster in the data node? We can alternatively store namespaceID in every data storage belonging to the cluster. May be this is conceptually cleaner. I preferred storing storageIDs in the meta data just because this gives the name node knowledge of which storages are missing, and lets it report it. +1, except the exceptions should not be in a separate package, but in the dfs package. If you make that change then I'd be happy to commit this, since it fixes a very critical bug. Thanks! I just applied this to trunk and unit tests failed with UnregisteredDatanodeExceptions. Yes, that was a MiniDFSCluster incomatibility. Fixed that, merged with the current version, removed the exception package. This patch does not apply to the current trunk. Can you please update your sources to the current trunk, resolving any conflicts, and re-generate the patch? Thanks! This adds a number of new public classes that I'm not certain should be public. Should user code ever need to access a DataStorage, DatanodeID or DatanodeRegistration, or are these only used internally? Also, several of these exceptions appear only to be used internally, but I'm not certain about all of them. Would you object if I simply make all of these new classes package-private? Then, if we need, we can reveal more later as needed. I just committed this, modified to make most of the new classes package-private rather than public. Thanks, Konstantin! Yes, I think it is ok to make the classes unaccessible from the outside of the project. Exceptions are different, they are returned to a client in the form of IOException right now, but if the client wants to distinguish between them then we need to keep them public. Did you potentially start identically configured datanodes as a different user? 
Right now the only "lock" preventing this is the pid file used by the nutch-daemon.sh script. Perhaps the datanode should lock each directory in dfs.data.dir? That should prevent this, no? I suppose this could also happen if the datanode lost its connection to the namenode, but the namenode had not yet timed out the datanode. Then the datanode would reconnect and blocks might be doubly-reported. To fix this, perhaps the namenode should refuse to represent more than one copy of a block from a given IP? If a second is reported, the first should be forgotten?
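For reference, the "keep a FileLock in the dfs.data.dir while it is running" proposal discussed above would look roughly like the sketch below. This is only an illustration of the idea, not the committed patch, and the lock file name is an assumption.

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

/** Holds an exclusive lock on a data directory for the lifetime of the datanode. */
class DataDirLock {
    private final RandomAccessFile lockFile;
    private final FileLock lock;

    DataDirLock(File dataDir) throws IOException {
        // Assumed file name; any file inside the data directory would do.
        lockFile = new RandomAccessFile(new File(dataDir, "in_use.lock"), "rws");
        lock = lockFile.getChannel().tryLock();
        if (lock == null) {
            lockFile.close();
            throw new IOException("Data directory is already locked by another datanode: " + dataDir);
        }
    }

    /** Released on clean shutdown; the OS drops the lock anyway if the process dies. */
    void release() throws IOException {
        lock.release();
        lockFile.close();
    }
}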
https://issues.apache.org/jira/browse/HADOOP-124
CC-MAIN-2015-22
refinedweb
1,346
71.95
About The Author David is a front-end developer from the UK who has been coding since 1998 when spacer gifs and blink tags were still a thing. He is a writer, speaker and coder … More about David… Battling BEM CSS: 10 Common Problems And How To Avoid Them Whether you’ve just discovered BEM or are an old hand (in web terms anyway!), you probably appreciate what a useful methodology it is. If you don’t know what BEM is, I suggest you read about it on the BEM website before continuing with this post, because I’ll be using terms that assume a basic understanding of this CSS methodology. This article aims to be useful for people who are already BEM enthusiasts and wish to use it more effectively or people who are curious to learn more about it. Further Reading on SmashingMag: - A New Front-End Methodology: BEM - Scaling Down The BEM Methodology For Small Projects - The Evolution Of The BEM Methodology Now, I’m under no illusion that this is a beautiful way to name things. It’s absolutely not. One of things that put me off of a user interface outweighed the right-side of my brain that complained, “But it’s not pretty enough!”. I certainly don’t recommend picking a living-room centrepiece this way, but when you need a life jacket (as you do in a sea of CSS), I’ll take function over form any day. Anyway, enough rambling. Here are the 10 dilemmas I’ve battled with and some tips on how to deal with them. 1. “What To Do About ‘Grandchild’ Selectors (And Beyond)?” To clarify, you would use a grandchild selector when you need to reference an element that is nested two levels deep. These bad boys are the bane of my life, and I’m sure their misuse is one of the reasons people have an immediate aversion to BEM. I’ll give you an example: <div class="c-card"> <div class="c-card__header"> <!-- Here comes the grandchild… --> <h2 class="c-card__header__title">Title text here</h2> </div> <div class="c-card__body"> <img class="c-card__body__img" src="some-img.png" alt="description"> <p class="c-card__body__text">Lorem ipsum dolor sit amet, consectetur</p> <p class="c-card__body__text">Adipiscing elit. <a href="/somelink.html" class="c-card__body__text__link">Pellentesque amet</a> </p> </div> </div> As you might imagine, naming in this way can quickly get out of hand, and the more nested a component is, the more hideous and unreadable the class names become. I’ve used a short block name of c-card and the short element names of body, text and link, but you can imagine how out of control it gets when the initial block element is named something like c-drop-down-menu. I believe the double-underscore pattern should appear only once in a selector name. BEM stands for Block__Element–Modifier, not Block__Element__Element–Modifier. So, avoid multiple element level naming. If you’re getting to great-great-great-grandchild levels, then you’ll probably want to revisit your component structure anyway. BEM naming isn’t strictly tied to the DOM, so it doesn’t matter how many levels deep a descendent element is nested. The naming convention is there to help you identify relationships with the top-level component block — in this case, c-card. This is how I would treat the same card component: > This means that all of the descendent elements will be affected only by the card block. So, we would be able to move the text and images into c-card__header or even introduce a new c-card__footer element without breaking the semantic structure. 2. 
“Should I Be Namespacing?” By now, you’ve probably noticed the use of c- littered throughout my code samples. This stands for “component” and forms the basis of how I namespace my BEM classes. This idea stems from Harry Robert’s namespacing technique, which improves code readability. This is the system I have adopted, and many of the prefixes will appear throughout the code samples in this article: I have found that using these namespaces has made my code infinitely more readable. Even if I don’t manage to sell you on BEM, this is definitely a key takeaway. You could adopt many other namespaces, like qa- for quality-assurance testing, ss- for server-side hooks, etc. But the list above is a good starting point, and you can introduce others as you get comfortable with the technique. You’ll see a good example of the utility of this style of namespacing in the next problem. 3. “What Should I Name Wrappers?” Some components require a parent wrapper (or container) that dictates the layout of the children. In these cases, I always try to abstract the layout away into a layout module, such as l-grid, and insert each component as the contents of each l-grid__item. In our card example, if we wanted to lay out a list of four c-cards, I would use the following markup: <ul class="l-grid"> > </ul> You should now have a solid idea of how layout modules and component namespaces should play together. Don’t be afraid to use a little extra markup to save yourself a massive headache. No one is going to pat you on the back for shaving off a couple of <div> tags! In some instances, this isn’t possible. If, for example, your grid isn’t going to give you the result you want or you simply want something semantic to name a parent element, what should you do? I tend to opt for the word container or list, depending on the scenario. Sticking with our cards example, I might use <div class=“l-cards-container”>[…]</div> or <ul class=“l-cards-list”>[…]</ul>, depending on the use case. The key is to be consistent with your naming convention. 4. “Cross-Component… Components?” Another issue commonly faced is a component whose styling or positioning is affected by its parent container. Various solutions to this problem are covered in detail by Simurai. I’ll just fill you in on what I believe is the most scalable approach. To summarize, let’s assume we want to add a c-button into the card__body of our previous example. The button is already its own component and is marked up like this: <button class="c-button c-button--primary">Click me!</button> If there are no styling differences in the regular button component, then there is no problem. We would just drop it in like so: > <!-- Our nested button component --> <button class="c-button c-button--primary">Click me!</button> </div> </div> However, what happens when there are a few subtle styling differences — for example, we want to make it a bit smaller, with fully rounded corners, but only when it’s a part of a c-card component? Previously, I stated that I find a cross-component class to be the most robust solution: > <!-- My *old* cross-component approach --> <button class="c-button c-card__c-button">Click me!</button> </div> </div> This is what is known on the BEM website as a “mix.” I have, however, changed my stance on this approach, following some great comments from Esteban Lussich. 
In the example above, the c-card__c-button class is trying to modify one or more existing properties of c-button, but it will depend on the source order (or even specificity) in order to successfully apply them. The c-card__c-button class will work only if it is declared after the c-button block in the source code, which could quickly get out of hand as you build more of these cross-components. (Whacking on an !important is, of course, an option, but I certainly wouldn’t encourage it!) The cosmetics of a truly modular UI element should be totally agnostic of the element’s parent container — it should look the same regardless of where you drop it. Adding a class from another component for bespoke styling, as the “mix” approach does, violates the open/closed principle of component-driven design — i.e there should be no dependency on another module for aesthetics. Your best bet is to use a modifier for these small cosmetic differences, because you may well find that you wish to reuse them elsewhere as your project grows. <button class="c-button c-button--rounded c-button--small">Click me!</button> Even if you never use those additional classes again, at least you won’t be tied to the parent container, specificity or source order to apply the modifications. Of course, the other option is to go back to your designer and tell them that the button should be consistent with the rest of the buttons on the website and to avoid this issue altogether… but that’s one for another day. 5. “Modifier c-card that applies the unique styles? It’s very easy to over-modularize and make everything a component. I recommend starting with modifiers, but if you find that your specific component CSS file is getting difficult to manage, then it’s probably time to break out a few of those modifiers. A good indicator is when you find yourself having to reset all of the “block” CSS in order to style your new modifier — this, to me, suggests new component time. The best way, if you work with other developers or designers, is to ask them for an opinion. Grab them for a couple of minutes and discuss it. I know this answer is a bit of a cop-out, but for a large application, it’s vital that you all understand what modules are available and agree on exactly what constitutes a component. 6. “How To Handle States?” This is a common problem, particularly when you’re styling a component in an active or open state. Let’s say our cards have an active state; so, when clicked on, they stand out with a nice border styling treatment. How do you go about naming that class? The way I see it, you have two options really: either a standalone state hook or a BEM-like naming modifier at the component level: <!-- standalone state hook --> <div class="c-card is-active"> […] </div> <!-- or BEM modifier --> <div class="c-card c-card--is-active"> […] </div> While I. Sticking to a standard set of state hooks makes sense. Chris Pearce has compiled a good list, so I recommend just pinching those. 7. “When Is It OK Not To Add A Class To An Element?” I can understand people being overwhelmed by the sheer number of classes required to build a complex piece of UI, especially if they’re not used to assigning a class to every tag. Typically, I will attach classes to anything that needs to be styled differently in the context of the component. I will often leave p tags classless, unless the component requires them to look unique in that context. Granted, this could mean your markup will contain a lot of classes. 
Ultimately, though, your components will be able to live independently and be dropped anywhere without a risk of side effects. Due to the global nature of CSS, putting a class on everything gives us control over exactly how our components render. The initial mental discomfort caused is well worth the benefits of a fully modular system. 8. “How To Nest Components?”> <!-- Uh oh!. you! 9. “Won’t Components End Up With A Million Classes?” Some argue that having a lot of classes per element is not great, and –modifiers can certainly add up. Personally, I don’t find this to be problematic, because it means the code is more readable and I know exactly what it is supposed to be doing. For context, this is an example of four classes being needed to style a button: <button class="c-button c-button--primary c-button--huge is-active">Click me!</button> I get that this syntax is not the prettiest to gaze upon, but it is explicit. However, if this is giving you a major headache, you could look at the extension technique that Sergey Zarouski came up with. Essentially, we would use .className [class^=“className”], [class*=” className”] in the style sheet to emulate extension functionality in vanilla CSS. If this syntax looks familiar to you, that’s because it is very similar to the way Icomoon handles its icon selectors. With this technique, your output might look something like this: <button class="c-button--primary-huge is-active">Click me!</button> I don’t know whether the performance hit of using the class^= and class*= selectors is much greater than using individual classes at scale, but in theory this is a cool alternative. I’m fine with the multi-class option myself, but I thought this deserved a mention for those who prefer an alternative. 10. “Can We Change A Component’s Type Responsively?” This was a problem posed to me by Arie Thulank and is one for which I struggled to come up with a 100% concrete solution. An example of this might be a dropdown menu that converts to a set of tabs at a given breakpoint, or offscreen navigation that switches to a menu bar at a given breakpoint. Essentially, one component would have two very different cosmetic treatments, dictated by a media query. My inclination for these two particular examples is just to build a c-navigation component, because at both breakpoints this is essentially what it is doing. But it got me thinking, what about a list of images that converts to a carousel on bigger screens? This seems like an edge case to me, and as long as it is well documented and commented, I think it is perfectly reasonable to create a one-off standalone component for this type of UI, with explicit naming (like c-image-list-to-carousel). Harry Roberts has written about responsive suffixes, which is one way to handle this. His approach is intended more for changes in layout and print styles, rather than shifts of entire components, but I don’t see why the technique couldn’t be applied here. So, essentially you would author classes like this: <ul class="c-image-list@small-screen c-carousel@large-screen"> These would then live in your media queries for the respective screen sizes. Pro tip: You have to escape the .c-image-list\@small-screen { /* styles here */ } I haven’t had much cause to create these type of components, but this feels like a very developer-friendly way to do it, if you have to. The next person coming in should be able to easily understand your intention. 
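If you do go down this route, the matching CSS might look something like the sketch below, purely to show where the escaped selectors live; the breakpoint values and declarations are placeholders.

/* Small screens: the element behaves as an image list */
@media (max-width: 599px) {
  .c-image-list\@small-screen { display: block; }
}

/* Large screens: the same element behaves as a carousel */
@media (min-width: 600px) {
  .c-carousel\@large-screen { display: flex; }
}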
I’m not advocating for names like small-screen and large-screen — they are used here purely for readability. Summary BEM has been an absolute lifesaver for me in my effort to create applications in a modular, component-driven way. I’ve been using it for nearly three years now, and the problems above are the few stumbling blocks I’ve hit along the way. I hope you’ve found this article useful, and if you’ve not given BEM a go yet, I highly encourage you to do so. Note: This is an enhanced version of my original article “Battling BEM: 5 Common Problems and How to Avoid Them,” which appeared on Medium. I’ve added five more common problems, (some of which were asked about in the comments of that article) and I have altered my views on one of the original problems.
https://www.smashingmagazine.com/2016/06/battling-bem-extended-edition-common-problems-and-how-to-avoid-them/
CC-MAIN-2019-26
refinedweb
2,602
60.75
Overview: Archimedes provides functions to work with Tinkerpop/Blueprint objects. Ogre provides functions to work with the Gremlin graph query language. Built on top of Archimedes and Ogre, Titanium provides functions for working with the Titan graph database. Architecture notes: Archimedes and Ogre have been decoupled and don’t depend on one another. Archimedes should work with any database that follows the tinkerpop interfaces. Ogre doesn’t compile Gremlin pipelines until the last minute. The java api for Gremlin is such that you can’t reuse objects for multiple queries and so it is possible to get some unexpected results if you aren’t careful. Lazy compilation of pipelines was added in and the tests pass, but this is mostly undocumented and would be confusing for newcomers. In addition, the pipelines in Ogre are represented as vectors of anonymous functions until they are compiled into a GremlinPipeline object. Giving unique names to all the anonymous functions (like in-anon or dedup-anon) would make debugging queries easier. Because most of the methods for GremlinPipelines are fairly simplistic and don’t need much wrapping, I wrote two macros that creates functions based off signatures for the method. The above suggestion, about naming anonymous functions, would be a two line change for about 30 functions. Titanium currently depends on Archimedes. Titanium uses ztellman/potemkin to import functions into the appropriate namespaces and so users only need to think about interacting with Titanium when using Titan (instead of messing with Archimedes and Titanium at the same time). Main todo’s: No documentation exists for Archimedes. jonase/eastwood has found a bunch of mistakes in the test suites for all libraries such as tests not running or not doing what is commonly expected. Having the tests pass the eastwood test would be smart. Ogre’s lazy compilation is undocumented. Gremlin Side effects are mostly unexplored beyond some basic functions in Ogre. Unsure of how to go about integrating them or when they would even be needed. Ogre is missing tests for random, simple-path, optional, and the emit step of loop. Archimedes and Titanium uses dynamic vars to hold onto a single graph object at a time. This is not smart and makes it hard to do various things without much gain. The humbled branch on archimedes started work on refactoring. Titanium needs to be updated to 0.4.0. The documentation for Titan isn’t great and so a fine line must be walked between making up for the lack of good documentation and doing Think Aurelius job. Main todo’s: What I'm aiming for is to get the 'humbled' branch into a state where we think it might be the next release, then get titanium working against that version. I'll move to latest version of Titan and Blueprints libraries as part of that work.
https://groups.google.com/g/clojure-titanium/c/qtTgP0cGzYw
CC-MAIN-2021-39
refinedweb
474
63.39
Have you ever run into a bug that, no matter how careful you are trying to reproduce it, it only happens sometimes? And then, you think you’ve got it, and finally solved it – and tested a couple of times without any manifestation. How do you know that you have tested enough? Are you sure you were not “lucky” in your tests? In this article we will see how to answer those questions and the math behind it without going into too much detail. This is a pragmatic guide. The Bug The following program is supposed to generate two random 8-bit integer and print them on stdout: #include <stdio.h> #include <fcntl.h> /* Returns -1 if error, other number if ok. */ int get_random_chars(char *r1, char*r2) { int f = open("/dev/urandom", O_RDONLY); if (f < 0) return -1; if (read(f, r1, sizeof(*r1)) < 0) return -1; if (read(f, r2, sizeof(*r2)) < 0) return -1; close(f); return *r1 & *r2; } int main(void) { char r1; char r2; int ret; ret = get_random_chars(&r1, &r2); if (ret < 0) fprintf(stderr, "error"); else printf("%d %d\n", r1, r2); return ret < 0; } On my architecture (Linux on IA-32) it has a bug that makes it print “error” instead of the numbers sometimes. The Model Every time we run the program, the bug can either show up or not. It has a non-deterministic behaviour that requires statistical analysis. We will model a single program run as a Bernoulli trial, with success defined as “seeing the bug”, as that is the event we are interested in. We have the following parameters when using this model: - \(n\): the number of tests made; - \(k\): the number of times the bug was observed in the \(n\) tests; - \(p\): the unknown (and, most of the time, unknowable) probability of seeing the bug. As a Bernoulli trial, the number of errors \(k\) of running the program \(n\) times follows a binomial distribution \(k \sim B(n,p)\). We will use this model to estimate \(p\) and to confirm the hypotheses that the bug no longer exists, after fixing the bug in whichever way we can. By using this model we are implicitly assuming that all our tests are performed independently and identically. In order words: if the bug happens more ofter in one environment, we either test always in that environment or never; if the bug gets more and more frequent the longer the computer is running, we reset the computer after each trial. If we don’t do that, we are effectively estimating the value of \(p\) with trials from different experiments, while in truth each experiment has its own \(p\). We will find a single value anyway, but it has no meaning and can lead us to wrong conclusions. Physical analogy Another way of thinking about the model and the strategy is by creating a physical analogy with a box that has an unknown number of green and red balls: - Bernoulli trial: taking a single ball out of the box and looking at its color – if it is red, we have observed the bug, otherwise we haven’t. We then put the ball back in the box. - \(n\): the total number of trials we have performed. - \(k\): the total number of red balls seen. - \(p\): the total number of red balls in the box divided by the total number of green balls in the box. Some things become clearer when we think about this analogy: - If we open the box and count the balls, we can know \(p\), in contrast with our original problem. - Without opening the box, we can estimate \(p\) by repeating the trial. As \(n\) increases, our estimate for \(p\) improves. 
Mathematically: \[p = \lim_{n\to\infty}\frac{k}{n}\] - Performing the trials in different conditions is like taking balls out of several different boxes. The results tell us nothing about any single box. Estimating \(p\) Before we try fixing anything, we have to know more about the bug, starting by the probability \(p\) of reproducing it. We can estimate this probability by dividing the number of times we see the bug \(k\) by the number of times we tested for it \(n\). Let’s try that with our sample bug: $ ./hasbug 67 -68 $ ./hasbug 79 -101 $ ./hasbug error We know from the source code that \(p=25%\), but let’s pretend that we don’t, as will be the case with practically every non-deterministic bug. We tested 3 times, so \(k=1, n=3 \Rightarrow p \sim 33%\), right? It would be better if we tested more, but how much more, and exactly what would be better? \(p\) precision Let’s go back to our box analogy: imagine that there are 4 balls in the box, one red and three green. That means that \(p = 1/4\). What are the possible results when we test three times? The less we test, the smaller our precision is. Roughly, \(p\) precision will be at most \(1/n\) – in this case, 33%. That’s the step of values we can find for \(p\), and the minimal value for it. Testing more improves the precision of our estimate. \(p\) likelihood Let’s now approach the problem from another angle: if \(p = 1/4\), what are the odds of seeing one error in four tests? Let’s name the 4 balls as 0-red, 1-green, 2-green and 3-green: The table above has all the possible results for getting 4 balls out of the box. That’s \(4^4=256\) rows, generated by this python script. The same script counts the number of red balls in each row, and outputs the following table: That means that, for \(p=1/4\), we see 1 red ball and 3 green balls only 42% of the time when getting out 4 balls. What if \(p = 1/3\) – one red ball and two green balls? We would get the following table: What about \(p = 1/2\)? So, let’s assume that you’ve seen the bug once in 4 trials. What is the value of \(p\)? You know that can happen 42% of the time if \(p=1/4\), but you also know it can happen 39% of the time if \(p=1/3\), and 25% of the time if \(p=1/2\). Which one is it? The graph bellow shows the discrete likelihood for all \(p\) percentual values for getting 1 red and 3 green balls: The fact is that, given the data, the estimate for \(p\) follows a beta distribution \(Beta(k+1, n-k+1) = Beta(2, 4)\) (1) The graph below shows the probability distribution density of \(p\): The R script used to generate the first plot is here, the one used for the second plot is here. Increasing \(n\), narrowing down the interval What happens when we test more? We obviously increase our precision, as it is at most \(1/n\), as we said before – there is no way to estimate that \(p=1/3\) when we only test twice. But there is also another effect: the distribution for \(p\) gets taller and narrower around the observed ratio \(k/n\): Investigation framework So, which value will we use for \(p\)? - The smaller the value of \(p\), the more we have to test to reach a given confidence in the bug solution. - We must, then, choose the probability of error that we want to tolerate, and take the smallest value of \(p\) that we can. A usual value for the probability of error is 5% (2.5% on each side). - That means that we take the value of \(p\) that leaves 2.5% of the area of the density curve out on the left side. Let’s call this value \(p_{min}\). 
- That way, if the observed \(k/n\) remains somewhat constant, \(p_{min}\) will raise, converging to the “real” \(p\) value. - As \(p_{min}\) raises, the amount of testing we have to do after fixing the bug decreases. By using this framework we have direct, visual and tangible incentives to test more. We can objectively measure the potential contribution of each test. In order to calculate \(p_{min}\) with the mentioned properties, we have to solve the following equation: \[\sum_{k=0}^{k}{n\choose{k}}p_{min} ^k(1-p_{min})^{n-k}=\frac{\alpha}{2} \] \(alpha\) here is twice the error we want to tolerate: 5% for an error of 2.5%. That’s not a trivial equation to solve for \(p_{min}\). Fortunately, that’s the formula for the confidence interval of the binomial distribution, and there are a lot of sites that can calculate it: -: \(\alpha\) here is 5%. -: results for \(\alpha\) 1%, 5% and 10%. -: google search. Is the bug fixed? So, you have tested a lot and calculated \(p_{min}\). The next step is fixing the bug. After fixing the bug, you will want to test again, in order to confirm that the bug is fixed. How much testing is enough testing? Let’s say that \(t\) is the number of times we test the bug after it is fixed. Then, if our fix is not effective and the bug still presents itself with a probability greater than the \(p_{min}\) that we calculated, the probability of not seeing the bug after \(t\) tests is: \[\alpha = (1-p_{min})^t \] Here, \(\alpha\) is also the probability of making a type I error, while \(1 – \alpha\) is the statistical significance of our tests. We now have two options: - arbitrarily determining a standard statistical significance and testing enough times to assert it. - test as much as we can and report the achieved statistical significance. Both options are valid. The first one is not always feasible, as the cost of each trial can be high in time and/or other kind of resources. The standard statistical significance in the industry is 5%, we recommend either that or less. Formally, this is very similar to a statistical hypothesis testing. Back to the Bug Testing 20 times This file has the results found after running our program 5000 times. We must never throw out data, but let’s pretend that we have tested our program only 20 times. The observed \(k/n\) ration and the calculated \(p_{min}\) evolved as shown in the following graph: After those 20 tests, our \(p_{min}\) is about 12%. Suppose that we fix the bug and test it again. The following graph shows the statistical significance corresponding to the number of tests we do: In words: we have to test 24 times after fixing the bug to reach 95% statistical significance, and 35 to reach 99%. Now, what happens if we test more before fixing the bug? Testing 5000 times Let’s now use all the results and assume that we tested 5000 times before fixing the bug. The graph bellow shows \(k/n\) and \(p_{min}\): After those 5000 tests, our \(p_{min}\) is about 23% – much closer to the real \(p\). The following graph shows the statistical significance corresponding to the number of tests we do after fixing the bug: We can see in that graph that after about 11 tests we reach 95%, and after about 16 we get to 99%. As we have tested more before fixing the bug, we found a higher \(p_{min}\), and that allowed us to test less after fixing the bug. Optimal testing We have seen that we decrease \(t\) as we increase \(n\), as that can potentially increases our lower estimate for \(p\). 
Of course, that value can decrease as we test, but that means that we “got lucky” in the first trials and we are getting to know the bug better – the estimate is approaching the real value in a non-deterministic way, after all. But, how much should we test before fixing the bug? Which value is an ideal value for \(n\)? To define an optimal value for \(n\), we will minimize the sum \(n+t\). This objective gives us the benefit of minimizing the total amount of testing without compromising our guarantees. Minimizing the testing can be fundamental if each test costs significant time and/or resources. The graph bellow shows us the evolution of the value of \(t\) and \(t+n\) using the data we generated for our bug: We can see clearly that there are some low values of \(n\) and \(t\) that give us the guarantees we need. Those values are \(n = 15\) and \(t = 24\), which gives us \(t+n = 39\). While you can use this technique to minimize the total number of tests performed (even more so when testing is expensive), testing more is always a good thing, as it always improves our guarantee, be it in \(n\) by providing us with a better \(p\) or in \(t\) by increasing the statistical significance of the conclusion that the bug is fixed. So, before fixing the bug, test until you see the bug at least once, and then at least the amount specified by this technique – but also test more if you can, there is no upper bound, specially after fixing the bug. You can then report a higher confidence in the solution. Conclusions When a programmer finds a bug that behaves in a non-deterministic way, he knows he should test enough to know more about the bug, and then even more after fixing it. In this article we have presented a framework that provides criteria to define numerically how much testing is “enough” and “even more.” The same technique also provides a method to objectively measure the guarantee that the amount of testing performed provides, when it is not possible to test “enough.” We have also provided a real example (even though the bug itself is artificial) where the framework is applied. As usual, the source code of this page (R scripts, etc) can be found and downloaded...
http://www.r-bloggers.com/probabilistic-bug-hunting/
CC-MAIN-2016-22
refinedweb
2,323
68.81
PyPE 2.7.2 reviewDownload PyPE (Python Programmers Editor) is a lightweight but powerful editor PyPE (Python Programmers Editor) is a lightweight but powerful editor. Tools for the new and seasoned user alike are included out of the box, including syntax coloring, multiple open documents with tabs, per-document browsable source trees, and many others. The included wxProject.py was modified from the original version distributed with wxPython in order to support being a drag and drop source for files. Python Programmers Editor does not seem to work when run with a version of wxPython with unicode support. The beginnings of PyPE was written from 10:30PM on the 2nd of July through 10:30PM on the 3rd of July. Additional features were put together on the 4th of July along with some bug fixing and more testing for version 1.0. Truthfully, I've been using it to edit itself since the morning of the 3rd of July, and believe it is pretty much feature-complete (in terms of standard Python source editing). There are a few more things I think it would be nice to have, and they will be added in good time. On the most part, this piece of software should work exactly the way you expect it to. That is the way I wrote it. As a result, you don't get much help in using it (mostly because I am lazy). When questions are asked, I'll add the question and answer into the FAQ, which is at the end of this document. the built-in features, and this is likely as much of a learning experience for me as you. Requirements: PyPE has only been tested on Python 2.3 and wxPython 2.4.2.4. It should work on later versions of Python and wxPython...unless the namespace for wxPython changes radically. What's New in This Release: changed) files with at least 20,000 lines or at least 4 megs will no longer have their bookmark or fold states saved on close. This significantly reduces the time to close (and sometimes open) large files. (fixed) code that uses MainWindow.exceptDialog() will now properly create a dialog. (fixed) wxPython 2.7 incompatability in the Browsable directory tree in wxPython 2.7 - 2.7.1.2 . (removed) some unnecessary debug printouts. (changed) the 'Search' tab to better handle narrower layouts. (fixed) the 'Ignore .subdirs' option in the 'Search' tab now gets disabled along with 'Search Subdirectories' when applicable. (fixed) error when opening changelog.txt on unicode-enabled installations. (fixed) spell checker for unicode-enabled platforms. (fixed) case where various checkmarks in the Documents menu wouldn't update when a document was opened, closed, or created on some platforms. (added) --font= command line option for choosing the font that the editor will use. PyPE 2.7.2 keywords
https://nixbit.com/software/pype-review/
CC-MAIN-2021-39
refinedweb
474
66.33
Hello I need to crop an image on Python anywhere, which means I need the Image module. Could you include it please. Thanks Robert Hello I need to crop an image on Python anywhere, which means I need the Image module. Could you include it please. Thanks Robert To answer my own post... I see I can get it by doing from PIL import Image Please ignore me... Actually, you've hit an interesting problem there. PIL is a funny module, with a slightly schizophrenic attitude to how you should import things from it. If you look at their handbook, the tutorial pages say that you should to this: import Image im = Image.open("lena.ppm") ...but the reference pages say that you should do this: from PIL import Image im = Image.open("bride.jpg") This has led to a lot of confusion for a lot of people, and there are multiple packages of PIL that use different import patterns. The from PIL import Image pattern seems to be the most generally-popular one -- and it seems cleaner because it makes it clear which package's Image class you want to use. So we use a PIL install that works that way. That was freaky. I was reading the links in giles post above to see where they go. Being that I'm getting so accustomed to reading pythonanywhere.com at first I read pythonware as pythonanywhere...☺ The tricks our minds will play on us if we're not careful...
https://www.pythonanywhere.com/forums/topic/165/
CC-MAIN-2018-26
refinedweb
250
75
Introduction One of my very first memories of starting out as a developer was that I once received code in C++ from one of my dad's friends. This code was to create an analog clock (you know, the old round ones without digital numbers). It looked beautiful, but I did not understand any of the code. There and then, I decided that I wanted to be a decent enough developer to be able to to this one day. With today's article, I will demonstrate how to create an analog clock in VB. Drawing in VB I have always had a fascination with the Drawing namespaces in VB. Here are a few articles covering various drawing options in VB.I have always had a fascination with the Drawing namespaces in VB. Here are a few articles covering various drawing options in VB. - Creating Your Own Drawing Application with Visual Basic .NET, Part 1 - Creating your own Tetris game with VB.NET - Creating Your Own Hidden Object Game with VB.NET Part 1 - The Basics - Creating a Video Slot game with VB.NET - Creating a Themed Word Search Game with VB - Creating a Tile-Matching Game in VB Okay, I am sorry.... You need some math.... The good news is that I used to suck at math, mainly due to teachers not showing any interest in helping me. This was a long time ago when I was actually forced to take math. Yep, I was forced. I initially studied four different languages in high school, due to the subjects not being popular enough. This caused me to be two whole semesters behind in math and economics and no one wanted to help me catch up. Luckily, I figured everything out on my own. If my math teachers could see me today.... Anyway, all you need to know is the following: Our Project There is no design needed, just a simple Windows Forms project with an empty form. We will create the needed controls dynamically. 
Code Import the Drawing Namespace: Imports System.Drawing.Drawing2D Declare your variables: Const Convert As Double = Math.PI / 180 Const SecRadius As Double = 185 Const MinRadius As Double = 180 Const HrRadius As Double = 155 Dim SecAngle As Double Dim MinAngle As Double Dim HrAngle As Double Dim SecX As Single = 220 Dim SecY As Single = 20 Dim MinX As Single = 220 Dim MinY As Single = 20 Dim HrX As Single = 220 Dim HrY As Single = 20 Dim hrs, min, value As Integer Dim TimeString As String Dim WithEvents tmrClock As New Timer Dim WithEvents lblPanel As New Label Dim lblTB As New Label Dim StartPoint(60) As PointF Dim EndPoint(60) As PointF Dim NumberPoint() As PointF = {New PointF(285, 50), New PointF(350, 115), New PointF(376, 203), New PointF(350, 290), New PointF(285, 350), New PointF(205, 366), New PointF(125, 350), New PointF(60, 290), New PointF(38, 203), New PointF(55, 120), New PointF(112, 59), New PointF(196, 36)} 'Create the Pens Dim GreenPen As Pen = New Pen(Color.Green, 4) Dim BluePen As Pen = New Pen(Color.Blue, 4) Dim OrangePen As Pen = New Pen(Color.DarkOrange, 5) Dim BlackPen As Pen = New Pen(Color.Black, 6) Dim myPen As New Pen(Color.DarkBlue, 8) 'Create the Fonts Dim NumberFont As New Font("Arial", 25, FontStyle.Bold) Dim ClockFont As New Font("Arial", 18, FontStyle.Bold) 'Create the Bitmap to draw the clock face Dim ClockFace As New Bitmap(445, 445) Dim gr As Graphics = Graphics.FromImage(ClockFace) Add the following two events to set up the drawing operations: Private Sub Form1_Load(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles MyBase.Load BluePen.SetLineCap(LineCap.Round, LineCap.ArrowAnchor, _ DashCap.Flat) OrangePen.SetLineCap(LineCap.Round, LineCap.ArrowAnchor, _ DashCap.Flat) BlackPen.SetLineCap(LineCap.Round, LineCap.ArrowAnchor, _ DashCap.Flat) DoubleBuffered = True Me.Size = New Size(570, 470) Me.FormBorderStyle = Windows.Forms.FormBorderStyle.None Me.TransparencyKey = SystemColors.Control Me.CenterToScreen() CalculatePerimeter() DrawFace() tmrClock.Interval = 990 tmrClock.Start() End Sub Protected Overrides Sub OnPaint(ByVal e As _ System.Windows.Forms.PaintEventArgs) e.Graphics.SmoothingMode = SmoothingMode.HighQuality 'Draw Clock Background e.Graphics.DrawImage(ClockFace, Point.Empty) 'Draw Digital Time e.Graphics.DrawString(TimeString, ClockFont, _ Brushes.White, 170, 260) 'Draw Hands e.Graphics.DrawLine(BlackPen, 220, 220, HrX, HrY) e.Graphics.FillEllipse(Brushes.Black, 210, 210, 20, 20) e.Graphics.DrawLine(OrangePen, 220, 220, MinX, MinY) e.Graphics.FillEllipse(Brushes.DarkOrange, 212, 212, 16, 16) e.Graphics.DrawLine(BluePen, 220, 220, SecX, SecY) e.Graphics.FillEllipse(Brushes.Blue, 215, 215, 10, 10) End Sub Add the DrawFace Sub that is responsible for the drawing the clock's face: Sub DrawFace() gr.SmoothingMode = SmoothingMode.HighQuality 'Draw Clock Background gr.FillEllipse(Brushes.Beige, 20, 20, 400, 400) gr.DrawEllipse(GreenPen, 20, 20, 400, 400) gr.DrawEllipse(Pens.Red, 120, 120, 200, 200) 'Draw Increments around cicumferance For I As Integer = 1 To 60 gr.DrawLine(GreenPen, StartPoint(I), _ EndPoint(I)) Next 'Draw Numbers For I As Integer = 1 To 12 gr.DrawString(I.ToString, NumberFont, _ Brushes.Black, NumberPoint(I - 1)) Next 'Draw Digital Clock Background gr.FillRectangle(Brushes.DarkBlue, _ 170, 260, 100, 30) myPen.LineJoin = LineJoin.Round gr.DrawRectangle(myPen, 170, 260, 100, 30) End Sub Add the CaclulatePerimter Sub: Sub CalculatePerimeter() Dim X, Y As Integer Dim radius As Integer For I As Integer = 1 To 60 If I 
Mod 5 = 0 Then radius = 182 Else radius = 190 End If 'Calculate Start Point X = CInt(radius * Math.Cos((90 - I * 6) * _ Convert)) + 220 Y = 220 - CInt(radius * Math.Sin((90 - I * 6) * _ Convert)) StartPoint(I) = New PointF(X, Y) 'Calculate End Point X = CInt(200 * Math.Cos((90 - I * 6) * _ Convert)) + 220 Y = 220 - CInt(200 * Math.Sin((90 - I * 6) * _ Convert)) EndPoint(I) = New PointF(X, Y) Next End Sub Finally, to make it all work, add the Timer's tick event: Private Sub tmrClock_Tick(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles tmrClock.Tick TimeString = Now.ToString("HH:mm:ss") 'Set The Angle of the Second, Minute and Hour hand 'according to the time SecAngle = (Now.Second * 6) MinAngle = (Now.Minute + Now.Second / 60) * 6 HrAngle = (Now.Hour + Now.Minute / 60) * 30 'Get the X,Y co-ordinates of the end point of each hand SecX = CInt(SecRadius * Math.Cos((90 - SecAngle) * _ Convert)) + 220 SecY = 220 - CInt(SecRadius * Math.Sin((90 - SecAngle) * _ Convert)) MinX = CInt(MinRadius * Math.Cos((90 - MinAngle) * _ Convert)) + 220 MinY = 220 - CInt(MinRadius * Math.Sin((90 - MinAngle) * _ Convert)) HrX = CInt(HrRadius * Math.Cos((90 - HrAngle) * _ Convert)) + 220 HrY = 220 - CInt(HrRadius * Math.Sin((90 - HrAngle) * _ Convert)) Refresh() End Sub Conclusion Graphics can be quite fun, once you get the hang of them. I am including a working sample with this article. Happy drawing!
http://mobile.codeguru.com/columns/vb/building-an-analog-clock-in-visual-basic.html
CC-MAIN-2017-26
refinedweb
1,147
59.6
Using Google Guava (Google Commons), is there a way to merge two equally sized lists into one list, with the new list containing composite objects of the two input lists? Example: public class Person { public final String name; public final int age; public Person(String name, int age) { this.name = name; this.age = age; } public String toString() { return "(" + name + ", " + age + ")"; } } List<String> names = Lists.newArrayList("Alice", "Bob", "Charles"); List<Integer> ages = Lists.newArrayList(42, 27, 31); List<Person> persons = transform with a function that converts (String, Integer) to Person System.out.println(persons); [(Alice, 42), (Bob, 27), (Charles, 31)] Looks like this is not currently in Guava, but is a desired feature. See this guava-libraries discussion this github issue, in particular Iterators.zip().
https://codedump.io/share/wYftxoz4XgoE/1/google-guava-quotzipquot-two-lists
CC-MAIN-2016-44
refinedweb
124
57.67
emcmanus's blog Excellent article on Virtual MBeans My colleague Nick Stephen has written an excellent and detailed article about Virtual MBeans. Reimplementing the RMI protocol In mylast entry, I mentioned that I had reimplemented the RMI registry portably, before discovering that there was a much simpler solution to the security problem I was addressing. Here's the reimplementation for what it's worth. Securing the RMI registry. Multihomed Computers and RMI A multihomed computer is one that has more than one network interface. Problems arise when you export an RMI object from such a computer. Here's why, and some ways you can work around the problem. The Spring Experience 2006 (2) I'm writing this in what I used to think was the world's nastiest airport, where I have a five-hour stopover. I'm somewhat revising my opinion of the airport because I discovered a "Quiet Seating Area" with real seats and real quiet. A bit like a business-class lounge but for the plebs. The Spring Experience 2006 I'm at The Spring Experience 2006 in Hollywood, Florida (between Miami and Fort Lauderdale) where I've been invited to speak. A helper class for performance statistics I recently wanted to add some performance measurements to an application. To avoid duplicating code everywhere I needed to make measurements, I coded up a small helper class. MBeanInfo.equals: who's asking? Defining an equals(Object) method in a public class is not always straightforward. One reason it might not be is that the answer to the question "are these objects equal?" might be "who's asking?". Notes: A real example of a Dynamic MBean The JMX API includes the possibility to create "Dynamic MBeans", whose management interface is determined at run time. When might that be useful? Here's an example.
https://www.java.net/blogs/emcmanus?page=4
CC-MAIN-2015-35
refinedweb
304
56.86
Laravel Code Generator based on MySQL Database

Do you have a well-structured database and want to build a Laravel application on top of it? With these tools you can generate Models that have the necessary methods and properties, Request classes with validation rules, routes generated from controller methods and their parameters, and full-featured forms with validation error messages — and more — each with a single command. So let's start. See demo code and slides.

composer require digitaldream/laracrud --dev

This version is ready to use in Laravel 5.3 and above. If you are using 5.2, please have a look at config/laracrud.php and adjust the folder paths.

Add this line to the providers array in config/app.php (not needed if you are using Laravel 5.5 or greater):

LaraCrud\LaraCrudServiceProvider::class

Then run

php artisan vendor:publish --provider="LaraCrud\LaraCrudServiceProvider"

Then you can see the new commands by running 'php artisan':

laracrud:model {tableName} {name?} {--on=} {--off=}: Create model based on table
laracrud:request {Model} {name?} {--resource=} {--controller=} {--api}: Create Request class/es based on table
laracrud:controller {Model} {name?} {--parent=} {--only=} {--api}: Create Controller class based on Model
laracrud:mvc {table} {--api}: Run the above commands in one place
laracrud:route {controller} {--api}: Create routes based on controller methods
laracrud:view {Model} {--page=(index|create|edit|show|form|table|panel|modal)} {--type=} {--name=} {--controller=}
laracrud:migration {table}: Create a migration file based on table structure. It is the opposite of normal migration file creation in Laravel
laracrud:policy {model} {--controller=} {--name=}
laracrud:package {--name=}
laracrud:transformer {model} {name?}: Create a Dingo API transformer for a model
laracrud:test {controller} {--api}: Create test methods for each method of a controller

N.B.: the --api option will generate API resources (Controller, Request, Route, Test). Dingo API compatible code will be generated. See the API documentation.

There are some good practices for models in Laravel: use scopes to define queries; define fillable, dates, casts, etc.; also define relations, setAttribute and getAttribute for doing work before and after model save and fetch. We are going to create all of this automatically by reading the table structure and its relations to other tables.

php artisan laracrud:model users

By default the model name will be based on the table name, but a model name can be specified as a second parameter, like below:

php artisan laracrud:model users MyUser

A well-structured table validates everything before inserting. You cannot insert an illegal date into a birth_date column if its data type is set to date. So if we have this logic set on the table, why should we write it in the Request again? Let's use this table logic to create a request class in Laravel.

php artisan laracrud:request MyUser

Here MyUser is an Eloquent model. From LaraCrud version 4.* this command accepts a model name instead of a table. Like the model name, we can also specify a custom request name:

php artisan laracrud:request User RegisterRequest
php artisan laracrud:request User --resource=index,show,create,update,destroy

It will create a folder users in the app/Http/Requests folder and create these request classes. Sometimes you may like to create an individual request class for each of your controller methods; then:
```php
php artisan laracrud:request User --controller=UserController
php artisan laracrud:request User --controller=UserController --api // this will generate Requests for API usage
```

It will read your controller and create request classes for your public methods.

Create Controller

```php
php artisan laracrud:controller User
// Or give a controller name.
php artisan laracrud:controller User MyUserController
// Or we can give a sub namespace
php artisan laracrud:controller User User/UserController // It will create a folder User in controllers
php artisan laracrud:controller Comment --parent=Post // it will create a sub resource CommentController
```

This will create a controller which has create, edit, save and delete methods with code. It also handles your relation synchronization.

A typical form represents a database table. E.g. a registration form has all the input fields which are necessary for the users table. Most of the time we use Bootstrap to generate a form. It has error field highlighting if validation fails, and it also displays values. This all can be done by

```php
php artisan laracrud:view User --page=form
php artisan laracrud:view User --page=index --type=panel // There are three types of layout for the index page: panel, table and tabpan
php artisan laracrud:view User --controller=UserController // Create all the views which are not created yet for this controller
```

Here **User** is an Eloquent model. From LaraCrud version 4.* this command accepts a model name instead of a table. Like the model name, we can also specify a custom request name.

Create Route

Routes are the most vital part of a Laravel application. We create routes from a controller's public methods and parameters. Let's hand this work to the route command.

```php
php artisan laracrud:route UserController
php artisan laracrud:route UserController --api // generate api routes for this controller
```

If you have some routes already defined, do not worry. It will only create the routes which are not defined yet. Please use a forward slash (/) for sub namespaces. For example,

```php
php artisan laracrud:route Auth/AuthController
```

Laravel has a default policy generator. This works the same, with one extra feature: it creates policy methods based on the controller's public methods.

```php
php artisan laracrud:policy User // will create policy class with basic methods
php artisan laracrud:policy User --controller=UserController // create methods based on Controller public methods
```

Packages give us the opportunity to create/use components in our existing application. That makes our code reusable. A Laravel package has a similar structure to a Laravel application.

```php
php artisan laracrud:package Hello
```

This will create a folder with the same structure as a Laravel application in your /packages folder. See Package documentation. Video tutorial

We need to test our route endpoints. To create a test class based on a controller, do the following

```php
php artisan laracrud:test UserController
// or to make an api test just pass --api like below
php artisan laracrud:test UserController --api
```

Transformers are a vital part of Dingo API. To expose a model to an api endpoint, a Transformer mediates between the api and the model.

```php
php artisan laracrud:transformer User
```

If we need all of the commands at once, then just run

```php
php artisan laracrud:mvc users
php artisan laracrud:mvc users --api // create all the API related resources
```

It will create Model, Request, Controller, and View. Then you just need to run the route command to create routes. Sometimes we may need to create a migration file from a table. Then this command will be useful.
It will generate all the necessary code for your migration files, so your migration file is ready to use.

```php
php artisan laracrud:migration users
```

Coding style differs from developer to developer, so you can control how your code will be generated. Code templates are organized by folder in resources/vendor/laracrud/templates. Go there and change the style; after that your code will be generated by reading these files. Please do not remove or change the @@[email protected]@ markers — they will be replaced by the application. It is recommended to take a look at the generated file before using it.

Like my work? If so, hire me on upwork
https://xscode.com/digitaldreams/laracrud
CC-MAIN-2021-10
refinedweb
1,194
56.45
D_nx = nx.petersen_graph() g_dgl = dgl.DGLGraph(g_nx) import matplotlib.pyplot as plt plt.subplot(121) nx.draw(g_nx, with_labels=True) plt.subplot(122) nx.draw(g_dgl.to_networkx(), with_labels=True) plt.show() There are many ways to construct a DGLGraph. Below are the allowed data types ordered by our recommendataion. - A pair of arrays (u, v)storing the source and destination nodes respectively. They can be numpy arrays or tensor objects from the backend framework. scipysparse matrix representing the adjacency matrix of the graph to be constructed. networkxgraph object. - A list of edges in the form of integer pairs. The examples below construct the same star graph via different methods. DGLGraph nodes are a consecutive range of integers between 0 and number_of_nodes(). DGLGraph edges are in order of their additions. Note that edges are accessed in much the same way as nodes, with one extra feature: edge broadcasting. import torch as th import numpy as np import scipy.sparse as spp # Create a star graph from a pair of arrays (using ``numpy.array`` works too). u = th.tensor([0, 0, 0, 0, 0]) v = th.tensor([1, 2, 3, 4, 5]) star1 = dgl.DGLGraph((u, v)) # Create the same graph in one go! Essentially, if one of the arrays is a scalar, # the value is automatically broadcasted to match the length of the other array # -- a feature called *edge broadcasting*. start2 = dgl.DGLGraph((0, v)) # Create the same graph from a scipy sparse matrix (using ``scipy.sparse.csr_matrix`` works too). adj = spp.coo_matrix((np.ones(len(u)), (u.numpy(), v.numpy()))) star3 = dgl.DGLGraph(adj) You can also create a graph by progressively adding more nodes and edges. Although it is not as efficient as the above constructors, it is suitable for applications where the graph cannot be constructed in one shot. g = dgl.DGLGraph() g.add_nodes(10) # A couple edges one-by-one for i in range(1, 4): g.add_edge(i, 0) # A few more with a paired list src = list(range(5, 8)); dst = [0]*3 g.add_edges(src, dst) # finish with a pair of tensors src = th.tensor([8, 9]); dst = th.tensor([0, 0]) g.add_edges(src, dst) # Edge broadcasting will do star graph in one go! g.clear(); g.add_nodes(10) src = th.tensor(list(range(1, 10))); g.add_edges(src, 0) # Visualize the graph. nx.draw(g.to_networkx(), with_labels=True) plt.show() Assigning a feature¶ You can also assign features to nodes and edges of a DGLGraph. The features are represented as dictionary of names (strings) and tensors, called fields. The following code snippet assigns each node a vector (len=3). Note DGL aims to be framework-agnostic, and currently it supports PyTorch and MXNet tensors. The following examples use PyTorch only. import dgl import torch as th x = th.randn(10, 3) g.ndata['x'] = x ndata is a syntax sugar to access the feature data of all nodes. To get the features of some particular nodes, slice out the corresponding rows. g.ndata['x'][0] = th.zeros(1, 3) g.ndata['x'][[0, 1, 2]] = th.zeros(3, 3) g.ndata['x'][th.tensor([0, 1, 2])] = th.randn((3, 3)) Assigning edge features is similar to that of node features, except that you can also do it by specifying endpoints of the edges. g.edata['w'] = th.randn(9, 2) # Access edge set with IDs in integer, list, or integer tensor g.edata['w'][1] = th.randn(1, 2) g.edata['w'][[0, 1, 2]] = th.zeros(3, 2) g.edata['w'][th.tensor([0, 1, 2])] = th.zeros(3, 2) # You can get the edge ids by giving endpoints, which are useful for accessing the features. 
g.edata['w'][g.edge_id(1, 0)] = th.ones(1, 2) # edge 1 -> 0 g.edata['w'][g.edge_ids([1, 2, 3], [0, 0, 0])] = th.ones(3, 2) # edges [1, 2, 3] -> 0 # Use edge broadcasting whenever applicable. g.edata['w'][g.edge_ids([1, 2, 3], 0)] = th.ones(3, 2) # edges [1, 2, 3] -> 0 After assignments, each node or edge field will be associated with a scheme containing the shape and data type (dtype) of its field value. print(g.node_attr_schemes()) g.ndata['x'] = th.zeros((10, 4)) print(g.node_attr_schemes()) Out: {'x': Scheme(shape=(3,), dtype=torch.float32)} {'x': Scheme(shape=(4,), dtype=torch.float32)} You can also remove node or edge states from the graph. This is particularly useful to save memory during inference. g.ndata.pop('x') g.edata.pop('w') Working with multigraphs¶ Many graph applications need parallel edges, which class:DGLGraph supports by default. g_multi = dgl.DGLGraph() g_multi.add_nodes(10) g_multi.ndata['x'] = th.randn(10, 2) g_multi.add_edges(list(range(1, 10)), 0) g_multi.add_edge(1, 0) # two edges on 1->0 g_multi.edata['w'] = th.randn(10, 2) g_multi.edges[1].data['w'] = th.zeros(1, 2) print(g_multi.edges()) Out: (tensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 1]), tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])) An edge in multigraph cannot be uniquely identified by using its incident nodes \(u\) and \(v\); query their edge IDs use edge_id interface. eid_10 = g_multi.edge_id(1, 0, return_array=True) g_multi.edges[eid_10].data['w'] = th.ones(len(eid_10), 2) print(g_multi.edata['w']) Out: tensor([[ 1.0000, 1.0000], [ 0.0000, 0.0000], [-0.3486, -0.7761], [ 1.1867, -0.2986], [ 1.3186, 0.4512], [-0.6735, 1.1000], [-0.7579, -0.5126], [ 0.6677, 1.6272], [-1.3767, -0.3174], [ 1.0000, 1.0000]]) Note - Nodes and edges can be added but not removed. - Updating a feature of different schemes raises the risk of error on individual nodes (or node subset). Next steps¶ In the next tutorial you learn the DGL message passing interface by implementing PageRank. Total running time of the script: ( 0 minutes 0.524 seconds) Gallery generated by Sphinx-Gallery
https://docs.dgl.ai/tutorials/basics/2_basics.html
CC-MAIN-2020-29
refinedweb
984
62.14
Data::Printer::Filter - Create powerful stand-alone filters for Data::Printer Create your filter module: package Data::Printer::Filter::MyFilter; use strict; use warnings; use Data::Printer::Filter; # type filter filter 'SCALAR', sub { my ($ref, $properties) = @_; my $val = $$ref; if ($val > 100) { return 'too big!!'; } else { return $val; } }; # you can also filter objects of any class filter 'Some::Class', sub { my ($object, $properties) = @_; return $ref->some_method; # or whatever # see 'HELPER FUNCTIONS' below for # customization options, including # proper indentation. }; 1; Later, in your main code: use Data::Printer { filters => { -external => [ 'MyFilter', 'OtherFilter' ], # you can still add regular (inline) filters SCALAR => sub { ... } }, }; We are still experimenting with the standalone filter syntax, so filters written like so may break in the future without any warning! If you care, or have any suggestions, please drop me a line via RT, email, or find me ('garu') on irc.perl.org. You have been warned. Data::Printer lets you add custom filters to display data structures and objects, by either specifying them during "use", in the .dataprinter configuration file, or even in runtime customizations. But there are times when you may want to group similar filters, or make them standalone in order to be easily reutilized in other environments and applications, or even upload them to CPAN so other people can benefit from a cleaner - and clearer - object/structure dump. This is where Data::Printer::Filter comes in. It exports into your package's namespace the "filter" function, along with some helpers to create custom filter packages. Data::Printer recognizes all filters in the Data::Printer::Filter::* namespace. You can load them by specifying them in the '-external' filter list (note the dash, to avoid clashing with a potential class or pragma labelled 'external'): use Data::Printer { filters => { -external => 'MyFilter', }, }; This will load all filters defined by the Data::Printer::Filter::MyFilter module. If there are more than one filter, use an array reference instead: -external => [ 'MyFilter', 'MyOtherFilter' ] IMPORTANT: THIS WAY OF LOADING EXTERNAL PLUGINS IS EXPERIMENTAL AND SUBJECT TO SUDDEN CHANGE! IF YOU CARE, AND/OR HAVE IDEAS ON A BETTER API, PLEASE LET US KNOW The filter function creates a new filter for TYPE, using the given subref. The subref receives two arguments: the item itself - be it an object or a reference to a standard Perl type - and the properties in effect (so you can inspect for certain options, etc). The subroutine is expected to return a string containing whatever it wants Data::Printer to display on screen. This is the same as Data::Printer's p(), only you can't rename it. You can use this to throw some data structures back at Data::Printer and use the results in your own return string - like when manipulating hashes or arrays. This helper returns a string using the linebreak as specified by the caller's settings. For instance, it provides the proper indentation level of spaces for you and considers the multiline option to avoid line breakage. In other words, if you do this: filter ARRAY => { my ($ref, $p) = @_; my $string = "Hey!! I got this array:"; foreach my $val (@$ref) { $string .= newline . p($val); } return $string; }; ... your p($val) returns will be properly indented, vertically aligned to your level of the data structure, while simply using "\n" would just make things messy if your structure has more than one level of depth. 
These two helpers let you increase/decrease the indentation level of your data display, for newline() and nested p() calls inside your filters. For example, the filter defined in the newline explanation above would show the values on the same (vertically aligned) level as the "I got this array" message. If you wanted your array to be one level further deep, you could use this instead: filter ARRAY => { my ($ref, $p) = @_; my $string = "Hey!! I got this array:"; indent; foreach my $val (@$ref) { $string .= newline . p($val); } outdent; return $string; }; You can use Term::ANSIColor's colored()' for string colorization. Data::Printer will automatically enable/disable colors for you. This is meant to provide a complete list of standalone filters for Data::Printer available on CPAN. If you write one, please put it under the Data::Printer::Filter::* namespace, and drop me a line so I can add it to this list! Data::Printer::Filter::DB provides filters for Database objects. So far only DBI is covered, but more to come! Data::Printer::Filter::DateTime pretty-prints several date and time objects (not just DateTime) for you on the fly, including duration/delta objects! Data::Printer::Filter::Digest displays a string containing the hash of the actual message digest instead of the object. Works on Digest::MD5, Digest::SHA, any digest class that inherits from Digest::base and some others that implement their own thing! Data::Printer::Filter::ClassicRegex changes the way Data::Printer dumps regular expressions, doing it the classic qr// way that got popular in Data::Dumper. Data::Printer::Filter::JSON, by Nuba Princigalli, lets you see your JSON structures replacing boolean objects with simple true/false strings! Data::Printer::Filter::URI filters through several URI manipulation classes and displays the URI as a colored string. A very nice addition by Stanislaw Pusep (SYP). Data::Printer::Filter::PDL, by Zakariyya Mughal, lets you quickly see the relevant contents of a PDL variable. As of version 0.13, standalone filters let you stack together filters for the same type or class. Filters of the same type are called in order, until one of them returns a string. This lets you have several filters inspecting the same given value until one of them decides to actually treat it somehow. If your filter caught a value and you don't want to treat it, simply return and the next filter will be called. If there are no other filters for that particular class or type available, the standard Data::Printer calls will be used. For example: filter SCALAR => sub { my ($ref, $properties) = @_; if ( Scalar::Util::looks_like_number $$ref ) { return sprintf "%.8d", $$ref; } return; # lets the other SCALAR filter have a go }; filter SCALAR => sub { my ($ref, $properties) = @_; return qq["$$ref"]; }; Note that this "filter stack" is not possible on inline filters, since it's a hash and keys with the same name are overwritten. Instead, you can pass them as an array reference: use Data::Printer filters => { SCALAR => [ sub { ... }, sub { ... } ], }; This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See perlartistic.
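As a rough illustration of the helpers described above (newline, indent/outdent, and Term::ANSIColor's colored), here is a minimal filter sketch. The class My::Point and its x/y accessors are invented for the example and are not part of Data::Printer.

package Data::Printer::Filter::Point;
use strict;
use warnings;
use Data::Printer::Filter;
use Term::ANSIColor qw(colored);

# Filter for a hypothetical My::Point class with x() and y() accessors.
filter 'My::Point', sub {
    my ($obj, $properties) = @_;
    my $string = colored('Point:', 'bold blue');
    indent;                                   # nest the fields one level deeper
    $string .= newline . 'x = ' . $obj->x;    # newline keeps indentation aligned
    $string .= newline . 'y = ' . $obj->y;
    outdent;
    return $string;
};

1;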
http://search.cpan.org/dist/Data-Printer/lib/Data/Printer/Filter.pm
CC-MAIN-2015-32
refinedweb
1,085
52.29
Type.MakeGenericType Method Assembly: mscorlib (in mscorlib.dll) Parameters - typeArguments An array of types to be substituted for the type parameters of the current generic type. Return ValueA Type representing the constructed type formed by substituting the elements of typeArguments for the type parameters of the current generic type. The MakeGenericType method allows you to write code that assigns specific types to the type parameters of a generic type definition, thus creating a Type object that represents a particular constructed type. You can use this Type object to create run-time instances of the constructed type. Types constructed with MakeGenericType can be open, that is, some of their type arguments can be type parameters of enclosing generic methods or types. You might use such open constructed types when you emit dynamic assemblies. For example, consider the classes Base and Derived in the following code. To generate Derived in a dynamic assembly, it is necessary to construct its base type. To do this, call the MakeGenericType method on a Type object representing the class Base, using the generic type arguments Int32 and the type parameter V from Derived. Because types and generic type parameters are both represented by Type objects, an array containing both can be passed to the MakeGenericType method. The Type object returned by MakeGenericType is the same as the Type obtained by calling the GetType method of the resulting constructed type, or the GetType method of any constructed type that was created from the same generic type definition using the same type arguments. For a list of the invariant conditions for terms used in generic reflection, see the IsGenericType property remarks. Nested Types If a generic type is defined using C#, C++, or Visual Basic, then its nested types are all generic. This is true even if the nested types have no type parameters of their own, because all three languages include the type parameters of enclosing types in the type parameter lists of nested types. Consider the following classes: The type parameter list of the nested class Inner has two type parameters, T and U, the first of which is the type parameter of its enclosing class. Similarly, the type parameter list of the nested class Innermost1 has three type parameters, T, U, and V, with T and U coming from its enclosing classes. The nested class Innermost2 has two type parameters, T and U, which come from its enclosing classes. If the parameter list of the enclosing type has more than one type parameter, all the type parameters in order are included in the type parameter list of the nested type. To construct a generic type from the generic type definition for a nested type, call the MakeGenericType method with the array formed by concatenating the type argument arrays of all the enclosing types, beginning with the outermost generic type, and ending with the type argument array of the nested type itself, if it has type parameters of its own. To create an instance of Innermost1, call the MakeGenericType method with an array containing three types, to be assigned to T, U, and V. To create an instance of Innermost2, call the MakeGenericType method with an array containing two types, to be assigned to T and U. The languages propagate the type parameters of enclosing types in this fashion so you can use the type parameters of an enclosing type to define fields of nested types. Otherwise, the type parameters would not be in scope within the bodies of the nested types. 
It is possible to define nested types without propagating the type parameters of enclosing types, by emitting code in dynamic assemblies or by using the MSIL Assembler (Ilasm.exe). Consider the following code for the MSIL assembler: In this example, it is not possible to define a field of type T or U in class Innermost, because those type parameters are not in scope. The following assembler code defines nested classes that behave the way they would if defined in C++, Visual Basic, and C#: You can use the MSIL Disassembler (Ildasm.exe) to examine nested classes defined in the high-level languages and observe this naming scheme. The following example uses the MakeGenericType method to create a constructed type from the generic type definition for the Dictionary type. The constructed type represents a Dictionary of Test objects with string keys. using System; using System.Reflection; using System.Collections.Generic; public class); } } } /* This example produces the following output: --- Create a constructed type from the generic Dictionary type. System.Collections.Generic.Dictionary`2[TKey,TValue] Is this a generic type definition? True Is it a generic type? True List type arguments (2): TKey TValue System.Collections.Generic.Dictionary`2[System.String, Test] Is this a generic type definition? False Is it a generic type? True List type arguments (2): System.String Test --- Compare types obtained by different methods: Are the constructed types equal? True Are the generic types equal?.
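To complement the Dictionary example above, the nested-type rule described earlier can be shown with a small sketch. The Outer/Inner classes are invented for illustration; the point is that the type argument array passed to MakeGenericType concatenates the arguments of the enclosing type with those of the nested type.

using System;

public class Outer<T>
{
    public class Inner<U> { }
}

public class NestedExample
{
    public static void Main()
    {
        // Generic type definition of the nested type: it has two type
        // parameters, T (from Outer) and U (from Inner).
        Type definition = typeof(Outer<>.Inner<>);

        // Supply the arguments for the enclosing type first, then for the nested type.
        Type constructed = definition.MakeGenericType(typeof(string), typeof(int));

        Console.WriteLine(constructed);
        // Outer`1+Inner`1[System.String,System.Int32]
        Console.WriteLine(constructed == typeof(Outer<string>.Inner<int>));
        // True
    }
}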
https://msdn.microsoft.com/en-US/library/system.type.makegenerictype(v=vs.85).aspx
CC-MAIN-2015-35
refinedweb
824
51.58
A user with profile "user" can not perform any actions Hello, "Access to bonita.qa.someCompany.com was denied You don't have authorisation to view this page. HTTP ERROR 403" To the best of my knowledge I have compared everything so far and found no difference. Is there a possibility to change the rights of the profile "user"? Or does anyone have an idea what this could be due to? According to the documentation, a user with the profile "user" should be able to start or view at least one process. See Default Profils We are using Bonita Version 7.7.3 <p>first when you said have two environments for Bonita DEV & QA, the two Bonitas is the Studio Version or you have the Studio Community in DEV and in QA you have Server Version? Could you specify that?</p> <p>also you do not need to recreate the users, if you already have the configuration in one Bonita, you can export the organization and import it in the other Bonita and everything is generated automatically is same thing with the forms, pages, filters and others configurations.</p> <p data-The problem may be that since you re-created all the settings may not be correct and the page is created by default for admins only, you need to change that</p> <p> </p> Oh I'm new to this forum. I created an answer instead of adding it as comment to your answer. Exuse me <p>We only have the Bonita Community installations built on this image <a href=""></a></p> <p>Sorry for the vague description. Our dev and qa instances were empty. We developt our process and organisation on our local machine with the bonita Studio export it and import it in the bonita dev and qa with the bonita portal in the browser.</p> <p>So far we have tried different things that didn't work.</p> <ul> <li>We created a complete new bonita instance (with the connection to the existing database).</li> <li>We override the "custom-permissions-mapping.properties" with "profile|User=[document_management, case_delete, task_visualization, case_visualization, process_visualization, flownode_visualization, organization_visualization]". </li> <li>We have searched the logs for errors.</li> <li>We exported the organisation from QA and import it in Dev.</li> <li>We have searched for differences in the configuration.</li> </ul> <p>After all this I created a complete new bonita instance (from our installation image) with an h2 (default database) and it also doesn't work. So the only thing whats left is the create a new basic image from bonita itself. Maybe our dev ops department made a mistake when creating the base image (as docker).</p> <p> </p> <p>Do you have any other idea what we can do to fix the problem in our dev environment.</p> <p>thanks a lot for your answer :)</p> Hi Fabian, have you double checked, logged as an administration in Bonita Portal, that the users of your organization were correctly mapped with the user profile on both Dev & QA environments? In Organization / Profiles - you can see the list of users with your User Profile or you can verify the mapping from Users menu as shown below : hi, also can you check if you do not hard coded " bonita.qa.someCompany.com" somewhere ? because it is strange to have DEV instance targeting QA instance ? Hey Delphine, yes we doubled checked it. Our test user has the profile "User" in Dev and Qa. Both are default bonita instances builded from the docker image of bonita (so no custom settings in one of the systems). I don't get the forum how to add a picture here. and thanks for your reply julien, we have seperated instances one for each stage. 
So we have a prod, testing and development environment. Each is separated. So far so good. Yeah, we also checked our DNS to the different instances and they are correct. Our Dev also has another url "bonita.dev.someCompany.com" and qa is "bonita.qa.someCompany.com". They are located in a kubernetes cluster, also separated with different namespaces. We built a workaround with our own API that works as a proxy for the Bonita REST API with an administrator user. But this is not very nice to handle.
https://community.bonitasoft.com/questions-and-answers/user-profile-user-can-not-perform-any-actions
CC-MAIN-2020-34
refinedweb
717
55.44
Functions are a way to organize chunks of your code together. A function is a group of statements about a related subtask that is bundled together by a name. For example, we might have a getLetter function that asks the user for a lowercase letter, or an averageNumbers function that averages a list of numbers. Functions are important because they allow your code to be modular---you write a function once, and you can use that function over and over. We have already seen some functions. main() is the most important function in a program, because it contains all the code necessary to start the program and is responsible for calling other functions. Let's take a second to review the setup of main: int main(){ //code goes here } We begin by declaring the return type of main to be an integer. This tells the computer that we intend for main to send back an integer when it is finished. Usually, this is accomplished through a return statement, and, by convention, main returns a 1 if there was an error and 0 if the program completes successfully. We haven't been including a return statement and, for main (and, really, only main!), it isn't absolutely necessary. We'll talk more about return types for other functions soon. We then write the name of the function, main, and follow it with a set of parentheses. These parentheses are for parameters. Parameters things like numbers or variables that you send to a function so that it can use them in its code. For our purposes, they are empty for main, but we'll see them in action in other functions soon. Finally we have a set of curly braces, and we put the code that belongs to main inside the curly braces. Let's try writing a very simple function that prints the greeting "Hello World!". We will call (run) this function from main. It won't take any parameters or return anything, so the return type of this function will be void. Here's how we would set it up: #include <iostream> using namespace std; void printGreeting(){ cout<<"Hello World!\n"; } int main(){ printGreeting(); } Here, we made the printGreeting function exactly like we made main. We declared its return type to be void, named it printGreeting, and put no parameters inside the parentheses. Inside the curly braces, we wrote the same code we used for our very first C++ program. What's interesting is how we call, or run, the function. Inside of main, we call the function by naming it and then putting the arguments, or values to be assigned to the parameters, we want to send it inside of parentheses. We don't want to send it any arguments since it doesn't accept any parameters (more on this in a bit), so we leave the parentheses blank. As always, we end the line with a semi-colon. The way that this program works is the following: the computer begins running the program inside the main function. When it gets to the printGreeting function, it "jumps" up to where we defined the printGreeting function and starts executing that code. When it gets to the end of the printGreeting function, it jumps back down to the main function exactly where it left off and continues executing the rest of the code in main. Let's look at a slightly more complicated function that uses parameters. In this example, we're going to change the printGreeting function so that it prints out "Hello name", where name is some name that the user enters. printGreeting will be sent the name to print out from main. 
So, the general layout of this program will be: start in main, get a name from the user, send that name to printGreeting, print out the hello message, go back to main. It sounds complicated, but it's really pretty easy. When a function takes a parameter, we need to tell the computer what kind of parameter to expect. So, since we want to give printGreeting a name, the parameter will be a string. We also want to give the computer the name of the parameter to expect, which we can use inside the printGreeting function (but only inside this function). So we will call it name. So, the printGreeting function will look like this, with the name variable used in the cout function: void printGreeting(string name){ cout<<"Hello "<<name<<"!\n"; } Now the computer knows that anything that calls the printGreeting function will need to provide a string argument that will be called name inside the function. So, in order to call the function, we need to get some input from the user and provide that input when we call the printGreeting function from main: int main(){ string userName; cout<<"Enter a name: "; cin>>userName; printGreeting(userName); } Before we go any further, we need to have a discussion about the difference between arguments and parameters. Arguments are what you send to a function, and parameters are what a function expects to receive. When the printGreeting function is called from main, the argument userName's value is copied to the parameter name in printGreeting. While the computer is executing the code in printGreeting, it can't use the variable userName, because that variable is only defined (available) in main. So that's why we have a parameter name- the value of userName gets copied to name so that the program has access to that data. When the computer finished executing printGreeting and returns to main, it can again use the variable userName, but it can no longer use name. This is what is known as the scope of a variable, which means where the variable is defined and can be used. The reason that functions are so great is that we can reuse them and give them different parameters. For example, without changing the printGreeting function at all, we can use it to print greetings to three different people: #include <iostream> using namespace std; void printGreeting(string name){ cout<<"Hello "<<name<<"!\n"; } int main(){ printGreeting("Alison"); printGreeting("Brian"); printGreeting("Clifford the Big Red Dog"); } Hello Alison! Hello Brian! Hello Clifford the Big Red Dog! That's pretty convenient. We can also use functions to do computation for us and return the answers back to main. For example, let's imagine that we want to write a function called squareANum that calculates the square of a number. We know from math class that the square of a number is that number multiplied by itself. So, let's write a function that takes a floating point number and squares it. After we figure out the square of the number (which we'll call numSquared), we'll need a way to return our answer back to the function that called it. 
To do this, we simply say: return numSquared; float squareANum(float num){ float numSquared; numSquared = num * num; return numSquared; } #include <iostream> using namespace std; float squareANum(float num){ float numSquared; numSquared = num * num; return numSquared; } int main(){ float numToSquare, theSquaredNumber; numToSquare = 5.5; theSquaredNumber = squareANum(numToSquare); //catch the returned value cout<<numToSquare<<" squared equals "<<theSquaredNumber<<".\n"; } Write a program similar to the one that triples a number and adds 5 to it. Finally, we can write functions that take multiple parameters of different kinds. For example, we can modify the printGreeting function so that it takes both a string name and an integer age, and prints out a message about that person's age: #include <iostream> #include <string> using namespace std; void printGreeting(string name, int age){ cout<<"Hello "<<name<<"!\n"; cout<<"You are "<<age<<" years old.\n"; } int main(){ printGreeting("Ronald McDonald", 49); } Modify the printGreeting function to also take a string birthdayMonth parameter, and print out a message that tells them how old they will be the next time it is that month. For the Ronald McDonald example, the output might be, "You will be 50 next January". Now, let's learn about a special data type, structs.
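Before moving on, here is one possible solution sketch for the birthdayMonth exercise above. The month handling is deliberately simple — it just prints the month string rather than doing any date arithmetic — so treat it as a starting point rather than the official answer.

#include <iostream>
#include <string>
using namespace std;

void printGreeting(string name, int age, string birthdayMonth){
    cout<<"Hello "<<name<<"!\n";
    cout<<"You are "<<age<<" years old.\n";
    //next birthday, they will be one year older
    cout<<"You will be "<<age + 1<<" next "<<birthdayMonth<<".\n";
}

int main(){
    printGreeting("Ronald McDonald", 49, "January");
}

For the Ronald McDonald example this prints "You will be 50 next January", matching the expected output.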
http://www.cs.utexas.edu/~ans/firstbytes/tutorial/functions_cpp.html
CC-MAIN-2014-52
refinedweb
1,342
69.11
The every method executes the provided callback function once for each element present in the array until it finds one where callback returns a false value. If such an element is found, the every method immediately returns false. Otherwise, if callback returned a true value for all elements, every will return true. If a thisObject parameter is provided to every, it will be used as the this for each invocation of the callback. If it is not provided, or is null, the global object associated with callback is used instead. every does not mutate the array on which it is called. The range of elements processed by every is set before the first invocation of callback. Elements which are appended to the array after the call to every begins will not be visited by callback. If existing elements of the array are changed, their value as passed to callback will be the value at the time every visits them; elements that are deleted are not visited. every acts like the "for all" quantifier in mathematics. In particular, for an empty array, it returns true. (It is vacuously true that all elements of the empty set satisfy any given condition.) Compatibility every is a JavaScript extension to the ECMA-262 standard; as such it may not be present in other implementations of the standard. You can work around this by inserting the following code at the beginning of your scripts, allowing use of every in ECMA-262 implementations which do not natively support it. This algorithm is exactly the one used in Firefox and SpiderMonkey. if (!Array.prototype.every) { Array.prototype.every = function(fun /*, thisp*/) { var len = this.length >>> 0; if (typeof fun != "function") throw new TypeError(); var thisp = arguments[1]; for (var i = 0; i < len; i++) { if (i in this && !fun.call(thisp, this[i], i, this)) return false; } return true; }; } Examples Example: Testing size of all array elements The following example tests whether all elements in the array are bigger than 10. function isBigEnough(element, index, array) { return (element >= 10); } var passed = [12, 5, 8, 130, 44].every(isBigEnough); // passed is false passed = [12, 54, 18, 130, 44].every(isBigEnough); // passed is true
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/every$revision/1616
CC-MAIN-2015-32
refinedweb
331
56.25
Investors in Bank of America Corp (Symbol: BAC) saw new options become available today, for the January 2024 expiration. Looking up and down the BAC options chain for the new January 2024 contracts, we identified one put and one call contract of particular interest. The put contract at the $38.00 strike price has a current bid of $4.00. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $38.00, but will also collect the premium, putting the cost basis of the shares at $34.00 (before broker commissions). To an investor already interested in purchasing shares of BAC, that could represent an attractive alternative to paying $40.73/share today. The premium collected would represent a 10.53% return on the $38.00 cash commitment, or 4.48% annualized — at Stock Options Channel we call this the YieldBoost. Below is a chart showing the trailing twelve month trading history for Bank of America Corp, and highlighting in green where the $38.00 strike is located relative to that history: Turning to the calls side of the option chain, the call contract at the $42.00 strike price has a current bid of $4.00. If an investor was to purchase shares of BAC stock at the current price level of $40.73/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $42.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 12.94% if the stock gets called away at the January 2024 $42.00 strike highlighted in red: The premium alone would represent a 9.82% boost of extra return to the investor, or 4.18% annualized, which we refer to as the YieldBoost. The implied volatility in the put contract example is 45%, while the implied volatility in the call contract example is 36%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $40.73) to be 30%.
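The percentages quoted above can be reproduced from the bid, strike, and share-price figures in the article. The following is a rough back-of-the-envelope check (Python), with the number of days to the January 2024 expiration approximated rather than taken from the article:

# Figures quoted in the article
put_strike, call_strike = 38.00, 42.00
put_bid, call_bid = 4.00, 4.00
share_price = 40.73
days_to_expiration = 861  # approx. Sep 13, 2021 -> mid-Jan 2024 (assumption)

put_return = put_bid / put_strike                        # ~10.5% return on the cash commitment
put_annualized = put_return * 365 / days_to_expiration   # ~4.5% annualized ("YieldBoost")

call_total_return = (call_strike - share_price + call_bid) / share_price  # ~12.9% if called away
call_boost = call_bid / share_price                      # ~9.8% premium boost
call_annualized = call_boost * 365 / days_to_expiration  # ~4.2% annualized

print(round(put_return * 100, 2), round(put_annualized * 100, 2))
print(round(call_total_return * 100, 2), round(call_annualized * 100, 2))

The small differences from the article's 4.48% and 4.18% annualized figures come from the approximate day count used here.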
https://www.nasdaq.com/articles/january-2024-options-now-available-for-bank-of-america-bac-2021-09-13
CC-MAIN-2021-39
refinedweb
348
65.62
#include <wx/button.h> A button is a control that contains a text string, and is one of the most common elements of a GUI. It may be placed on a dialog box or panel, or on almost any other window. The button can also display a bitmap; see SetBitmapLabel() and SetBitmapDisabled(). This class supports the following styles: The following event handler macros redirect the events to member function handlers 'func' with prototypes like: Event macros for events emitted by this class: wxEVT_BUTTON event, when the button is clicked. Default ctor. Button creation function for two-step creation. For more details, see wxButton(). Returns true if an authentication needed symbol is displayed on the button. Returns the string label for the button. Reimplemented from wxWindow. Reimplemented in wxCommandLinkButton. Sets whether an authentication needed symbol should be displayed on the button. This sets the button to be the default item in its top-level window (e.g. the panel or the dialog box containing it). As normal, pressing return causes the default button to be depressed when the return key is pressed. See also wxWindow::SetFocus() which sets the keyboard focus for windows and text panel items, and wxTopLevelWindow::SetDefaultItem(). Sets the string label for the button. Reimplemented from wxWindow. Reimplemented in wxCommandLinkButton.
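A minimal usage sketch of the class and the wxEVT_BUTTON event described above. The frame class and handler names are invented for the example, and a working wxWidgets application skeleton (a wxApp implementation) is assumed around it.

#include <wx/wx.h>
#include <wx/button.h>

class MyFrame : public wxFrame
{
public:
    MyFrame() : wxFrame(nullptr, wxID_ANY, "wxButton example")
    {
        wxButton* button = new wxButton(this, wxID_ANY, "Click me");
        button->SetDefault();  // also react to the return key
        // Handle the wxEVT_BUTTON event emitted when the button is clicked.
        button->Bind(wxEVT_BUTTON, &MyFrame::OnClick, this);
    }

private:
    void OnClick(wxCommandEvent& event)
    {
        wxLogMessage("Button clicked");
    }
};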
https://docs.wxwidgets.org/trunk/classwx_button.html
CC-MAIN-2021-17
refinedweb
195
60.21
Problem You want to put some Ruby code into an object so you can pass it around and call it later. Solution By this time, you should familiar with a block as some Ruby code enclosed in curly brackets. You might think it possible to define a block object as follows: aBlock = { |x| puts x } # WRONG # SyntaxError: compile error That doesn't work because a block is only valid Ruby syntax when it's an argument to a method call. There are several equivalent methods that take a block and return it as an object. The most favored method is Kernel# lambda:[3] [3] The name lambda comes from the lambda calculus (a mathematical formal system) via Lisp. aBlock = lambda { |x| puts x } # RIGHT To call the block, use the call method: aBlock.call "Hello World!" # Hello World! Discussion The ability to assign a bit of Ruby code to a variable is very powerful. It lets you write general frameworks and plug in specific pieces of code at the crucial points. As you'll find out in Recipe 7.2, you can accept a block as an argument to a method by prepending & to the argument name. This way, you can write your own trivial version of the lambda method: def my_lambda(&aBlock) aBlock end b = my_lambda { puts "Hello World My Way!" } b.call # Hello World My Way! A newly defined block is actually a Proc object. b.class # => Proc You can also initialize blocks with the Proc constructor or the method Kernel#proc. The methods Kernel#lambda, Kernel#proc, and Proc.new all do basically the same thing. These three lines of code are nearly equivalent: aBlock = Proc.new { |x| puts x } aBlock = proc { |x| puts x } aBlock = lambda { |x| puts x } What's the difference? Kernel#lambda is the preferred way of creating block objects, because it gives you block objects that act more like Ruby methods. Consider what happens when you call a block with the wrong number of arguments: add_lambda = lambda { |x,y| x + y } add_lambda.call(4) # ArgumentError: wrong number of arguments (1 for 2) add_lambda.call(4,5,6) # ArgumentError: wrong number of arguments (3 for 2) A block created with lambda acts like a Ruby method. If you don't specify the right number of arguments, you can't call the block. But a block created with Proc.new acts like the anonymous code block you pass into a method like Enumerable#each: add_procnew = Proc.new { |x,y| x + y } add_procnew.call(4) # TypeError: nil can't be coerced into Fixnum add_procnew.call(4,5,6) # => 9 If you don't specify enough arguments when you call the block, the rest of the arguments are given nil. If you specify too many arguments, the extra arguments are ignored. Unless you want this kind of behavior, use lambda. In Ruby 1.8, Kernel#proc acts like Kernel#lambda. In Ruby 1.9, Kernel#proc acts like Proc.new, as better befits its name.
https://flylib.com/books/en/2.44.1/creating_and_invoking_a_block.html
CC-MAIN-2019-22
refinedweb
499
67.04
Welcome to the Ars OpenForum. This won't make traffic better, but it's a good test-run before Chicago.Read the whole story This is going to one of the best ideas ever - or worst. (transport wise)I cant wait to find out. In EUROPE we use public transport way more then you guys in USA and it works superb! I hope same goes for this loop type of transport as well. I assume they could always go back later and add a second tunnel alongside the first to increase capacity at a future date. For a test concept however its cool it will at least have some usefulness. When the Boring Company says it 'made technical progress much faster than expected', what do they mean? Faster tunneling that was demonstrated somewhere I may have missed? Simulations working out better than expected? Bureaucracy swiftly overcome? Something else?I do try to keep up but I'm light on details regarding this Musk venture. #conmanAnd you report every move of this jerk like the coming of the Messiah.He's a narcissistic con man, akin to Trump. I believe this is actually a two-bore tunnel, part of the blurb indicated this was the case.Edit: Unless I misread - it could be a single bore, single direction only. They mentioned 100s of skates, which would add up to the number of passengers they want to carry. So before the match, the skates are at the metro station, and they all end up by the stadium, and the direction is reversed after. I.e., it's a single-bore uni-direction transport system.This seems less than ideal to me in a general case, but fine for this use case. Except for the number of skates required to increase the load. If the tunnel is so narrow as to give a single track, and the purpose is to move people in one direction, why not a moving walkway / travelator rather than the “skate” carriages? After the game/concert/event, the direction is reversed. Maybe it's me or it's just early but I fail to grasp the "one-way" concept and how that's useful. So it goes to and fro on one tunnel? Or does it take you somewhere, Dodger stadium, and then getting home you're on your own? I am missing something, I think. The problem with this solution is that it still relies on roads and electric mini vans, even though those roads may be underground. Tunnels are a very expensive way to make roads 3D, elevated roadways and bridges are also extremely expensive and massive. So this solution is not scale-able and will not solve the congestion problem in cities. Trains/Buses/Trams won't either as they have to stop at almost every stop and so become slow after encountering more than a few stations. Musk is basically groping towards the travel concept know as PRT (personal rapid transit). There's nothing wrong with this, if a bunch of people start at different places in trying to solve a problem then all wind up at the same place that is a good sign.PRT is usually envisioned as hanging 2 person 'pods' on a thin elevated monorail above ground (enabling a 3D transit network rather than a 2D one with lots of crossroads). The above ground part is important as it keeps costs down, the thin part is important as even something as wide as a bike path overshadows people below like a mini freeway and is an eyesore. The hanging part is important as the pods with swing arms will then be self leveling by gravity and can go up and down steep or even vertical slopes.There are a lot of moving parts in the design of a PRT system. 
If you're interested check out "The Boring Company says it would be able to transport about 1,400 people per game, pre-selling tickets for trips at an arranged time."I know it's only intended as kind of a one-way test line, but these sounds like appalling capacity figures.As a comparison, the new East-West route underground through central London (Crossrail) has a capacity of 1,500 people, per train. At stations in the central section of the line, peak train frequencies will be once every 2.5 minutes.....Seems they plan to move 1 train's worth of people per game? Never a boring day with Elon. WHY is it One Way?Is it One Way to Dodger Stadium because they don't believe people ever need a return trip to go back home?Do they expect people to live out the rest of their lives there? Or that watching sports is such a horrible experience as to bore people to death? (which is actually plausible IMO) It has to be pointed out that this has nothing to do with the original Hyperloop concept involving depressurized tubes and air bearings. The only thing in common is a tunnel and some sort of pod (and ridiculous inefficiency...)I suspect someone is running low on cash* Not a short I'm trying to work out how this will ever make money.$1 a ride.250,000 rides a year apparently.4 years to make $1m.Cost to get permits, drive the 3+ mile bore, create the skate-parks and recharging infrastructure, build the 100-200 skates ... Sure, it's also acting as a proof of concept tunnel, but it seems a bit sketchy financially. There is no way in hell he will build this in 14 months. Tunneling is hard, you always find surprises. To top it off, any halfway decent railway infrastructure, using proven technology, needs MONTHS of testing before going into service. Musk wants to press new, self-driving vehicle tech pretty much with no testing whatsoever. It is ludicrous. Don´t even get me started on capacity, which is just pathetic for a project this size. You can move 3,000 people using 10 city buses in an hour, for a tenth of the cost, and just taking 5 more minutes of travel time. Or the fact that he is asking people to pre-book rides on a glorified people mover. Or the insane amount of room he will need to store the skates at either end of this contraption. Why do politicians keep falling for this man? He knows nothing about efficient public transportation. So 81 home games per year, 16 people per pod, assuming 100% utilization and the Dodgers miss the playoffs (trololol), that's... $2592 gross per pod per year?What's being tested here, exactly, the question of whether people will sign up to skip traffic for two dollars? This seems obvious. What's the point? It feels much more like a feel-good distraction for the guy at the top than any serious exploratory business venture. Bear in mind that this isn't the only proposal to shuttle people from the Metro to the stadium. There's also the proposed gondola going directly from Union Station to the stadium. Taking both proposals at face value, the gondola would be more expensive (the people behind it say tickets would be "less than" the current $20 parking fee, which we can probably take to mean "$19"), but has the advantage of going directly from Union Station rather than requiring people to change from the train to the subway and then to Musk's sledway, and is also somewhat larger capacity (numbers discussed were ~5000 people in each direction). 
The biggest problem I see with it being one way is that you need a "parking lot" for skates big enough for every skate at each end of the track. 1500 people/16 people per skate means 94 skates (or twice that if they start with 8 person skates), so The Boring Co will essentially dig a tunnel and 2 parking lots - which can't be efficient.I know its far from ideal, but given a 4 minute one-way trip, I wonder how long it would take to say, reduce the # of skates to 50 or so, run all the skates from one end to the other serially (with passengers), and then have all 50 skates return (empty) to pick up another wave? If the skates could be sent out every 20 seconds with passengers, it seems like you could manage several waves before a game, considering some people arrive 2 hours early and some around the time of the first pitch. Hopefully, they are pushing the bounds of tunnel boring machines and will incorporate microwave beams to fracture water-bearing rock ahead of it.
https://arstechnica.com/civis/viewtopic.php?p=35844517
CC-MAIN-2019-43
refinedweb
1,452
71.14
16.1: Inheritance

Inheritance in C++ The capability of a class to derive properties and characteristics from another class is called Inheritance. Inheritance is one of the most important features of Object Oriented Programming. Sub Class: The class that inherits properties from another class is called Sub class or Derived Class - also often referred to as the child class. Super Class: The class whose properties are inherited by the sub class is called Base Class or Super class - also often referred to as the parent class. The article is divided into the following subtopics: - Why and when to use inheritance? - Modes of Inheritance - Types of Inheritance

Implementing inheritance in C++: For creating a sub-class which is inherited from the base class we have to follow the below syntax. class child_class_name : access_mode parent_class_name { //body of child_class }; Here, child_class_name is the name of the sub class, access_mode is the mode in which you want to inherit this sub class, for example: public, private etc., and parent_class_name is the name of the base class from which you want to inherit the sub class. Note: A derived class doesn't inherit access to private data members. However, it does inherit a full parent object, which contains any private members which that class declares. // C++ program to demonstrate implementation of Inheritance #include <bits/stdc++.h> using namespace std; //Base class class School { public: string name_p; }; // Sub class inheriting from Base Class(School) class Department : public School { public: string name_c; }; //main function int main() { Department deptObj; // An object of class child has all data members // and member functions of class parent deptObj.name_c = "Computer Science"; deptObj.name_p = "Delta College"; cout << "Child name is " << deptObj.name_c << endl; cout << "Parent name is " << deptObj.name_p << endl; return 0; } In the code there is a parent class called School, and a child class called Department. We could have a lot more variables and methods in the parent class, but we are trying to keep this simple. The child class inherits the methods and variables of the parent class - EXCEPT - the child class can not access a parent's private variables or methods. Output: Child name is Computer Science Parent name is Delta College In the above program the 'Department' class is publicly inherited from the 'School' class, so the public data members of the class 'School' will also be inherited by the class 'Department'. Adapted from: "Inheritance in C++" by Harsh Agarwal, Geeks for Geeks is licensed under CC BY-SA 4.0
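To complement the example above — this sketch is not part of the original article — here is a brief illustration of the access-mode and private-member points: with public inheritance, a derived class can use the parent's public and protected members, but not its private ones. The founded and budget members are invented for the demonstration.

#include <iostream>
#include <string>
using namespace std;

class School {
public:
    string name_p;
protected:
    int founded = 1907;   // visible to derived classes
private:
    int budget = 0;       // NOT accessible from derived classes
};

// public inheritance: public members stay public, protected stay protected
class Department : public School {
public:
    void show() {
        cout << name_p << endl;    // OK: public member of the parent
        cout << founded << endl;   // OK: protected member of the parent
        // cout << budget << endl; // error: 'budget' is private within 'School'
    }
};

int main() {
    Department d;
    d.name_p = "Delta College";
    d.show();
    return 0;
}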
https://eng.libretexts.org/Courses/Delta_College/C___Programming_I_(McClanahan)/16%3A_Inheritance/16.01%3A_Inheritance
CC-MAIN-2021-17
refinedweb
415
51.07
Iterators are also part of the C++ zero cost abstractions
02 Jul 2017

This article picks up an example operating system kernel code snippet that is written in C++, but looks like "C with classes". I think it is a great idea to implement Embedded projects/kernels in C++ instead of C and it's nice to see that the number of embedded system developers that use C++ is rising. Unfortunately, I see stagnation in terms of modern programming in embedded/kernel projects in the industry. After diving through the context I demonstrate how to implement a nice iterator as a zero cost abstraction that helps tidy up the code.

The real life story

This context dive is rather long. If you don't care about the actual logic behind the code, just jump to the next section. As an intern at Intel Labs in 2012, I had my first contact with microkernel operating systems that were implemented in C++. This article concentrates on a recurring code pattern that I have seen very often in the following years also in other companies. I have the opinion that such code should be written once as a little library helper. Let's jump right into it: Most operating systems allow processes to share memory. Memory is then usually shared by one process that tells the operating system kernel to map a specific memory range into the address space of another process, possibly at some different address than where it is visible for the original process. In those microkernel operating system environments I have been working on, memory ranges were described in a very specific way: The beginning of a chunk is described by its page number in the virtual memory space. The size of a chunk is described by its order. Both these characteristics are then part of a capability range descriptor and are used by some microkernel operating systems to describe ranges of memory, I/O ports, kernel objects, etc. Capabilities are a security concept I would like to ignore as much as possible for now, because the scope of this article is the maths behind capability range descriptors.

Example: A memory range that is 4 memory pages large and begins at address 0x123000 is described by (0x123, 2). We get from 0x123000 to 0x123, because pages are 4096 bytes (0x1000 in hex) large. That means that we need to divide a virtual address pointer value by 0x1000 and get a virtual page number. From 4 pages we get to the order value 2, because 2^2 = 4, so the order is 2. Ok, that is simple. It stops being simple as soon as one describes real-life memory ranges. Such a (base, order) tuple is also called a capability range descriptor, and must follow the following rules: - Every memory capability's size must be a power of 2. (By storing only the order, this rule is implicitly followed by design.) - Every capability's base must be evenly divisible by its size. That means if we want to describe the memory range [0x100, 0x107) (the notation [a, b) means that the range goes from a to b, but does not contain b,
like it is the case for begin/end iterator pairs), then following those rules we would break it into multiple capability range descriptors:

(0x100, 2), covering 4 pages
(0x104, 1), covering 2 pages
(0x106, 0), covering 1 page

Let's get towards actual code: Mapping such an example range to another process's address space would then look like the following code, which maps its own range [0x100, 0x107) to [0x200, 0x207) in the namespace of the other process using a structure map_helper:

map_helper.source_base = 0x100;
map_helper.push_back(0x200, 2); // 2^2 pages = 4 pages
map_helper.push_back(0x204, 1); // 2^1 pages = 2 pages
map_helper.push_back(0x206, 0); // 2^0 pages = 1 page
                                // sum = 7 pages
map_helper.delegate(target_address_space);

The map_helper.delegate(...) call results in a system call to the kernel which does the actual memory mapping. In order not to result in one system call per mapping, map_helper accepts a whole batch of mappings that are sent to the kernel in one run. This looks very complicated, but it is necessary to keep the microkernel micro. When the kernel gets mapping requests preformatted like this, the kernel code that applies the mapping contains much less complicated logic. An operating system kernel with a reduced amount of complicated logic is a good thing to have because then it is easier to prove that it is correct.

Ok, that is nearly everything about expressing memory mappings with the logic of capability range descriptors. There is one last quirk. Imagine we want to map the range [0x0, 0x10), which can be expressed as (0x0, 4) (0x10 = 16, and 2^4 = 16), to the range [0x1, 0x11) in the other process's address space. That should be easy since they only have an offset of 1 page to each other. What is visible at address 0x1000 in the first process will be visible at address 0x2000 in the other. Actually, it is not that easy, because the capability range descriptor (0x0, 4) cannot simply be described as (0x1, 4) in the other process's address space. It violates rule number 2 because 0x1 is not evenly divisible by 0x10! Frustratingly, this means that we need to break down the whole descriptor (0x0, 4) into 16 descriptors with order 0, because only such small ones have mappings that comply with the two rules in both address spaces. This was already a worst-case example. Another less bad example is the following one: If we want to map [0x0, 0x10) to [0x8, 0x18) in the other process, we could do that with the two descriptors (0, 3) and (8, 3), because both offsets 0x0 and 0x8 are evenly divisible by 8. That allows for larger chunks. A generic function that maps any page range to another process's address space could finally look like the following:

void map(word_t base1, word_t base2, word_t size, foo_t target_address_space)
{
    map_helper.source_base = base1;
    constexpr word_t max_bit {1ull << (8 * sizeof(max_bit) - 1)};
    while (size) {
        // take smaller order of both bases, as both must be divisible by it.
        const word_t min_order {order_min(base1 | base2 | max_bit)};
        // take largest possible order from actual size of unmapped rest
        const word_t max_order {order_max(size)};
        // choose smaller of both
        const word_t order {min(min_order, max_order)};

        map_helper.push_back(base2, order);

        if (map_helper.full()) {
            map_helper.delegate(target_address_space);
            map_helper.reset();
            map_helper.source_base = base1;
        }

        const word_t step {1ull << order};
        base1 += step;
        base2 += step;
        size  -= step;
    }
    map_helper.delegate(target_address_space);
}

As a newcomer to such a project, you will soon understand the maths behind it.
You will see it everywhere, because the same technique is used for sharing memory, I/O ports, and descriptors for kernel objects like threads, semaphores, etc. between processes. After you have seen repeatedly exactly the same calculation with different payload code between it, you might get sick of it. Everywhere in the code base where this pattern is repeated, you have to follow the calculations thoroughly in order to see if it is really the same formula. If it is, you may wonder why no one writes some kind of library for it instead of duplicating the formula in code again and again. And if it is not the same formula - is that because it is wrong, or is there an actual idea behind that? It is plainly annoying to write and read this from the ground up all the time.

Library thoughts

Ok, let's assume that this piece of math will be recurring very often and we want to provide a nice abstraction for it. This would have multiple advantages:
- Reduced code duplication.
- Correctness: The library can be tested meticulously, and all user code will automatically profit from that. No one could ever do wrong descriptor calculations any longer if he/she just used the library.
- Readability: User code will not be polluted by the same calculations again and again. Users do not even need to be able to implement the maths themselves.

One possibility is to write a function map_generic that accepts a callback function that would get already calculated chunks as parameters and that would then do the payload magic:

template <typename F>
void map_generic(word_t base1, word_t base2, word_t size, F f)
{
    constexpr word_t max_bit {1ull << (8 * sizeof(max_bit) - 1)};
    while (size) {
        // take smallest order of both bases, as both must be divisible by it.
        const word_t min_order {order_min(base1 | base2 | max_bit)};
        // take largest possible order from actual size of unmapped rest
        const word_t max_order {order_max(size)};
        // choose smallest of both
        const word_t order {min(min_order, max_order)};

        f(base1, base2, order);

        const word_t step {1ull << order};
        base1 += step;
        base2 += step;
        size  -= step;
    }
}

void map(word_t base1, word_t base2, word_t size, foo_t target_address_space)
{
    map_helper.source_base = base1;

    map_generic(base1, base2, size, [&map_helper](word_t b1, word_t b2, word_t order) {
        map_helper.push_back(b2, order);

        if (map_helper.full()) {
            map_helper.delegate(target_address_space);
            map_helper.reset();
            map_helper.source_base = b1;
        }
    });

    map_helper.delegate(target_address_space);
}

What we have now is the pure math of capability range composition of generic ranges in map_generic and actual memory mapping code in map. This is already much better, but leaves us without control over how many chunks we actually want to consume at a time. As soon as we start map_generic, it will shoot all the sub-ranges at our callback function. At this point, it is hard to stop. And if we were able to stop it (for example by returning true from the callback whenever it shall continue and returning false if it shall stop), it would be hard to resume from where we stopped it. It's just a hardly composable coding style.

The iterator library

After all, this is C++. Can't we have some really nice and composable things here? Of course, we can. How about iterators? We could define an iterable range class which we can feed with our memory geometry. When such a range is iterated over, it emits the sub-ranges. So let's implement this in terms of an iterator.
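The order_range class itself appears to have been lost from this copy of the article; the text below refers to its current_order function, its operator++, and a begin() that returns a copy of the range. As a hedged reconstruction (my sketch, not the author's original listing), it could look roughly like the following, reusing the word_t, order_min and order_max helpers assumed by the map() function above, plus <tuple> and <algorithm>:

struct end_sentinel {}; // C++17 range-for allows end() to have a different type than begin()

class order_range
{
    word_t base1;
    word_t base2;
    word_t size;

    word_t current_order() const {
        constexpr word_t max_bit {1ull << (8 * sizeof(word_t) - 1)};
        const word_t min_order {order_min(base1 | base2 | max_bit)};
        const word_t max_order {order_max(size)};
        return std::min(min_order, max_order);
    }

public:
    order_range(word_t b1, word_t b2, word_t s)
        : base1{b1}, base2{b2}, size{s} {}

    // the range acts as its own iterator, so begin() simply returns a copy
    order_range begin() const { return *this; }
    end_sentinel end() const { return {}; }

    // emit the current sub-range as (base1, base2, order)
    std::tuple<word_t, word_t, word_t> operator*() const {
        return {base1, base2, current_order()};
    }

    // advance both bases by the size of the chunk just emitted
    order_range& operator++() {
        const word_t step {1ull << current_order()};
        base1 += step;
        base2 += step;
        size  -= step;
        return *this;
    }

    // abort condition: the whole range has been consumed
    bool operator!=(end_sentinel) const { return size != 0; }
};

In this sketch, order_min and order_max would typically be thin wrappers around count-trailing-zeros and highest-set-bit intrinsics; the exact helpers are an assumption, since they are not shown in this excerpt either.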
If you don’t know yet how to implement iterators, you might want to have a look at my other article where i explain how to implement your own iterator. This looks a bit bloaty at first, but this is a one-time implementation after all. When we compare it with the initial for-loop version, we realize that all the calculations are in the function current_order and operator++. All the other code is just data storage and retrieval, as well as iterator interface compliance. It might also at first look strange that the begin() function returns a copy of the order_range instance. The trick is that this class is at the same time a range and an iterator. One nice perk of C++17 is, that the end iterator does not need to be of the same type as normal iterators any longer. This allows for a simpler abort condition (which is: size == 0). With this tiny order 2 range iterator “library”, we can now do the following. (Let’s move away from the memory mapping examples to simple printf examples because we will compare them in Godbolt later) void print_range(word_t base1, word_t base2, word_t size) { for (const auto &[b1, b2, order] : order_range{base1, base2, size}) { printf("%4zx -> %4zx, order %2zu\n", b1, b2, order); } } This code just contains pure payload. There is no trace of the mathematical obfuscation left. Another differentiating feature from the callback function variant is that we can combine this iterator with STL data structures and algorithms! Comparing the resulting assembly What is the price of this abstraction? Let us see how the non-iterator-version of the same code would look like, and then compare it in the Godbolt assembly output view. void print_range(word_t base1, word_t base2, word_t size) { constexpr word_t max_bit {1ull << (8 * sizeof(max_bit) - 1)}; while (size) { const word_t min_order {order_min(base1 | base2 | max_bit)}; const word_t max_order {order_max(size)}; const word_t order {std::min(min_order, max_order)}; printf("%4zx -> %4zx, order %2zu\n", base1, base2, order); const word_t step {1ull << order}; base1 += step; base2 += step; size -= step; } } Interestingly, clang++ sees exactly what we did there and emits exactly the same assembly in both cases. That means that this iterator is a real zero cost abstraction! print_range(unsigned long, unsigned long, unsigned long): push r15 push r14 push r13 push r12 push rbx mov r14, rdx mov r15, rsi mov r12, rdi test r14, r14 je .LBB0_3 movabs r13, -9223372036854775808 .LBB0_2: # =>This Inner Loop Header: Depth=1 mov rax, r12 or rax, r15 or rax, r13 bsf rbx, rax bsr rax, r14 cmp rax, rbx cmovb rbx, rax mov edi, .L.str xor eax, eax mov rsi, r12 mov rdx, r15 mov rcx, rbx call printf mov eax, 1 mov ecx, ebx shl rax, cl add r12, rax add r15, rax sub r14, rax jne .LBB0_2 .LBB0_3: pop rbx pop r12 pop r13 pop r14 pop r15 ret .L.str: .asciz "%4zx -> %4zx, order %2zu\n" See the whole example in gcc.godbolt.org. When comparing the assembly of both variants with GCC, the result is a little bit disappointing at first: The for-loop version is 62 lines of assembly vs. 48 lines of assembly for the iterator version. When looking at how many lines of assembly are the actual loop part, it is still 25 lines for both implementations! Summary Hardcore low-level/kernel hackers often claim that it’s disadvantageous to use abstractions like iterators and generic algorithms. Their code needs to be very small and fast because especially on hot paths, interrupt service routines, and other occasions, the kernel surely must not be bloaty and slow. 
Unfortunately, in contrast to the low-level hackers who keep their code tight and short out of plain responsibility, there is an extreme kind that uses the same reasons as an excuse for writing code that contains a lot of duplicates, is complex, hard to read (but surely makes you feel smart while writing it), and difficult to test. Code should be separated into composable library parts that serve isolated concerns. C++ allows combining the goals of reusable software, testable libraries, and logical decoupling with high performance and low binary size. It is usually worth a try to implement a nice abstraction that turns out to be free with regard to assembly size and performance.

Related

I really enjoyed reading Krister Waldfridsson's article where he primarily analyzes runtime performance of a piece of range-v3 code. What's interesting about that article is that he also shows an innocent-looking code snippet with a raw for-loop that is slower than equivalent code that uses an STL algorithm, because the STL algorithm helps the compiler optimize the code. Another thing that is worth a look and fits the same topic: Jason Turner gave a great talk about using C++17 on tiny computers. He demonstrates how modern C++ programming patterns that help write better code do not lead to bloaty or slow code, by compiling and showing the assembly in a Godbolt view. It actually runs on a real Commodore in the end.
https://blog.galowicz.de/2017/07/02/order2_iterator/
CC-MAIN-2017-30
refinedweb
2,519
59.33
By: Christopher Moeller

Abstract: Tutorial for setting up a DCOM client/server application

Setting up the Project directories:
1. Navigate to where you would like to save your source files.
2. Create the following directory: testDCOM
3. Inside the testDCOM directory, make the following two directories: server, client
4. Now, when saving the files associated with your projects, you should save them into their respective directories.

Creating the DCOM server:
1. File | Close All
2. File | New Application
3. File | New | ActiveX | Automation Object
4. CoClass Name: testDCOM, then click OK
5. In the Type Library Editor, select ItestDCOM, add a method and call it GetName
6. Click on the Parameters tab, and set the following Parameters: Name: Value; Type: BSTR *; Modifier: [out, retval]
7. Save the Unit as: maintestDCOMServer.cpp
8. Save the Project as: testDCOMServer.bpr
9. Save the other files as they are named.
10. In testDCOMImpl.cpp, fill in the GetName function as follows:

STDMETHODIMP TtestDCOMImpl::GetName(BSTR* Value)
{
  *Value = WideString("TtestDCOM").Detach();
  return S_OK;
}

11. Project | Build All Projects
12. Finally, run the server (this will register the server and let it appear in the dcomcnfg.exe utility)
13. File | Save All
14. Close All.

Creating the DCOM Client:
1. File | Close All
2. File | New Application
3. Project | Add to Project... | Choose testDCOMServer_TLB.cpp (located where you saved the server project)
4. Save the Unit as: maintestDCOMClient.cpp
5. Save the Project as: testDCOMClient.bpr
6. Add the following to the top of maintestDCOMClient.cpp: #include "testDCOMServer_TLB.h"
7. Add the following to maintestDCOMClient.h in the private section (class TForm1): TCOMItestDCOM Server;
8. Save the Unit as: maintestDCOMClient.cpp
9. Save the Project as: testDCOMClient.bpr
10. Add a TButton and a TEdit to the form.
11. Change the Button Caption to "Remote"
12. Change the Edit Caption to "Computer"
13. Double click the button and add the event code below:

void __fastcall TForm1::Button1Click(TObject *Sender)
{
  Server = CotestDCOM::CreateRemote(WideString(Edit1->Text));
  Form1->Caption = Server->GetName();
}

14. Now, you may run the test. When the form appears, type the name of the computer you would like to connect to in the edit box and then click the button. If everything is set up correctly, it will print the name of the DCOM object in the (client) form caption (TtestDCOM). Remember, the server must first be run on any machine with which you intend to establish a remote DCOM connection.
15. File | Save All
http://edn.embarcadero.com/article/23185
CC-MAIN-2016-30
refinedweb
437
62.24
Good day. I want to make it possible to reconnect the camera which is bound to the Capture object while the program is running. So, it's just a button on the screen and when you click it the camera should be changed. Actually, for now it doesn't matter whether it connects to a new camera or the previous one. This is the truncated part of the code:

import processing.video.Capture;

Capture wirelessCam;

void setup() {
  fullScreen();
  smooth();
  frameRate(60);
  connect();
}

void draw() {
  background(#000000);
  if (wirelessCam.available())
    wirelessCam.read();
  image(wirelessCam, 0, 0);
}

void connect() {
  wirelessCam = new Capture(this, Capture.list()[0]);
  wirelessCam.start();
}

void mouseClicked() {
  connect();
}

It seems to be an easy block of code working correctly, but, in fact, it isn't. The first time the program is launched it really shows the webcam video stream. But as soon as you click the mouse the whole program breaks down - instead of the expected video image you see a lonely useless rectangle the same color as the background. I've done my best to solve the problem, combining the function sequences and adding/removing some parts of the code. Unfortunately, I managed to do nothing. Literally, NOTHING. I will be very grateful for your ideas and any kind of help. Thanks.

Answers

I have tried this before, and when I tried to re-define my capture object I could even get the blue screen of death that nobody ever wants to see. How many capture devices do you have? Or are you just switching resolution settings? One possible idea is to have an array of capture devices and stop and play accordingly. Kf

There are 22 of them and they're all the same webcam device. But, you see, that's not the point. It doesn't change its behaviour while trying to re-connect a different device. You may even add a function like connect(), but make it bind the wirelessCam to a Capture object with other characteristics, though it still won't work. Speaking about creating an array of Capture objects - yeah, I had such an idea. The only thing is that this will occupy too much memory, and I'm not sure such an array lets the program work as fast as it does currently.

If you are using the same device, I would suggest instantiating one object at high resolution and then displaying at lower resolution. This idea might not suit your needs as I don't know all the details of your application. Can you elaborate why you want to do this, by the way? Not sure if it is a good idea to have an array of objects manipulating the same device but at different resolutions. You could manipulate multiple cameras. You can instantiate a capture device for each of them and then only process the one you have the focus on. Kf

Oh god... I've finally found the problem. Still cannot believe that the solution is so easy and, moreover, so evident. Just have a look at this and nothing more should be explained: Kfrajer, thank you for your ideas. By the way, the principal reason to have the ability to change the camera connected to a Capture object is that the program should connect to a certain camera with a known name and resolution, though it isn't known whether it's connected to the PC (the camera is supposed to be a video stream from a video capture card). If it is not connected at the moment, the program chooses a default one, which stands for number 0 in Capture.list(). P.S. I tried to make an array of Capture objects, but it will give the same result if not to use the cameraName.stop()
Now I'm sure it has some special sense. @daniil Great you found a solution. Thxs for sharing your answer. Kf If you're instantiating a newCapture after stop(), you're probably generating memory leakage! :-SS In order to make sure all resources are freed up, use dispose() instead. L-)
https://forum.processing.org/two/discussion/22484/
CC-MAIN-2021-10
refinedweb
690
72.56
import "jh.jar" to my application843810 Jun 29, 2009 4:01 PM I did a litle help with JavaHelp for my application and when i move to another computer the help will not display because it can not find the HelpSet class. On my PC a have the jar file "jh.jar" in my classpath, but I want to import it to my application but I don`t know how. Can anyone know how to do this? Can anyone know how to do this? This content has been marked as final. Show 8 replies 1. Re: import "jh.jar" to my application843810 Jun 30, 2009 12:00 PM (in response to 843810)You should add the jh.jar (or jhbasic.jar, depending on what you need) to your classpath. How are you launching the app? If it's launched with a simple java (or javaw) command just put the jar in the same directory and add args -cp ./jh.jar If the main class is declared in the manifest and you're simply double-clicking the jar you should use the Classpath attribute in the manifest file (in this case I guess you can put jh.jar in the jar itself). By the way (and in order for you to look at the proper documentation), this is not a JH problem, it's a simple Java classpath problem. Bye. 2. Re: import "jh.jar" to my application843810 Jun 30, 2009 1:08 PM (in response to 843810)Thank you for replying! On my PC the "jh.jar" file it is on the classpath, but when i move with the application on another computer i have to put the file "jh.jar" in the classpath in order to view the help. I was thinking to import that file in my application so that not everytime to put in the classpath on each computer i run the application. These 3 classes i use it from the "jh.jar": but if the jh.jar is not in the classpath i get an exception. How do import those classes directly from the jar file, which is in my application folder? import javax.help.CSH; import javax.help.HelpBroker; import javax.help.HelpSet; Calin. 3. Re: import "jh.jar" to my application793415 Jun 30, 2009 1:29 PM (in response to 843810) >Thank you for carefully reading the replies, and answering the questions of the people trying to help you. Oh, wait a second, you were asked +"How are you launching the app?"+ and I do not see the answer to that anywhere. Thank you for replying!> Are you launching the application 1) By double clicking the Jar? 2) With a script/bat file? 3) Using Java Web Start? 4) Something else? 4. Re: import "jh.jar" to my application843810 Jun 30, 2009 1:40 PM (in response to 793415)Sorry....by click in the help button: Edited by: Calin on Jun 30, 2009 9:40 PM try { URL hsURL = Main_Server.class.getResource("/Help/BackupTool.hs"); hs = new HelpSet(null,hsURL); } catch (Exception ex) { ex.printStackTrace(); return; } hb = hs.createHelpBroker(); new CSH.DisplayHelpFromSource(hb); hb.setSize(new Dimension(1024,768)); hb.setCurrentID("main"); hb.setDisplayed(true); 5. Re: import "jh.jar" to my application843810 Jul 1, 2009 12:30 PM (in response to 843810)We asked how you're launching the app, not the help. You talk about moving to another computer like you're bringing it on a USB-key, this takes us to the next big question: how are you distributing the app (if you are)? Anyway: those 3 classes you mentioned use many other JavaHelp classes, you can't possibly think including them in your jar will make your code run. How new are you to java? The 'over an year experience' I'd be guessing by your registration date should include basic classpath issues. 
Last (but definitely not least): according to the JavaHelp license you can redistribute jars, but I don't think you can repackage classes as you like. Bye.

PS: you've been told your question is border-line (meaning it's almost OT) and you've been given the right perspective on it, still you'd rather ask for more than google for something. What about adding dukes if you really wanna keep it alive?

6. Re: import "jh.jar" to my application  843810  Jul 1, 2009 12:47 PM (in response to 843810)

First of all, my English is not so good! My fault for not understanding. I made my application an executable jar and run it (if this is what you want), and when I click on the help button I get an exception because the HelpSet class is not found. I know it is not found because the file "jh.jar" is not in the classpath. I can say I am a beginner in Java, but I know that class needs other classes from that jar file. I was wondering if I can use that jar "jh.jar" in my application, instead of putting it in the classpath, to show the help. In other words, to put it in the jar file (my application, which is an executable jar) and call it from there. I think you got the idea! Cheers!

7. Re: import "jh.jar" to my application  793415  Jul 1, 2009 1:10 PM (in response to 843810)

Quoted from the previous reply: "I made my application a jar executable and run it (if this is what you want)..."

Not quite as much detail as I wanted, but.. let us cut to another question. Do you have (or can you get) a web site to distribute your code? If so, a good way to deploy it to users is using Java Web Start. (Further, if you distribute using webstart, there is another benefit in that you already have the attention of 2 people who can design or debug a webstart based launch for this application, its help files, and JavaHelp.)

8. Re: import "jh.jar" to my application  843810  Jul 1, 2009 2:41 PM (in response to 843810)

Quoting myself: "If the main class is declared in the manifest and you're simply double-clicking the jar you should use the Classpath attribute in the manifest file (in this case I guess you can put jh.jar in the jar itself)."

Googling around, it looks like you actually can't put a jar inside a jar (I thought you could because you can put a jar in a war or an ear, my fault). Even though this looks different (at a glance, didn't go deep into it). Anyhow, according to the Sun jar specs (that is the proper docs I kept mentioning), you should be able to try something like Class-Path: jar:MyJarName.jar!/jh.jar (and probably a lot of easier URLs like the ones in the above-mentioned thread). The spec says:

Class-Path: The value of this attribute specifies the relative URLs of the extensions or libraries that this application or extension needs. URLs are separated by one or more spaces. The application or extension class loader uses the value of this attribute to construct its internal search path.

I never did it, never tried, don't mean to, and think distribution should be solved with distribution techniques (like JWS or a simple zip file). If you want me to spend time on this instead of you and keep posting to the wrong forum (this is now officially OT), please add dukes (10 would be just fine).
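For what it's worth, the variant of the Class-Path approach that works in practice is a plain relative entry with jh.jar sitting next to the application jar; nested jar:...!/ URLs are not resolved by the standard class loader, which matches the "can't put a jar inside a jar" finding above. A minimal sketch (file and package names are placeholders, not from this thread):

Inside MyApp.jar's META-INF/MANIFEST.MF:

    Main-Class: my.package.MainClass
    Class-Path: jh.jar

Directory layout at run time:

    MyApp.jar
    jh.jar        (same directory, found via the relative Class-Path entry)

Launch with: java -jar MyApp.jar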
https://community.oracle.com/message/5497754
CC-MAIN-2015-32
refinedweb
1,253
74.08
The QTextLine class represents a line of text inside a QTextLayout. More...

#include <QTextLine>

Note: All functions in this class are reentrant.

The position of the cursor in terms of the line is available from cursorToX() and its inverse from xToCursor(). A line can be moved with setPosition().

Creates an invalid line.

Returns the line's ascent. See also descent() and height().

Converts the cursor position cursorPos to the corresponding x position inside the line, taking account of the edge. If cursorPos is not a valid cursor position, the nearest valid cursor position will be used instead, and cpos will be modified to point to this valid cursor position. This is an overloaded function.

Returns the line's descent. See also ascent() and height().

Draws a line on the given painter at the specified position. The selection is reserved for internal use.

Returns the line's height. This is equal to ascent() + descent() + 1 if leading is not included. If leading is included, this equals ascent() + descent() + leading() + 1.

Returns the position of the line in the text engine.

Returns the rectangle covered by the line.

Returns the width of the line that is occupied by text. This is always <= width(), and is the minimum width that could be used by layout() without changing the line break position.

Returns the line's position relative to the text layout's position. See also setPosition().

Returns the line's bounding rectangle. See also x(), y(), textLength(), and width().

Moves the line to position pos.

Returns the length of the text in the line. See also naturalTextWidth().

Returns the start of the line from the beginning of the string passed to the QTextLayout.

Returns the line's width as specified by the layout() function. See also naturalTextWidth(), x(), y(), textLength(), and width().

Returns the line's x position. See also rect(), y(), textLength(), and width().

Converts the x-coordinate x to the nearest matching cursor position, depending on the cursor position type, cpos.

Returns the line's y position. See also x(), rect(), textLength(), and width().
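The reference text above lists the line metrics without showing where QTextLine objects come from. As a small usage sketch (not part of the documentation page itself; the width, and the text, font and painter variables, are assumed example values), lines are created and positioned by a QTextLayout and then drawn:

// Typical layout loop: 'text', 'font' and 'painter' are assumed to exist.
QTextLayout layout(text, font);
qreal y = 0;
layout.beginLayout();
while (true) {
    QTextLine line = layout.createLine();
    if (!line.isValid())
        break;                        // no more text to lay out
    line.setLineWidth(200);           // wrap the line at 200 pixels
    line.setPosition(QPointF(0, y));  // stack lines vertically
    y += line.height();               // height() = ascent() + descent() + 1
}
layout.endLayout();

// later, e.g. inside a paint event:
layout.draw(&painter, QPointF(0, 0));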
https://doc-snapshots.qt.io/4.8/qtextline.html
CC-MAIN-2019-26
refinedweb
350
70.6
15 February 2007 10:45 [Source: ICIS news]

LONDON (ICIS news)--Global demand for base oils is likely to increase by 6.5% in the five years to 2010 to 36.2m tonnes/year, Total's oil and waxes director said on Thursday.

Speaking at the ICIS World Base Oils conference, he said global demand for lubricants was expected to rise to 42.6m tonnes/year by 2010, an increase of 6.5% from ten years earlier.

Russian base oils were expected to change the balance of the northwest European market, while North America was expected to see a strong Group II surplus moving to Europe, the Middle East and East Asia.

Below is a table listing current base oil projects in the pipeline worldwide. Source: Total Lubricants
http://www.icis.com/Articles/2007/02/15/9006782/world-base-oils-demand-to-rise-6.5-2005-10-total.html
CC-MAIN-2014-15
refinedweb
132
66.33
Anonymous 2011-10-09 Hi, I am trying to use libjson in a c++ project. I am working on linux with gcc version 4.5.2. I am trying to build an example program from the C++ example "Getting Started/C++ Interface/basic_parser.htm". I wrote a small program as below, #include "libjson.h" #include <string> int main() { std::string json = "{\"RootA\":\"Value in parent node\",\"ChildNode\":{\"ChildA\":\"String Value\",\"ChildB\":42}}"; JSONNode n = libjson::parse(json); return 0; } But when I try to compile, I am getting the below error. main.cpp:7:37: error: invalid initialization of reference of type ‘const json_string&’ from expression of type ‘std::string’ ../libjson/libjson.h:210:28: error: in passing argument 1 of ‘JSONNode libjson::parse(const json_string&)’ Would any one please help me to understand what I am missing here? Thanks in advance. Jonathan Wallace 2011-10-09 normally json_string is std::string. Could you post your JSONOptions.h file for me to look at? Anonymous 2011-10-10 Please find my JSONOptions.h file below, #ifndef JSON_OPTIONS_H #define JSON_OPTIONS_H /** * This file holds all of the compiling options for easy access and so * that you don't have to remember them, or look them up all the time */ /* * JSON_LIBRARY must be declared if libjson is compiled as a static or dynamic * library. This exposes a C-style interface, but none of the inner workings of libjson */ //#define JSON_LIBRARY /* * JSON_STRICT removes all of libjson's extensions. Meaning no comments, no special numbers */ //#define JSON_STRICT /* * JSON_DEBUG is used to perform extra error checking. Because libjson usually * does on the fly parsing, validation is impossible, so this option will allow * you to register an error callback so that you can record what is going wrong * before the library crashes. This option does not protect from these errors, * it simply tells you about them, which is nice for debugging, but not preferable * for release candidates */ //#define JSON_DEBUG /* * JSON_ISO_STRICT turns off all code that uses non-standard C++. This removes all * references to long long and long double as well as a few others */ //#define JSON_ISO_STRICT /* * JSON_SAFE performs similarly to JSON_DEBUG, except this option does protect * from the errors that it encounters. This option is recommended for those who * feel it's possible for their program to encounter invalid json. */ #define JSON_SAFE /* * JSON_STDERROR routes error messages to cerr instead of a callback, this * option hides the callback registering function. This will usually display * messages in the console */ //#define JSON_STDERROR /* * JSON_PREPARSE causes all parsing to be done immediately. By default, libjson * parses nodes on the fly as they are needed, this makes parsing much faster if * your program gets a lot of information that it doesn't need. An example of * this would be a client application communicating with a server if the server * returns things like last modified date and other things that you don't use. */ //#define JSON_PREPARSE /* * JSON_LESS_MEMORY will force libjson to let go of memory as quickly as it can * this is recommended for software that has to run on less than optimal machines. * It will cut libjson's memory usage by about 20%, but also run slightly slower. 
* It's recommended that you also compile using the -Os option, as this will also * reduce the size of the library */ //#define JSON_LESS_MEMORY /* * JSON_UNICODE tells libjson to use wstrings instead of regular strings, this * means that libjson supports the full array of unicode characters, but also takes * much more memory and processing power. */ //#define JSON_UNICODE /* * JSON_REF_COUNT causes libjson to reference count JSONNodes, which makes copying * and passing them around much faster. It is recommended that this stay on for * most uses */ #define JSON_REF_COUNT /* * JSON_BINARY is used to support binary, which is base64 encoded and decoded by libjson, * if this option is not turned on, no base64 support is included */ #define JSON_BINARY /* * JSON_EXPOSE_BASE64 is used to turn on the functionality of libjson's base64 encoding * and decoding. This may be useful if you want to obfuscate your json, or send binary data over * a network */ #define JSON_EXPOSE_BASE64 /* * JSON_ITERATORS turns on all of libjson's iterating functionality. This would usually * only be turned off while compiling for use with C */ #define JSON_ITERATORS /* * JSON_STREAM turns on libjson's streaming functionality. This allows you to give parts of * your json into a stream, which will automatically hit a callback when full nodes are * completed */ #define JSON_STREAM /* * JSON_MEMORY_CALLBACKS exposes functions to register callbacks for allocating, resizing, * and freeing memory. Because libjson is designed for costomizability, it is feasible * that some users would like to further add speed by having the library utilize a memory * pool. With this option turned on, the default behavior is still done internally unless * a callback is registered. So you can have this option on and mot use it. */ #define JSON_MEMORY_CALLBACKS /* * JSON_MEMORY_MANAGE is used to create functionality to automatically track and clean * up memory that has been allocated by the user. This includes strings, binary data, and * nodes. It also exposes bulk delete functions. */ //#define JSON_MEMORY_MANAGE /* * JSON_MEMORY_POOL Turns on libjson's iteraction with mempool++. It is more efficient that simply * connecting mempool++ to the callbacks because it integrates things internally and uses a number * of memory pools. This value tells libjson how large of a memory pool to start out with. 500KB * should suffice for most cases. libjson will distribute that within the pool for the best * performance depending on other settings. */ //#define JSON_MEMORY_POOL 524288 /* * JSON_MUTEX_CALLBACKS exposes functions to register callbacks to lock and unlock * mutexs and functions to lock and unlock JSONNodes and all of it's children. This * does not prevent other threads from accessing the node, but will prevent them from * locking it. It is much easier for the end programmer to allow libjson to manage * your mutexs because of reference counting and manipulating trees, libjson automatically * tracks mutex controls for you, so you only ever lock what you need to */ //#define JSON_MUTEX_CALLBACKS /* * JSON_MUTEX_MANAGE lets you set mutexes and forget them, libjson will not only keep * track of the mutex, but also keep a count of how many nodes are using it, and delete * it when there are no more references */ //#define JSON_MUTEX_MANAGE /* * JSON_NO_C_CONSTS removes consts from the C interface. 
It still acts the same way, but * this may be useful for using the header with languages or variants that don't have const */ //#define JSON_NO_C_CONSTS /* * JSON_OCTAL allows libjson to use octal values in numbers. */ //#define JSON_OCTAL /* * JSON_WRITE_PRIORITY turns on libjson's writing capabilties. Without this libjson can only * read and parse json, this allows it to write back out. Changing the value of the writer * changes how libjson compiles, and how fast it will go when writing */ #define JSON_WRITE_PRIORITY MED /* * JSON_READ_PRIORITY turns on libjson's reading capabilties. Changing the value of the reader * changes how libjson compiles, and how fast it will go when writing */ #define JSON_READ_PRIORITY HIGH /* * JSON_NEWLINE affects how libjson writes. If this option is turned on, libjson * will use whatever it's defined as for the newline signifier, otherwise, it will use * standard unix \n. */ //#define JSON_NEWLINE "\r\n" //\r\n is standard for most windows and dos programs /* * JSON_INDENT affects how libjson writes. If this option is turned on, libjson * will use \t to indent formatted json, otherwise it will use the number of characters * that you specify. If this is not turned on, then it will use the tab (\t) character */ //#define JSON_INDENT " " /* * JSON_ESCAPE_WRITES tells the libjson engine to escape special characters when it writes * out. If this option is turned off, the json it outputs may not adhere to JSON standards */ #define JSON_ESCAPE_WRITES /* * JSON_COMMENTS tells libjson to store and write comments. libjson always supports * parsing json that has comments in it as it simply ignores them, but with this option * it keeps the comments and allows you to insert further comments */ #define JSON_COMMENTS /* * JSON_WRITE_BASH_COMMENTS will cause libjson to write all comments in bash (#) style * if this option is not turned on, then it will use C-style comments. Bash comments are * all single line */ //#define JSON_WRITE_BASH_COMMENTS /* * JSON_WRITE_SINGLE_LINE_COMMENTS will cause libjson to write all comments in using // * notation, or (#) if that option is on. Some parsers do not support multiline C comments * although, this option is not needed for bash comments, as they are all single line anyway */ //#define JSON_WRITE_SINGLE_LINE_COMMENTS /* * JSON_ARRAY_SIZE_ON_ON_LINE allows you to put small arrays of primitives all on one line * in a write_formatted. This is common for tuples, like coordinates. If must be defined * as an integer */ //#define JSON_ARRAY_SIZE_ON_ONE_LINE 2 /* * JSON_VALIDATE turns on validation features of libjson. */ #define JSON_VALIDATE /* * JSON_CASE_INSENSITIVE_FUNCTIONS turns on funtions for finding child nodes in a case- * insenititve way */ #define JSON_CASE_INSENSITIVE_FUNCTIONS /* * JSON_INDEX_TYPE allows you th change the size type for the children functions. If this * option is not used then unsigned int is used. This option is useful for cutting down * on memory, or using huge numbers of child nodes (over 4 billion) */ //#define JSON_INDEX_TYPE unsigned int /* * JSON_BOOL_TYPE lets you change the bool type for the C interface. Because before C99 there * was no bool, and even then it's just a typedef, you may want to use something else. If this * is not defined, it will revert to int */ //#define JSON_BOOL_TYPE char /* * JSON_INT_TYPE lets you change the int type for as_int. 
If you ommit this option, the default * long will be used */ //#define JSON_INT_TYPE long /* * JSON_STRING_HEADER allows you to change the type of string that libjson uses both for the * interface and internally. It must implement most of the STL string interface, but not all * of it. Things like wxString or QString should wourk without much trouble */ //#define JSON_STRING_HEADER "../TestSuite/StringTest.h" /* * JSON_UNIT_TEST is used to maintain and debug the libjson. It makes all private * members and functions public so that tests can do checks of the inner workings * of libjson. This should not be turned on by end users. */ //#define JSON_UNIT_TEST /* * JSON_NO_EXCEPTIONS turns off any exception throwing by the library. It may still use exceptions * internally, but the interface will never throw anything. */ //#define JSON_NO_EXCEPTIONS /* * JSON_DEPRECATED_FUNCTIONS turns on functions that have been deprecated, this is for backwards * compatibility between major releases. It is highly recommended that you move your functions * over to the new equivalents */ #define JSON_DEPRECATED_FUNCTIONS /* * JSON_CASTABLE allows you to call as_bool on a number and have it do the 0 or not 0 check, * it also allows you to ask for a string from a number, or boolean, and have it return the right thing. * Without this option, those types of requests are undefined. It also exposes the as_array, as_node, and cast * functions */ #define JSON_CASTABLE /* * JSON_SECURITY_MAX_NEST_LEVEL is a security measure added to make prevent against DoS attacks * This only affects validation, as if you are worried about security attacks, then you are * most certainly validating json before sending it to be parsed. This option allows you to limitl how many * levels deep a JSON Node can go. 128 is a good depth to start with */ #define JSON_SECURITY_MAX_NEST_LEVEL 128 /* * JSON_SECURITY_MAX_STRING_LENGTH is another security measure, preventing DoS attacks with very long * strings of JSON. 32MB is the default value for this, this allows large images to be embedded */ #define JSON_SECURITY_MAX_STRING_LENGTH 33554432 #endif Anonymous 2011-10-12 Please let me know if you need any more info, thanks. Jonathan Wallace 2011-10-12 Ooooh, I see the problem. With the JSON_MEMORY_CALLBACKS option turned on, json_string is a std::string that takes an allocator. Either turn that option off, or use a json_string instead of an std::string. Anonymous 2011-10-13 @ninja9578 Thank you very much for the help! I have commented out the JSON_MEMORY_CALLBACKS macro, and now everything is working fine. Actually I was able to compile the test program with some workarounds like typecasting, putting json_string into stringstream and taking std::string out etc. But I could not seriously consider libjson for my evaluation because of these nasty workarounds. But now that I have the correct solution, I will include libjson in my performance evaluation. I hope I can publish the result soon. I have a suggestion with respect this issue. You could consider disabling this option by default in the JSONOptions.h file. That would help any newbie like me, who depends on the sample programs as a starting point. Thanks again, lijo Jonathan Wallace 2011-10-13 I only recently added that as a default option. I changed it as soon as I realized the problem. The next release will not have this problem. 
Hi, When i try the same program given above i get these errors, >C:\Users\CN\Downloads\libjson\libjson.h(43): error C2059: syntax error : '' 1>DST.cpp(10): error C2065: 'JSONNode' : undeclared identifier 1>DST.cpp(10): error C2146: syntax error : missing ';' before identifier 'n' 1>DST.cpp(10): error C2065: 'n' : undeclared identifier 1>DST.cpp(10): error C2653: 'libjson' : is not a class or namespace name 1>DST.cpp(10): error C3861: 'parse': identifier not found Actually my requirement is XML <->JSON is it possible in C++? Can u help me on this.
http://sourceforge.net/p/libjson/discussion/1119662/thread/4c5fe73a
CC-MAIN-2014-35
refinedweb
2,155
51.18
I'm working on a program that needs to calculate the total cost of tuition. Enter amount of credits -- multiply credits by 100.00 -- add 20.00 as student activity fee -- print final number. I can get the program to work the following way:

Code:
#include <iostream>
using namespace std;

int main()
{
    int iCredits;
    double fTotal;

    cout << "Please enter the number of credits you are taking this semester: ";
    cin >> iCredits;

    cout << "$ " << iCredits * 100 + 20 << endl;

    system("PAUSE");
    return 0;
}

When I try to break down the processing into functions I get some number with E in it. Can anyone help me with a suggestion to break down the above program using multiple functions? One to compute credits * 100.00, one to add the extra 20.00 and finally one to print out the final total.
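One possible decomposition along the lines the post asks for is sketched below; the function names are made up for the example. The "E" in the output is most likely scientific notation from printing a double (for instance an uninitialized value such as fTotal above); printing with std::fixed, and making sure the value you print was actually assigned, avoids it.

#include <iostream>
#include <iomanip>
using namespace std;

// multiply credits by the per-credit cost
double computeTuition(int credits) { return credits * 100.00; }

// add the flat student activity fee
double addActivityFee(double tuition) { return tuition + 20.00; }

// print the final total without scientific notation
void printTotal(double total)
{
    cout << fixed << setprecision(2) << "$ " << total << endl;
}

int main()
{
    int credits;
    cout << "Please enter the number of credits you are taking this semester: ";
    cin >> credits;

    printTotal(addActivityFee(computeTuition(credits)));
    return 0;
}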
http://cboard.cprogramming.com/cplusplus-programming/153877-question-functions-printable-thread.html
CC-MAIN-2015-11
refinedweb
154
73.58
Find the LCM of two numbers in python : What is the LCM of two numbers? The LCM or least common multiple of two numbers is the smallest number that is divisible by both of these numbers, i.e. the lowest number, starting from 1, that is divisible by both. To find the LCM of two numbers in Python or in any programming language, we can check each number for whether it is divisible by both or not. Or, we can start this counting from the greater number, which will save us a lot of time. Or, we can only check the multiples of the greater number instead. Which method will be the fastest? Of course the third one! In this tutorial, we will learn how to find out the LCM of two numbers in Python. The algorithm of the program looks like below:

Algorithm :
- Store the numbers in two constant variables. If you want, you can also read these numbers as input from the user.
- Find out the larger number between these two numbers.
- Assign the larger number as the LCM of these two numbers.
- Run one loop to find out the LCM of these numbers. This loop will run from the current value of the LCM (or the larger number) to the multiplication of both numbers. Note that this loop will not check all numbers in the range. It will only check the numbers that are divisible by the larger number. e.g. if we are finding the LCM of 3 and 4, 4 will be considered as the initial value of the required LCM. The loop will then check the numbers between 4 and 4 * 3 = 12. It will check 4, 8 and 12. Since 4 and 8 don't satisfy the condition, 12 is the required LCM.

Let's take a look at the python program :

Python Program :

def findLcm(a, b):
    large_no = 0
    if a > b:
        large_no = a
    else:
        large_no = b
    multiplier = 1
    lcm = large_no
    while lcm < (a * b):
        print("checking for ", lcm)
        if lcm % a == 0 and lcm % b == 0:
            break
        multiplier += 1
        lcm = large_no * multiplier
    print("lcm is ", lcm)

num1 = 31
num2 = 15
findLcm(num1, num2)

You can also download this program from here.

Description :
- To get the lcm of two numbers, we need to find a common multiple of both numbers, and the lowest common multiple will be the LCM. If one number is divisible by the other number, then the greater number will be the LCM. In the above example, we have one method named 'findLcm' that takes two numbers as input and prints out the LCM for both.
- First, we are checking which of these two numbers is greater and saving it to a variable 'large_no'.
- Consider the greater number as the lcm. If it is divisible by the smaller number, then it will be the lcm for both.
- Now, inside the while loop, we are checking if the 'lcm' is divisible by both the numbers or not. If yes, then print it as the lcm; if not, then change 'lcm' to the next multiple of the greater number, i.e. we are checking all the multiples of the greater number.
- This loop will exit if 'lcm' becomes equal to the multiplication of both the numbers.

Try this example with different numbers and let me know if you find any trouble with it. You can also modify the program to read the numbers as input from the user.

Similar tutorials :
- Python program to find the smallest divisor of a number
- Python program to convert an integer number to octal
- Python program to print the odd numbers in a given range
- Python program to find the gcd of two numbers using fractions module
- Python program to print all combinations of three numbers
- Python program to find out the sum of all digits of a number
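As a side note (not part of the original tutorial), the loop can be avoided entirely by going through the GCD, which the gcd tutorial in the list above hints at; for positive integers, lcm(a, b) = a * b // gcd(a, b):

from math import gcd

def find_lcm(a, b):
    # for positive integers: lcm(a, b) * gcd(a, b) == a * b
    return a * b // gcd(a, b)

print(find_lcm(31, 15))  # 465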
https://www.codevscolor.com/python-program-find-lcm-two-numbers/
CC-MAIN-2020-29
refinedweb
645
78.59
The Android platform provides an extensive range of user interface items that are sufficient for the needs of most apps. However, there may be occasions on which you feel the need to implement a custom user interface for a project you are working on. In this tutorial we will work through the process of creating a custom View. To create and use our custom View, we will extend the View class, define and specify some custom attributes, add the View to our layout XML, override the onDraw method to tailor the View appearance and manipulate it from our app's main Activity. Step 1: Create an Android Project Create a new Android project in Eclipse. You can choose whatever settings you like as long as your app has a main Activity class and a layout file for it. We do not need any amendments to the Manifest file. In the source code download file the main Activity is named "LovelyActivity" and the layout file is "activity_lovely.xml" - alter the code to suit your own names if necessary. We will be creating and adding to a few additional files as we go along. Step 2: Create a View Class Our custom View can extend any of the existing Android View classes such as Button or TextView. However, we will create a direct subclass of View. Extending an existing class allows you to use the existing functionality and styling associated with that class, while providing processing to suit your own additional needs. Create a new class in your application by selecting the app's main package in Eclipse and choosing "File", "New", "Class". Enter a name of your choice and click "Finish". The tutorial code uses the class name "LovelyView" - you will need to alter it in all of the below code if you choose a different name. Make your new class extend View by adding to its opening declaration line: public class LovelyView extends View { Add the following import statements above this: import android.content.Context; import android.content.res.TypedArray; import android.graphics.Canvas; import android.graphics.Paint; import android.graphics.Paint.Style; import android.util.AttributeSet; import android.view.View; Step 3: Create Attribute Resources In order to use our custom View as we would use a standard View (i.e. set its attributes in layout XML and refer to them in our Java code), we will declare attribute resources. In Eclipse, create a new file in your project "res/values" folder by selecting it and choosing "File", "New", "File". Enter "attrs.xml" as the file name and click "Finish". In the attributes file we first need to indicate that we are listing resources, so add the following parent element: <resources> </resources> Inside this element, we are going to declare three attributes for the View that will allow us to style it. Let's keep things relatively simple - the View is going to display a circle with some text in the middle. The three attributes will be the circle color, the text String, and the text color. Add the following inside your resources element: <declare-styleable <attr name="circleColor" format="color" /> <attr name="circleLabel" format="string"></attr> <attr name="labelColor" format="color"></attr> </declare-styleable> The declare-styleable element specifies the View name. Each attribute has a name and format. We will be able to specify these attributes in the layout XML when we add the custom View and also retrieve them in the View class. We will also be able to retrieve and set the attributes from our Java Activity class. 
The values provided for each attribute will need to be of the type listed here as format. Step 4: Add the View to the Layout Let's add an instance of the custom View to our app's main layout file. In order to specify the custom View and its attributes, we need to add an attribute to the parent layout element. In the source download, it is a RelativeLayout but you can use whichever type you prefer. Add the following attribute to your layout's parent element: xmlns:custom="" Alter "your.package.name" to reflect the package your app is in. This specifies the namespace for our app, allowing us to use the attributes we defined within it. Now we can add an instance of the new View. Inside the layout, add it as follows: <your.package.name.LovelyView android: Again, alter the package name to suit your own, and the class name if necessary. We will use the ID to refer to the View in our Activity code. Notice that the element lists standard View attributes alongside custom attributes. The custom attributes are preceded by "custom:" and use the names we specified in our attributes XML file. Note also that we have specified values of the types we indicated using the format attributes in the "attrs.xml" file. We will retrieve and interpret these values in our View class. Step 5: Retrieve the Attributes Now let's turn back to the View class we created. Inside the class declaration, add some instance variables as follows: //circle and text colors private int circleCol, labelCol; //label text private String circleText; //paint for drawing custom view private Paint circlePaint; We will use the first three of these to keep track of the current settings for color and text. The Paint object is for when we draw the View. After these variables, add a constructor method for your class: public LovelyView(Context context, AttributeSet attrs){ super(context, attrs); } As we are extending the View class, the first thing we do is call the superclass method. After the super call, let's extend the method to setup the View. First instantiate the Paint object: //paint object for drawing in onDraw circlePaint = new Paint(); Now let's retrieve the attribute values we set in XML: //get the attributes specified in attrs.xml using the name we included TypedArray a = context.getTheme().obtainStyledAttributes(attrs, R.styleable.LovelyView, 0, 0); This typed array will provide access to the attribute values. Notice that we use the resource name we specified in the "attrs.xml" file. Let's now attempt to retrieve the attribute values, using a try block in case anything goes wrong: try { //get the text and colors specified using the names in attrs.xml circleText = a.getString(R.styleable.LovelyView_circleLabel); circleCol = a.getInteger(R.styleable.LovelyView_circleColor, 0);//0 is default labelCol = a.getInteger(R.styleable.LovelyView_labelColor, 0); } finally { a.recycle(); } We read the attributes into our instance variables. Notice that we use the names we listed for each in "attrs.xml" again. The colors are retrieved as integer values and the text label as a String. That's the constructor method complete - by the time it has executed, the class should have retrieved the selected View attributes we defined in the attribute resources file and set values for in the layout XML. Step 6: Draw the View Now we have our View attributes in the class, so we can go ahead and draw it. To do this, we need to override the onDraw method. 
Add its outline after your constructor method as follows: @Override protected void onDraw(Canvas canvas) { //draw the View } Since we're going to draw a circle, let's get some information about the available space, inside the onDraw method: //get half of the width and height as we are working with a circle int viewWidthHalf = this.getMeasuredWidth()/2; int viewHeightHalf = this.getMeasuredHeight()/2; Now we can calculate the circle radius: //get the radius as half of the width or height, whichever is smaller //subtract ten so that it has some space around it int radius = 0; if(viewWidthHalf>viewHeightHalf) radius=viewHeightHalf-10; else radius=viewWidthHalf-10; Now let's set some properties for painting with: circlePaint.setStyle(Style.FILL); circlePaint.setAntiAlias(true); Now we will use the selected circle color as stored in our instance variable: //set the paint color using the circle color specified circlePaint.setColor(circleCol); This means that the circle will be drawn with whatever color we listed in the layout XML. Let's draw it now using these details: canvas.drawCircle(viewWidthHalf, viewHeightHalf, radius, circlePaint); Now let's add the text. First set the color using the value retrieved from the layout XML: //set the text color using the color specified circlePaint.setColor(labelCol); Now set some more properties: //set text properties circlePaint.setTextAlign(Paint.Align.CENTER); circlePaint.setTextSize(50); Finally we can draw the text, using the text string retrieved: //draw the text using the string attribute and chosen properties canvas.drawText(circleText, viewWidthHalf, viewHeightHalf, circlePaint); That's onDraw complete. Step 7: Provide Get and Set Methods When you create a custom View with your own attributes, it is recommended that you also provide get and set methods for them in your View class. After the onDraw method, first add the get methods for the three customizable attributes: public int getCircleColor(){ return circleCol; } public int getLabelColor(){ return labelCol; } public String getLabelText(){ return circleText; } Each method simply returns the value requested. Now add the set methods for the color attributes: public void setCircleColor(int newColor){ //update the instance variable circleCol=newColor; //redraw the view invalidate(); requestLayout(); } public void setLabelColor(int newColor){ //update the instance variable labelCol=newColor; //redraw the view invalidate(); requestLayout(); } These methods accept int parameters representing the color to set. In both cases we update the instance variable in question, then prompt the View to be redrawn. This will make the onDraw method execute again, so that the new values affect the View displayed to the user. Now add the set method for the text: public void setLabelText(String newLabel){ //update the instance variable circleText=newLabel; //redraw the view invalidate(); requestLayout(); } This is the same as the other two set methods except for the String parameter. We will call on these methods in our Activity class next. Step 8: Manipulate the View from the Activity Now we have the basics of our custom View in place, let's demonstrate using the methods within our Activity class. 
In the app's main Activity class, add the following import statements: import android.graphics.Color; import android.view.View; Before the onCreate method, inside the class declaration, add an instance variable representing the instance of the custom View displayed: private LovelyView myView; Inside the onCreate method, after the existing code, retrieve this using its ID as included in the XML layout file: myView = (LovelyView)findViewById(R.id.custView); To demonstrate setting the View attribute values from the Activity, we will add a simple button. Open your layout file and add it after the custom View element: <Button android: We specify a method to execute on user clicks - we will add this to the Activity class. First add the String to your "res/values/strings" XML file: <string name="btn_label">Press Me</string> Now go back to the Activity class and add the method listed for clicks on the button: public void btnPressed(View view){ //update the View } Let's use the set methods we defined to update the custom View appearance: myView.setCircleColor(Color.GREEN); myView.setLabelColor(Color.MAGENTA); myView.setLabelText("Help"); This is of course just to demonstrate how you can interact with a custom View within your Activity code. When the user clicks the button, the appearance of the custom View will change. Conclusion In general, it's advisable to use existing Android View classes where possible. However, if you do feel that you need a level of customization beyond the default settings, creating your own custom Views is typically straightforward. What we have covered in this tutorial is really just the beginning when it comes to creating tailored Android user interfaces. See the official guide for information on adding interactivity and optimization to your customizations.
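For reference, the snippets from Steps 5 to 7 assemble into a single View class along the following lines. This is a sketch pieced together from the fragments above rather than the article's verbatim listing, so treat the exact imports and member layout as illustrative.

import android.content.Context;
import android.content.res.TypedArray;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.View;

public class LovelyView extends View {

    //circle and text colors, label text, and the paint used in onDraw
    private int circleCol, labelCol;
    private String circleText;
    private Paint circlePaint;

    public LovelyView(Context context, AttributeSet attrs) {
        super(context, attrs);
        circlePaint = new Paint();
        //read the custom attributes declared in attrs.xml
        TypedArray a = context.getTheme().obtainStyledAttributes(attrs, R.styleable.LovelyView, 0, 0);
        try {
            circleText = a.getString(R.styleable.LovelyView_circleLabel);
            circleCol = a.getInteger(R.styleable.LovelyView_circleColor, 0);
            labelCol = a.getInteger(R.styleable.LovelyView_labelColor, 0);
        } finally {
            a.recycle();
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        int viewWidthHalf = getMeasuredWidth() / 2;
        int viewHeightHalf = getMeasuredHeight() / 2;
        //use the smaller half-dimension, minus a small margin, as the radius
        int radius = Math.min(viewWidthHalf, viewHeightHalf) - 10;

        circlePaint.setStyle(Paint.Style.FILL);
        circlePaint.setAntiAlias(true);
        circlePaint.setColor(circleCol);
        canvas.drawCircle(viewWidthHalf, viewHeightHalf, radius, circlePaint);

        circlePaint.setColor(labelCol);
        circlePaint.setTextAlign(Paint.Align.CENTER);
        circlePaint.setTextSize(50);
        canvas.drawText(circleText, viewWidthHalf, viewHeightHalf, circlePaint);
    }

    public int getCircleColor() { return circleCol; }
    public int getLabelColor() { return labelCol; }
    public String getLabelText() { return circleText; }

    //each setter updates the field and asks the View to redraw itself
    public void setCircleColor(int newColor) { circleCol = newColor; invalidate(); requestLayout(); }
    public void setLabelColor(int newColor) { labelCol = newColor; invalidate(); requestLayout(); }
    public void setLabelText(String newLabel) { circleText = newLabel; invalidate(); requestLayout(); }
}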
https://code.tutsplus.com/tutorials/android-sdk-creating-custom-views--mobile-14548
CC-MAIN-2020-40
refinedweb
1,983
53.21
Jan 03, 2008 02:07 AM|CompiledMonkey|LINK Jan 03, 2008 02:28 AM|ChadThiele|LINK Well, you pretty much hit on the answer yourself. Yes, there is a model folder that should hold your BLL/DAL tiers, but that doesn't stop you from separating them. You could just create a BLL namespace and a DAL namespace (I wouldn't do it that way, but that's an option). And yes, I it's a good idea to put your business and data access layers into a separate project for reuse across multiple platforms. That's where I think the MVC concept's strengths become apparent. It really opens your eyes to separation, at least compared to web forms asp.net. Jan 03, 2008 02:44 AM|CompiledMonkey|LINK Jan 03, 2008 03:00 AM|ChadThiele|LINK Well, typically (and everyone, correct me if I'm wrong) you'd want to have your Web project contain the controllers and views while your business logic and data access were in a separate project. You shouldn't need to reference the System.Web.Extensions from the business/data project. I'm not sure how to word this, but the business/data project should be able to build and perform its job alone. Your Web project would need the reference to System.Web.Extensions (and your business/data project), the extensions reference is done automatically when you create a new MVC application in Visual Studio. Jan 03, 2008 03:20 AM|CompiledMonkey|LINK Jan 03, 2008 03:33 AM|ChadThiele|LINK Glad I could help. Jan 03, 2008 05:59 AM|hhariri|LINK As Chad's pointed out, I don't see any reason why you would want to separate out your controller and views. The controller is pretty focused on working with the views you have in your MVC application. It's quite hard to re-use the controllers for other setups, and more for web service. Separate out our model (which is your application logic and it includes your data access and business logic) into separate projects. Those are the components that you can and should re-use in different application fronts. 6 replies Last post Jan 03, 2008 05:59 AM by hhariri
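A minimal sketch of the split the replies describe might look like the following, with the business and data access code in a plain class library that never references System.Web.Extensions, and the MVC web project's controller simply consuming it. All names here (Product, IProductRepository, SqlProductRepository) are invented for illustration, and present-day MVC signatures are used for brevity.

using System.Collections.Generic;

// Class library project: MyCompany.Core (business logic + data access)
namespace MyCompany.Core
{
    public interface IProductRepository
    {
        IList<Product> FindFeatured();   // DAL abstraction
    }

    public class ProductService
    {
        private readonly IProductRepository repository;

        public ProductService(IProductRepository repository)
        {
            this.repository = repository;
        }

        public IList<Product> GetFeaturedProducts()
        {
            // business rules live here, reusable from web, services, or batch jobs
            return repository.FindFeatured();
        }
    }
}

// MVC web project: the controller depends on the library, not the other way around
public class ProductsController : Controller
{
    private readonly ProductService service = new ProductService(new SqlProductRepository());

    public ActionResult Featured()
    {
        return View(service.GetFeaturedProducts());
    }
}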
http://forums.asp.net/p/1200328/2087441.aspx?Re+Do+you+find+that+the+MVC+framework+removes+the+traditional+BLL+DAL+
CC-MAIN-2015-18
refinedweb
374
70.84
##What is this Package Control everyone’s going on about? Packages, are custom plugins, snippets, and macros. Package Control is the Sublime Text package manager. It includes 2,500 packages that do everything from expand HTML tags to highlight hexadecimal colors to autocomplete file names. Additionally, users can add BitBucket or GitHub repositories themselves. The stats on package control are incredibly impressive. Today this server delivers around 5TB of compressed json data a month and has seen over 2.7 million unique clients connect. Full stats here. ##Installing Package Control Detailed instructions for installing Package Control can be found on the Package Control website. However, the gist is this (below). ###Simple Package Control Installation Go to the Sublime Text console via ctrl+ or View > Show Console. A window will open at the bottom of your current Sublime Text window. From there type or copy and paste: ####SUBLIME TEXT 3) ####SUBLIME TEXT 2 import urllib2') ###What does the installation do? The Package Control Installation will create the Packages folder for you. This is the folder where all of your packages will live. This folder is inside your Sublime Text directory in your Applications folder. The installation also downloads the Package Control.sublime-package into your new folder. ###How do I get to Package Control? cmd+shift+p will do it for you in OS X or ctrl+shift+p in Linux/Windows. This opens the Command Palette which offers you a text field where you can control Package Control. ####How do I install packages? - To install packages type install packageand a window will pop up showing you packages available for install. Beware: This is an exhaustively comprehensive list. You’re better off using the search field to find what you want. ####How do I add additional repositories? - To add a package hosted on GitHub, enter the URL in the form. Don’t include .git! BitBucket repositories should use the format. ####How do I remove packages? - To remove a package type remove package. This will delete the package folder and the package name from the installed_packageslist in Packages/User/Package Control.sublime-settings. Fun Fact You can copy the installed_packages list into the Packages/User/ folder on another computer and Package Control will automatically install the packages for you. ###The best packages for Sublime Text - The Package Control Homepage displays Trending, New, and Popular packages right on the main page. - The Google Developers Page has an article on useful Sublime Text packages. - Buffer Wall has an awesome list of great packages as well. ###A few packages I recommend - Auto File Name are you tired of images not loading or programs unable to locate files? Auto File Name will offer a dropdown menu of file names as you type. It even lets you drill down through several directories like images/team/accounting/image.jpg - Bracket Highlighter takes the guesswork out of which bracket your actually closing. This is great when you have a million <div>tags in HTML or 3 nested iterators in a ruby hash. The package will display the opening and closing bracket on the left hand side for any piece of code the cursor is on. brackethighlighter - Emmet is one of the most popular packages for Sublime Text with over 1.3 million downloads. It allows you to write HTML in shorthand and expand it using the tabkey. 
For example ul>li*3becomes <ul> <li></li> <li></li> <li></li> </ul> - Color Highlighter is a great plugin for anyone who hasn’t memorized the 16,777,216 hexadecimal colors, so most of us. It highlights any color in your code whether it’s hexadecimal, RGB, or just written out like ‘blue’. It also allows you to select colors using a color picker window.
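As a concrete example of the installed_packages list mentioned above, the Packages/User/Package Control.sublime-settings file is plain JSON; copying something like the following to another machine makes Package Control install those packages automatically. Package names are as they appear on Package Control and may change over time.

{
    "installed_packages":
    [
        "AutoFileName",
        "BracketHighlighter",
        "Color Highlighter",
        "Emmet"
    ]
}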
http://gilmoursa.github.io/blog/Package-Control/
CC-MAIN-2019-18
refinedweb
623
67.45
How track file size changes track file size changes How to track a change in file size - JSP-Servlet the file size. Anyone help me how to check the file size with this coding. I tried...file size Hello friends, I got the file upload coding in the roseindia at the following url: restrict jtable editing restrict jtable editing How to restrict jtable from editing or JTable disable editing? public class MyTableModel extends AbstractTableModel { public boolean isCellEditable(int row, int column How to adjust a size of a pdf file How to adjust a size of a pdf file In this program we are going to tell you how we can adjust the size of a pdf file irrespective of the fact whether it exists Struts file uploading - Struts Struts file uploading Hi all, My application I am uploading... upload the file with nominal size like 1 mb. But this functionality is breaking when I upload the large size file like 10 mb. Now my requirement is below HOW TO BECOME A GOOD PROGRAMMER HOW TO BECOME A GOOD PROGRAMMER I want to know how to become good programmer Hi Friend, Please go through the following link: CoreJava Tutorials Here you will get lot of examples with illustration where you can WANT TO RESTRICT EDITING AFTER A SET OF DATE WANT TO RESTRICT EDITING AFTER A SET OF DATE localhost : Server... to restrict user from "edit" existing data, after a set of date... an example... have any idea.. how to start and where.... can anyone tutor me? i'm Jmagick get image size Jmagick get image size Hi, How I can get the image size while using... size when using Jmagick ImageInfo ii = new ImageInfo(path+"\"+file... to get the image size when using Jmagick ImageInfo ii = new ImageInfo(path ARRAY SIZE!!! - Java Beginners have the size of it but No one has come up with a good answer/code yet...ARRAY SIZE!!! Hi, My Question is to: "Read integers from... don't know it's size (and I don't want to ask the user to enter it's size File Upload in Struts. File Upload in Struts. How to do File Upload in Struts How to decrease image thumbnail size in java? How to decrease image thumbnail size in java? Hi, I my web application I have to create the thumbnail size image from the image file uploaded by user. How to decrease image thumbnail size in java? Thanks How to create a zip file and how to compare zip size with normal text file How to create a zip file and how to compare zip size with normal text file Hi, how are you?I hope you are fine.I want program like how to create zip.../ZipCreateExample.shtml but when i verified the size of zip and normal file,normal file is showing Configuration file - struts.xml to override settings like the maximum file upload size or whether the Struts... Struts Configuration file - struts.xml  ... explains you how best you can use the struts.xml file for you big projects Java MappedByteBuffer example, How to create a large size file in java. Java MappedByteBuffer example, How to create a large file in java. In this tutorial, you will see how to create a large file with the help...;} } Output C:\>java CreateFile File Which is the good website for struts 2 tutorials? Which is the good website for struts 2 tutorials? Hi, After... for learning Struts 2. Suggest met the struts 2 tutorials good websites. Thanks Hi, Rose India website is the good Struts Struts What is called properties file in struts? How you call the properties message to the View (Front End) JSP Pages Struts File Upload and Save Upload</html:link> <br> Example shows you how to Upload File with Struts... Struts File Upload and Save  ... 
regarding "Struts file upload example". It does not contain any Struts Struts 1)in struts server side validations u are using programmatically validations and declarative validations? tell me which one is better ? 2) How to enable the validator plug-in file... in these three config file one by one respectively.So how I can place the code Struts upload file - Framework and send to file upload struts..how to get the sheets and data in that sheets Thanks. Hi friend, For upload a file in struts visit to : http...Struts upload file Hi, I have upload a file from struts what is the default buffer size for bufferedreader text file efficiently? Thanks Hi, The default buffer size...what is the default buffer size for bufferedreader Hi, I am writing a program in Java for reading the big text file. I want to know what 2 File Upload Struts 2 File Upload In this section you will learn how to write program in Struts 2 to upload the file... be used to upload the multipart file in your Struts 2 application. In this section you pdf default size you will better understand how you can adjust the paper size. To make... pdf default size In this program we are going to tell you how we can automatically set Java get Folder Size Java get Folder Size In this section, you will learn how to get the size of specified folder. In the given example, we are going to show you the Get all file size on FTP Server In this section we are going to describe how to get size of FTP server files using java struts struts hi in my application number of properties file are there then how can we find second properties file in jsp page Struts File Upload Example Struts File Upload Example In this tutorial you will learn how to use Struts to write... is the heart of the struts file upload application. This interface represents a file struts ;!-- This file contains the default Struts Validator pluggable validator... in this file. # Struts Validator Error Messages errors.required={0...struts <p>hi here is my code in struts i want to validate my limiting file size - JSP-Servlet limiting file size I 'am unable to limit the size of file usin jspx.Can some body guide me ? Thanks Struts - Struts Struts Is Action class is thread safe in struts? if yes, how it is thread safe? if no, how to make it thread safe? Please give me with good...:// - Framework Struts Good day to you Sir/madam, How can i start struts application ? Before that what kind of things necessary... of any size. Struts is based on MVC architecture : Model-View-Controller Can you suggest any good book to learn struts Can you suggest any good book to learn struts Can you suggest any good book to learn struts Hi, I m getting Error when runing struts application. i... /WEB-INF/struts-config.xml 1...-- java.lang.ClassNotFoundException: org.apache.struts.action.ActionServlet. how i can limiting file size - JSP-Servlet limiting file size I 'am unable to limit the size of file using... uploadMaxFileSize 100m Set the size limit... 100k Set the threshold size - Struts Articles with good Struts skills also will learn how key Struts concepts relate to Spring... security principle and discuss how Struts can be leveraged to address... and add some lines to the struts-config.xml file to get this going Integrate Struts, Hibernate and Spring Integrate Struts, Hibernate and Spring In this section you will learn how to download... are using one of the best technologies (Struts, Hibernate and Spring). This tutorial File size limit - JSP-Servlet File size limit i am still facing same issue. 
After adding parameter in web.xml web application is giving error.here is my web.xml file... /WEB-INF/tlds/c.tld Hi friend, To set the file Getting file size - JSP-Servlet Getting file size Hello friends, I am doing a jsp project. File uploading is one part of that. When uploading i have to check the file size. For this i used the java method .length(). But, for all type of files size of the image - Java Server Faces Questions size of the image Hello, how to set the size of the image while Inserting image in the pdf file using i text Hi friend,import java.io....("Pdf file successfully create."); doc Button Size HTML HTML Button Size can be changed according to programmer use. HTML button submits HTML page. In the following example we will learn how to change the size...;/html> Output: The following output shows two button with different size Books components How to get started with Struts and build your own... Edition maps out how to use the Jakarta Struts framework, so you can solve... - covering all the Struts components in a "how to use them" approach. You'll find hashtable size in Java? How to find hashtable size in Java? Hi, What is the code for Hashtable in Java? How to find hashtable size in Java? Give me the easy code. Thanks javascript max size javascript max size How to retrive Max size in JavaScript Struts Tutorials Tutorial This complete reference of Jakarta Struts shows you how to develop Struts... multiple Struts configuration files This tutorial shows Java Web developers how to set... struts-config.xml file for an existing Struts application into multiple How to change uploaded file root path ? How to change uploaded file root path ? I have file upload php script. File upload working is good. But I can't find where saving that file. I think the files saving in mysql directory. How to change directory to my desired path Struts DynaActionForm ; In this tutorial you will learn how to create Struts... in the struts-config.xml file. Add the following entry in the struts... in the struts-config.xml file Struts Guide and is open source. Struts Framework is suited for the application of any size... the official site of Struts. Extract the file ito... from scratch. We will use this file to create our web application. - struts Struts Struts Tell me good struts manual 2.2.1 - Struts 2.2.1 Tutorial in Struts application Example of File Upload Interceptor How... About Struts 2.2.1 Login application Create JSP file Create... in Struts 2.2.1 Introduction to PreResultListener How to work setting web page size setting web page size How to set the webpage size in Java Java Class Size Java Class Size How to determine size of java class Get File Size Get File Size  ... in understanding Get File Size. For this we have a class name" Get File... display the list of array file using for loop. Now in order to evaluate the size Based on struts Upload - Struts Based on struts Upload hi, i can upload the file in struts but i want the example how to delete uploaded file.Can you please give to compress a file in Java? How to compress a file in Java? How to compress a file in Java through Java code? I want a good example in Java. Thanks Hi, Check the tutorial Compressing the file into GZIP format. Thanks   Validating image size Validating image size How to validate image size to 2MB in javascript which is updating to mysql database A4 size . It will adjust the size of a pdf file irrespective of the fact whether it exists... 
A4 size In this program we are going to tell you how we can automatically assign the right RetController.java (do get) (my file for reference for a test.. IS LOGIC good Enough ? RetController.java (do get) (my file for reference for a test.. IS LOGIC good Enough ? try { Connection conn=Create_Connection.conOpen(); System.out.println(conn); String submit UILabel Font Size UILabel Font Size Hi Developers, How to set font size of UILabel? I am creating label programmatically and I have to set the font size programatically. How I can set UILabel Font Size? Thanks HI, Following Struts - Framework Struts Good day to you Sir/madam, How can i start struts application ? Before that what kind of things necessary.../struts/". Its a very good site to learn struts. You dont need to be expert In Java how to get size of class and object? In Java how to get size of class and object? In Java how to get size of class and object Struts PDF Generating Example in struts How to insert image in PDF file in struts2 How to set pdf...Struts PDF Generating Example To generate a PDF in struts you need to use struts stream result type as follows <result name="success" type
http://roseindia.net/tutorialhelp/comment/45903
CC-MAIN-2015-32
refinedweb
2,117
74.69
Python requires: reusing code [EXPERIMENTAL]¶ Warning This is an experimental feature subject to breaking changes in future releases. The python_requires() feature is a very convenient way to share files and code between different recipes. A Python Requires is just like any other recipe, it is the way it is required from the consumer what makes the difference. The Python Requires recipe file, besides exporting its own required sources, can export files to be used by the consumer recipes and also python code in the recipe file itself. Let’s have a look at an example showing all its capabilities (you can find all the sources in Conan examples repository): - Python requires recipe:import os import shutil from conans import ConanFile, CMake, tools from scm_utils import get_version class PythonRequires(ConanFile): name = "pyreq" version = "version" exports = "scm_utils.py" exports_sources = "CMakeLists.txt" def get_conanfile(): class BaseConanFile(ConanFile): settings = "os", "compiler", "build_type", "arch" options = {"shared": [True, False]} default_options = {"shared": False} generators = "cmake" exports_sources = "src/*")) def build(self): cmake = CMake(self) cmake.configure() cmake = [self.name] return BaseConanFile - Consumer recipefrom conans import ConanFile, python_requires base = python_requires("pyreq/version@user/channel") class ConsumerConan(base.get_conanfile()): name = "consumer" version = base.get_version() # Everything else is inherited We must make available for other to use the recipe with the Python Requires, this recipe won’t have any associated binaries, only the sources will be needed, so we only need to execute the export and upload commands: $ conan export . pyreq/version@user/channel $ conan upload pyreq/version@user/channel -r=myremote Now any consumer will be able to reuse the business logic and files available in the recipe, let’s have a look at the most common use cases. Import a python requires¶ To import a recipe as a Python requires it is needed to call the python_requires() function with the reference as the only parameter: base = python_requires("pyreq/version@user/channel") All the code available in the conanfile.py file of the imported recipe will be available in the consumer through the base variable. Important There are several important considerations regarding python_requires(): - They are required at every step of the conan commands. If you are creating a package that python_requires("MyBase/..."), the MyBasepackage should be already available in the local cache or be also consumed with a python_requires()from another package recipe. - They are not automatically updated with the --updateargument from remotes. - Different packages can require different versions in their python_requires(). They are private to each recipe, so they do not conflict with each other, but it is the responsibility of the user to keep consistency. - They are not overridden from downstream consumers. Again, as they are private, they are not affected by other packages, even consumers Reuse python sources¶ In the example proposed we are using two functions through the base variable: base.get_conanfile() and base.get_version(). The first one is defined directly in the conanfile.py file, but the second one is in a different source file that was exported together with the pyreq/version@user/channel recipe using the exports attribute. This works without any Conan magic, it is just plain Python and you can even return a class from a function and inherit from it. 
That’s just what we are proposing in this example: all the business logic in contained in the Python Requires so every recipe will reuse it automatically. The consumer only needs to define the name and version: from conans import ConanFile, python_requires base = python_requires("pyreq/version@user/channel") class ConsumerConan(base.get_conanfile()): name = "consumer" version = "version" # Everything else is inherited while all the functional code is defined in the python requires recipe file: from conans import ConanFile, python_requires [...] def get_conanfile(): class BaseConanFile(ConanFile): def source(self): [...] def build(self): [...] Reuse source files¶ Up to now, we have been reusing python code, but we can also package files within the python requires recipe and consume them afterward, that’s what we are doing with a CMakeList.txt file, it will allow us to share the CMake code and ensure that all the libraries using the same python requires will have the same build script. These are the relevant code snippets from the example files: - The python requires exports the needed sources (the file exists next to this conanfile.py):class PythonRequires(ConanFile): name = "pyreq" version = "version" exports_sources = "CMakeLists.txt" [...] The file will be exported together with the recipe pyreq/version@user/channelduring the call to conan export . pyreq/version@user/channelas it is expected for any Conan package. - The consumer recipe will copy the file from the python requires folder, we need to make this copy ourselves, there is nothing run automatically during the python_requires()call:class BaseConanFile(ConanFile): [...])) As you can see, in the inherited source()method, we are copying the CMakeLists.txt file from the exports_sources folder of the python requires (take a look at the python_requires attribute), and modifying a line to name the library with the current recipe name. In the example, our ConsumerConanclass will also inherit the build(), package()and package_info()method, turning the actual conanfile.py of the library into a mere declaration of the name and version. You can find the full example in the Conan examples repository.
https://docs.conan.io/en/1.20/extending/python_requires.html
CC-MAIN-2022-27
refinedweb
864
52.29
Step 3 – unit testing the functionality of the a new sproc This is the third blog in a series looking at how to take a TDD approach when creating new sprocs. It will look at the third step outlined in – Database unit testing patterns – how to test the functionality of a sproc. In this case, how to ensure that the data outputted meets the requirements. * It is assumed that you have read parts 1 and 2 in this series. Overview of the overall process - Identify the requirements – the data that should be returned by the sproc. - Ensure that appropriate test data has been set up. - Use the ExportDBDataAsXML tool to create XML files that contain the datasets that should be returned by the sproc. - Write the data comparison unit tests using DBTestUnit. When first run these will fail. - Write the implementation SQL script for the sproc. - Run the tests again. They will pass if the data returned by the sproc matches the ‘expected test data’ in the XML files. Why test the data outputted by the sproc The data outputted by a sproc is part of the data contract – the ‘DB API’ – offered by the database to its client. When changes/refactorings are made to the database, having a set of automated unit tests can make it easier to ensure that this DB API is maintained. Also, from a personal viewpoint, I find that writing tests first – ie defining the requirements – means the implementation is more likely to meet the requirements. Note The purpose of this blog is to give an initial introduction into taking a TDD approach when carrying out data comparison type unit testing. The implementation produced in this blog will not have all of the required query filters to ensure that only ‘current employees are returned’. The next blog in this series will show how to create more ‘granular’ tests – with each query filter explicitly unit tested. Background scenario The high level requirement can be outlined as: “For a given department the sproc should return the details of current employees in that department.” To test the data that will be outputted there are two areas that need defining: a) The data elements that the sproc should return. This along with the properties tested in the previous blog effectively make up the ‘DB API’ – the contract offered by the sproc to clients of it. b) The logic within the sproc that ensures only ‘current employees’ of a department are returned ie the query filters. The following data elements are required to be returned by the sproc*: BusinessEntityID (unique ID of a person) FirstName, LastName, JobTitle, StartDate and EndDate (of working for a particular department). * In reality to get to this point can involve a large degree of analysis/collabaration between different people. Ensuring that test data exists The next step is to ensure that appropriate test data exists. The list of data elements above can be sourced from the existing AdventureWorks database using the following tables: Person.Person, HumanResources.Employee and HumanResources.EmployeeDepartmentHistory The following SQL statement can be used to return the data that is expected for the department with a DepartmentID = The image below shows the data that is returned when this query is run against the database: At this point, the data currently in the sample database is appropriate for carrying out initial testing The next blog will look at actually creating data to test specific query filters. Creating the expected test datasets The ‘expected test data’, that the sproc should return, will be stored in XML/XSD files. 
The ExportDBDataAsXML tool can be used to create these. More detail on how to set up/configure this tool can be found at – Using DBTestUnit to test data outputs from SQL objects. For testing the new sproc the following is added to the ExportDBDataAsXML config file. <!--********************--> <!--SECTION FOR SPROCS--> <!--********************--> <Sprocs> <SQLObject> <FileName>TestData_uspGetCurrentEmployeesForDepartment_Dept2</FileName> <Execute>Yes</Execute> <SQLStatement> </SQLStatement> </SQLObject> When the tool is run it queries the database using the SQL set in the ‘SQLStatement’ element. The output is placed in XML/XSD files – with the names set by the ‘FileName’ element (the directory it is created in is configured in another section.) The data outputted is shown below for the XML and XSD files respectively: TestData_uspGetCurrentEmployeesForDepartment_Dept2.xml <?xml version="1.0" standalone="yes"?> <NewDataSet> <Table> <BusinessEntityID>4</BusinessEntityID> <FirstName>Rob</FirstName> <LastName>Walters</LastName> <JobTitle>Senior Tool Designer</JobTitle> <StartDate>2004-07-01T00:00:00+01:00</StartDate> </Table> <Table> <BusinessEntityID>11</BusinessEntityID> <FirstName>Ovidiu</FirstName> <LastName>Cracium</LastName> <JobTitle>Senior Tool Designer</JobTitle> <StartDate>2005-01-05T00:00:00+00:00</StartDate> </Table> <Table> <BusinessEntityID>12</BusinessEntityID> <FirstName>Thierry</FirstName> <LastName>D'Hers</LastName> <JobTitle>Tool Designer</JobTitle> <StartDate>2002-01-11T00:00:00+00:00</StartDate> </Table> <Table> <BusinessEntityID>13</BusinessEntityID> <FirstName>Janice</FirstName> <LastName>Galvin</LastName> <JobTitle>Tool Designer</JobTitle> <StartDate>2005-01-23T00:00:00+00:00</StartDate> </Table> </NewDataSet> TestData_uspGetCurrentEmployeesForDepartment_Dept2.xsd <?xml version="1.0" standalone="yes"?> <xs:schema > Once these files have been created they should be checked to ensure they are returning the expected data. Writing a test that defines the requirement/expectations The next step is to write the data comparison unit tests. As mentioned in previous blogs DBTestUnit provides a number of sample C# test templates. Therefore, the easiest way to get started is to copy one and change appropriately. The sample code below shows a data comparison test for the new sproc: using System; using MbUnit.Framework; using DBTestUnit.Util; namespace AdventureWorks.DatabaseTest.Tests.Sprocs.Functional { [TestFixture] public class uspGetCurrentEmployeesForDepartment { string dbInstance = "AdventureWorks"; string sprocTestFileDir = AppSettings.DirSprocExpectedResults(); public uspGetCurrentEmployeesForDepartment() { } ."); } } } What happens when the tests are run? When the test is first run it will fail. As executing the following SQL statement: ‘EXEC HumanResources.uspGetCurrentEmployeesForDepartment @departmentId=2’ will not return any data. Obviously this will not be the same when compared to the data contained in the expected test data XML/XSD files created above. The image below shows the output from the MBUnit console when this test is run Making the test pass Next just enough SQL is written to ensure that the test passes. A script – as shown below – is created and run against the database. IF EXISTS(SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE = 'PROCEDURE' AND ROUTINE_SCHEMA + '.' 
+ ROUTINE_NAME = 'HumanResources.uspGetCurrentEmployeesForDepartment') DROP PROC HumanResources.uspGetCurrentEmployeesForDepartment GO CREATE PROC HumanResources.uspGetCurrentEmployeesForDepartment @departmentID smallint AS SET NOCOUNT = @departmentID GO When the test is run again – it will now pass. Now, the data returned by the sproc is the same as that contained in the expected test data files. This is shown in image below: How does this work? There are two parts to a data comparison unit test as shown in the sample code below: ."); } 1. At the top – the ‘Row’ part. The names of the expected data test file and the actual SQL statement that will be executed are set. The XML/XSD files named: ‘TestData_uspGetCurrentEmployeesForDepartment_Dept2’ will be compared to the data returned by executing: ‘EXEC HumanResources.uspGetCurrentEmployeesForDepartment @departmentId=2’ 2. The DBTestUnit test method that will carry out the data comparison. The values set above are passed to the test method – ‘DataSetComparer.Compare’ (part of the DBTestUnit.Util namespace). This method ‘grabs’ the expected data from the XML/XSD files and transforms it into datasets and then compares this to the dataset returned by running the SQL statement against the database. If the datasets are the same the test method will return ‘true’ and the test will pass. If there are any differences it will return ‘false’ and the test will fail. What happens if ‘unexpected’ changes are made? There are two types of changes which these types of tests will identify. 1. Schema changes of the dataset returned. If this is changed in anyway eg column names, ordinal position, adding/removing a column and data types*. 2. Logic changes that cause the number of rows returned to change. For a given input there is an expected set of rows to be returned. If, for example, a change is made to a query filter that effects the rows returned – then this type of test will quickly identify this. * Due to the method currently used by DBTestUnit to compare datasets actually returned by the sproc to those which are expected (XML/XSD comparison) – some types of schema changes on the dataset returned by sproc will not be detected. Looking at the XSD – TestData_uspGetCurrentEmployeesForDepartment_Dept2.xsd shown above – if the underlying SQL column for ‘LastName’ is changed, for example, from nvarchar(50) to nvarchar(40) this type of test will not necessarily identify this. I hope to improve on this in a future version. Acknowledgements The idea behind how to carry out this data comparison and the implementation used within the DBTestUnit method – DataSetComparer.Compare – was based on an article written by Alex Kuznetsov and Alex Styler Close those Loopholes – Testing Stored Procedures. Future release The next release of DBTestUnit – post version – 0.4.0.428 – will use an updated version of MBUnit (from v2.4 to v3.2). The latest version of MBUnit makes it easier to work with XML – see MbUnit 3.2 makes it easier to work with Xml For example it has an inbuilt test method – Assert.Xml.AreEqual. The advantage of using this is the fact that if the assert fails it will display any differences between two XML files. This can save a lot of time troubleshooting failing tests. Writing a test will be very similar to the sample code shown above. Note the existing method DataSetComparer.Compare will continue to be supported. What next This blog has given a quick overview on how to take a TDD approach when testing the expected data that should be outputted by a new sproc. 
The next blog in this series will extend this and take a more detailed look at how to explicitly test each ‘query filter’.
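One note on the listings above: the CREATE PROC script in the "Making the test pass" section appears truncated here, with the SELECT body missing. Based on the tables and columns identified earlier, a first-pass implementation that would satisfy the data comparison test probably resembles the sketch below; it is a reconstruction, not the author's exact script, and the "current employee" filters are deliberately left for the next part, as the post says.

IF EXISTS(SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES
          WHERE ROUTINE_TYPE = 'PROCEDURE'
          AND ROUTINE_SCHEMA + '.' + ROUTINE_NAME = 'HumanResources.uspGetCurrentEmployeesForDepartment')
    DROP PROC HumanResources.uspGetCurrentEmployeesForDepartment
GO

CREATE PROC HumanResources.uspGetCurrentEmployeesForDepartment
    @departmentID smallint
AS
SET NOCOUNT ON

-- return the data elements defined as the 'DB API' for the given department
SELECT  e.BusinessEntityID,
        p.FirstName,
        p.LastName,
        e.JobTitle,
        edh.StartDate,
        edh.EndDate
FROM    HumanResources.Employee e
        JOIN Person.Person p ON p.BusinessEntityID = e.BusinessEntityID
        JOIN HumanResources.EmployeeDepartmentHistory edh ON edh.BusinessEntityID = e.BusinessEntityID
WHERE   edh.DepartmentID = @departmentID
GO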
https://dbtestunit.wordpress.com/2011/04/02/unit-testing-select-stored-procedures-part-3/
CC-MAIN-2017-39
refinedweb
1,662
55.34
Opened 7 years ago Closed 7 years ago Last modified 7 years ago #17011 closed Bug (fixed) Recursion error in TestCase class decorated by override_settings when overriding method on parent class and calling parent method Description A bit hard to explain but a fairly straightforward bug. Basically, if you have a sub class of TestCase and then a sub class of that sub class that's decorated by @override_settings at the class level, and you override a method and then call the parent method, you get a recursion error. Kind of easier to demonstrate in code: from django.test import TestCase from django.test.utils import override_settings class FooTests(TestCase): def setUp(self): # do some stuff pass @override_settings(FOO='bar') class SubFooTests(TestCase): def setUp(self): # recursion error! super(SubFooTests, self).setUp() I have a patch with a regression test that formally demonstrates this behavior but have not investigated it yet. It's possible it's related to #16224, but I don't really know. Attachments (1) Change History (8) Changed 7 years ago by comment:1 Changed 7 years ago by comment:2 Changed 7 years ago by comment:3 follow-up: 4 Changed 7 years ago by Thanks Carl. I think you are correct, monkey patching in this case is cleaner and less prone to weirdness. I moved the the tests I had created to settings_tests and worked the implementation. settings_tests all pass. I filed the patch as a pull request, hopefully I'm doing this right: comment:4 Changed 7 years ago by Thanks Carl. I think you are correct, monkey patching in this case is cleaner and less prone to weirdness. I moved the the tests I had created to settings_testsand worked the implementation. settings_testsall pass. I filed the patch as a pull request, hopefully I'm doing this right: Thanks for the patch! The pull request part is right, but something about the patch isn't; when I apply the patch and run the full test suite I get hundreds of infinite recursion errors. I haven't tracked down the cause. comment:5 Changed 7 years ago by Ah, dang, thanks. Looks like there were some issues in multiple inheritance scenarios, particularly where @override_settings was being applied to more than one class in the inheritance chain (e.g. the test class staticfiles_tests.TestCollectionNonLocalStorage). Shame on me for not running the full test suite with my first patch. Anyhow, the resolution is pretty straightforward: assigning the original pre-setup/post-teardown methods to local variables instead of class attributes, so they don't end up stepping on each others toes. I've updated my topic branch with the new commits, which is now reflected in the pull request. Full test suite now passes. comment:6 Changed 7 years ago by This is a release blocker, as its a major issue with a new feature in 1.4. This is because of the dynamic-subclass-creation magic that the decorator does. Basically, the problem is that there doesn't seem to be any way to write a Python class decorator that doesn't break weirdly in some cases (c.f. the recent threads about decorating class-based views on the -developers mailing list). I think in this case the decorator probably ought to just modify the given class in-place (monkeypatch it) rather than creating a dynamic subclass. Because I think the subclassing/super case you've demonstrated is much more common usage than someone trying to do this: Which is the case where monkeypatching fails (because both Foo and Bar end up modified). 
I guess it's possible someone might try that, but we could just tell them to do this instead: I am not aware of any perfect option here; I think in this case monkeypatching would fail less often than what we do currently.
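The two snippets referenced in this comment did not survive in this copy. The contrast being drawn is presumably along these lines (illustrative only):

# Works today only because the decorator builds a dynamic subclass:
# the undecorated Foo keeps its original settings.
class Foo(TestCase):
    def test_something(self):
        ...

Bar = override_settings(FOO='bar')(Foo)   # with monkeypatching, Foo would be modified too

# The workaround if the decorator modified classes in place:
# subclass first, then decorate only the subclass.
@override_settings(FOO='bar')
class Bar(Foo):
    pass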
https://code.djangoproject.com/ticket/17011
CC-MAIN-2018-39
refinedweb
640
60.45
(For more resources related to this topic, see here.) How to do it... The steps to handle events raised by different controls are as follows: Open the Pack.Ext2.Examples solution Press F5 or click on the Start button to run the solution. Click on the Direct Methods & Events hyperlink. This will run the example code for this recipe. Familiarize yourself with the code behind and the client-side markup. How it works... Applying the [DirectMethod(namespace="ExtNetExample")] attribute to the server-side method GetDateTime(int timeDiff) has exposed this method to our client-side code with the namespace of ExtNetExample, which we append to the method name call on the client side. As we can see in the example code, we call this server method in the markup using the Ext.NET button btnDateTime and the code ExtNetExamples.GetDateTime(3). When the call hits the server, we update the Ext.NET control lblDateTime text property, which updates the control related to the property. Adding namespace="ExtNetExample" allows us to neatly group server-side methods and the JavaScript calls in our code. A good notation is CompanyName.ProjectName. BusinessDomain.MethodName. Without applying the namespace attribute, we would access our server-side method using the default namespace of App.direct. So, to call the GetDateTime method without the namespace attribute, we would use App.direct. GetDateTime(3). We can also see how to return a response from Direct Method to the client-side JavaScript. If a Direct Method returns a value, it is sent back to the success function defined in a configuration object. This configuration object contains a number of functions, properties, and objects. We have dealt with the two most common functions in our example, the success and failure responses. The server-side method GetCar()returns a custom object called Car. If the btnReturnResponse button is clicked on and GetCar() successfully returns a response, we can access the value when Ext.NET calls the JavaScript function named in the success configuration object CarResponseSuccess. This JavaScript function accepts the response parameter from the method and we can process it accordingly. The response parameter is serialized into JSON, and so object values can be accessed using the JavaScript object notation of object.propertyValue. Note that we alert the FirstRegistered property of the Car object returned. Likewise, if a failure response is received, we call the client-side method CarResponseFailure alerting the response, which is a string value. There are a number of other properties that form a part of the configuration object, which can be accessed as part of the callback, for example, failure to return a response. Please refer to the Direct Methods Overview Ext.NET examples website ( Events/DirectMethods/Overview/ ). To demonstrate DirectEvent in action, we've declared a button called btnFireEvent and secondly, a checkbox called chkFireEvent. Note that each control points to the same DirectEvent method called WhoFiredMe. You'll notice that in the markup we declare the WhoFiredMe method using the OnEvent property of the controls. This means that when the Click event is fired on the btnFireEvent button and the Change event is fired on the chkFireEvent checkbox, a request to the server is made where we call the WhoFiredMe method. From this, we can get the control that invoked the request via the object sender parameter and the arguments of the event using the DirectEventArgs e method. 
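To make the wiring concrete, here is a cut-down sketch of the two halves just described. The method and control names mirror the example, but the exact attribute spelling and markup are illustrative rather than copied from the book's code download.

// Code-behind: a Direct Method exposed to page script under the chosen namespace,
// and a DirectEvent handler that receives the sender and event arguments
[DirectMethod(Namespace = "ExtNetExamples")]
public void GetDateTime(int timeDiff)
{
    lblDateTime.Text = DateTime.Now.AddHours(timeDiff).ToLongTimeString();
}

protected void WhoFiredMe(object sender, DirectEventArgs e)
{
    // inspect which control raised the event via sender and e
}

<!-- Markup: a client-side Listener invoking the Direct Method,
     and a DirectEvent wired to the server-side WhoFiredMe handler -->
<ext:Button ID="btnDateTime" runat="server" Text="Get server time">
    <Listeners>
        <Click Handler="ExtNetExamples.GetDateTime(3);" />
    </Listeners>
</ext:Button>

<ext:Button ID="btnFireEvent" runat="server" Text="Fire event">
    <DirectEvents>
        <Click OnEvent="WhoFiredMe" />
    </DirectEvents>
</ext:Button>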
Note that we don't have to decorate the DirectEvent method, WhoFiredMe, with any attributes. Ext.NET takes care of all the plumbing. We just need to specify the method that should be called on the server. There's more... Raising DirectMethods is far more flexible in terms of being able to specify the parameters you want to send to the server. You also have the ability to send the control objects to the server or to client-side functions using the #{controlId} notation. It is generally not a good idea though to send the whole control to the server from a Direct Method, as Ext.NET controls can contain references to themselves. Therefore, when Ext.NET encodes the control, it can end up in an infinite loop, and you will end up breaking your code. With a DirectEvent method, you can send extra parameters to the server using the ExtraParams property inside the control's event element. This can then be accessed using the e parameter on the server. Summary In this article we discussed how to connect client-side and server-side code. Resources for Article : Further resources on this subject: - Working with Microsoft Dynamics AX and .NET: Part 1 [Article] - Working with Microsoft Dynamics AX and .NET: Part 2 [Article] - Dynamically enable a control (Become an expert) [Article]
https://www.packtpub.com/books/content/extnet-%E2%80%93-understanding-direct-methods-and-direct-events
CC-MAIN-2015-40
refinedweb
785
65.12
Introduction to Namedtuple Python In Python, there are different types of containers like lists, set, dictionaries, etc. These containers can be used in programs to collect data to do this in Python we have collection modules which are an alternative to Python containers. These collection modules have different data structures to collect and store a set of data one such module is namedtuple. In Python, namedtuple is a function module of collection module which returns tuple or set of data where each data in a set or tuple are named. In general, every value in the tuple is given a name so that it will be easy to access each value by its name. Working of Namedtuple As tuple has integer index to access as they have no names there might grow an ambiguity to store and access the data to that particular integer index. Tuples are an ad-hoc structure which means if two tuples have the same number of fields and the same data stored then there is again an ambiguity for accessing. So to overcome all such problems Python introduces Namedtuple in the collections module. Python has containers like dictionaries called namedtuple() which supports the access to the value through key similar to dictionaries. Namedtuple is the class of collections module which also contains many other classes for storing data like sets, dictionaries, tuple, etc. Nametuple is an extension to built-in tuple data type, where tuples are immutable which means once created it cannot be modified. Here it shows how we can access the elements using keys and indexes. To use this namedtuple() you should always import collections in the program. Namedtuples offers a few users access and conversion methods that all start with a _ underscores. Namedtuple is mostly used on unstructured tuples and dictionaries which makes it easier for data accessing. Namedtuple makes an easy way to clean up the code and make it more readable which makes it a better structure for the data. There are different access and conversion operations on namedtuple. They are as follows: Access operations on Namedtuple() which we can access values using indexes, keys, and getattr() methods. - Access by index: In this, the values are accessed using index number, because the attribute values of namedtuple() are in order so it can be easily accessed by indexes. - Access by keys: In this, the working is similar to a dictionary where the values can be accessed using the keys given as allowed in dictionaries. - Access using getattr(): This is one of another method in which it takes namedtuple and key-value as its argument. Examples of Namedtuple Python Given below are the examples mentioned: Example #1 Code: import collections Employee = collections.namedtuple('Employee',['name','age','designation']) E = Employee('Alison','30','Software engineer') print ("Demonstration using index, The Employee name is: ",E.name) print ("Demonstration using keynames, The Employee age is : ",E[1]) print ("Demonstration using getattr(), The Employee designation is : ",getattr(E,'designation')) Output: In the above example firstly we create namedtuple() with tuple name as “Employee” and it has different attributes in the tuple named “name”, “age”, “designation”. The key-value of the Employee tuple can be accessed using 3 different ways as demonstrated in the program. There are some conversion operations that can be applied on namedtuple(). 
They are as follows: - _make(): This function converts any iterable passed as argument to this function to namedtuple() and this function returns namedtuple(). - _asdict(): This function converts the values of namedtuple that are constructed by mapping the values of namedtuple and returns the OrderDict(). - ** (double star) operator: This operator is used to convert to namedtuple() from dictionary. - _fields: This function returns all the keynames of the namedtuple that is declared. We can also check how many fields and which fields are there in the namedtuple(). - _replace(): This function replaces the values that are mapped with keynames that are passed as an argument to this function. Example #2 Demonstration of all the above conversion operations on namedtuple(). Code: import collections Employee = collections.namedtuple('Employee',['name','age','designation']) E = Employee('Alison','30','Software engineer') El = ['Tom', '39', 'Sales manager' ] Ed = { 'name' : "Bob", 'age' : 30 , 'designation' : 'Manager' } print ("The demonstration for using namedtuple as iterable is : ") print (Employee._make(El)) print("\n") print ("The demonstration of OrderedDict instance using namedtuple is : ") print (E._asdict()) print("\n") print ("The demonstration of converstion of namedtuple instance to dict is :") print (Employee(**Ed)) print("\n") print ("All the fields of Employee are :") print (E._fields) print("\n") print ("The demonstration of replace() that modifies namedtuple is : ") print(E._replace(name = 'Bob')) Output: The above program creates the namedtuple() “Employee” and then it creates iterable list and dictionary “El” and “Ed” which uses the conversion operations _make() which will convert the “El” to namedtuple(), _asdict() is also used to display the namedtuple() in the order that is using OrderDict(), the double start (**) which converts dictionary “Ed” to namedtuple(), E.fields which will print the fields in the declared namedtuple(), E.replace(name = “Bob”) this function will replace the name field value of the namedtuple() in this it replaces “Alison” to “Bob”. Conclusion In Python, we use namedtuple instead of the tuple as in other programming languages as this is a class of collections module which will provide us with some helper methods like getattr(), _make(), _asdict(), _fileds, _replace(), accessing with keynames, ** double star, etc. This function helps us to access the values by having keys as the arguments with the above different access and conversion functions on namedtuple() class of collections module. It is easier than tuples to use and is very efficient and readable than tuples. Recommended Articles This is a guide to Namedtuple Python. Here we discuss the introduction, working of namedtuple python along with examples. You may also have a look at the following articles to learn more –
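One detail worth seeing in action, since the article leans on it, is immutability: a namedtuple cannot be assigned to after creation, and _replace() returns a new instance rather than changing the original. A small illustration:

import collections

Employee = collections.namedtuple('Employee', ['name', 'age', 'designation'])
e = Employee('Alison', '30', 'Software engineer')

try:
    e.age = '31'               # direct assignment is not allowed
except AttributeError as err:
    print("cannot assign:", err)

e2 = e._replace(age='31')      # returns a new Employee, e is untouched
print(e)                       # Employee(name='Alison', age='30', designation='Software engineer')
print(e2)                      # Employee(name='Alison', age='31', designation='Software engineer')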
https://www.educba.com/namedtuple-python/
CC-MAIN-2020-24
refinedweb
973
51.28
I suspect that the issue may be related to the fact that I'm running 32-bit python on this computer (due to the need to use 32-bit libraries for some other parts of my experimental setup), and Phidgets might be trying to load the 64-bit drivers. Here is the minimum code required to reproduce the problem: Code: from Phidget22.Devices.DigitalOutput import DigitalOutput hub_serial_number = 530304 hub_port_number = 0 ch0 = DigitalOutput() ch0.setDeviceSerialNumber(hub_port_number) ch0.setChannel(0) ch0.openWaitForAttachment(timeout=1000) The error output I get is: Code: Traceback (most recent call last): ~~~snip text out here because of the stupid forum software~~~ line 551, in openWaitForAttachment raise PhidgetException(result) Phidget22.PhidgetException.PhidgetException: PhidgetException 0x03 (Timed Out) No matching devices were found to open. Make sure your device is attached, and that your addressing parameters are specified correctly. If your Phidget has a plug or terminal block for external power, ensure it is plugged in and powered. Again, the hub shows up in the Phidgets Control Panel, and I've run this same code on other machines that are configured the same, except that they have 64-bit python installed. Has anyone else run into something similar? Any suggestions? (Side note, I had to snip out a bunch of the traceback text above because the forum software thought it was an external URL, despite it just being a windows file path. That's super annoying.)
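For what it's worth, a quick way to confirm the 32-bit versus 64-bit interpreter theory from within the same environment (not part of the original script) is:

import platform
import struct

print(platform.architecture()[0])   # '32bit' or '64bit' for the running interpreter
print(struct.calcsize("P") * 8)     # pointer size in bits: 32 or 64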
https://www.phidgets.com/phorum/viewtopic.php?f=26&t=9496
CC-MAIN-2019-51
refinedweb
240
51.07
Problem with custom node typesCraig Ching Sep 8, 2010 2:34 PM Hi! First, I'm very new to JCR and ModeShape in particular, I've dabbled a bit in Jackrabbit, but I'd still consider myself pretty new to the subject. I'm sure I'm missing something fundamental, but I just can't figure out what it is. I've got the following code working with Jackrabbit, but I want to use ModeShape (in particular the Infinispan connector). Here's my code: public class JcrTest { static public void main(String [] args) throws Exception { Repository repo = null; Map<String, String> parameters = new HashMap<String, String>(); parameters.put("org.modeshape.jcr.URL", "file:cars-repository-config.xml?repositoryName=Cars"); for (RepositoryFactory factory : ServiceLoader.load(RepositoryFactory.class)) { repo = factory.getRepository(parameters); if (repo != null) { break; } } if (repo != null) { Session session = repo.login(); String [] prefixes = session.getNamespacePrefixes(); for (String prefix : prefixes) { System.out.println("Prefix: " + prefix); } Workspace ws = session.getWorkspace(); System.out.println("WS Name: " + ws.getName()); Node root = session.getRootNode(); Node type = root.addNode("Hybrid", "nt:unstructured"); for(int i=0;i<10;i++) { Node car = type.addNode("car_" + i, "car:Car"); car.setProperty("car:maker", "Volkswagen"); car.setProperty("car:model", "Passat"); car.setProperty("car:year", "2010"); car.setProperty("car:msrp", "$32,000"); } session.save(); } } and here is my repository configuration (basically a bastardization of the repository example): <?xml version="1.0" encoding="UTF-8"?> <configuration xmlns: <mode:sources jcr: <mode:source jcr: </mode:sources> <mode:repositories> <mode:repository jcr: <!-- Specify the source that should be used for the repository --> <mode:source>Cars</mode:source> <!-- Define the options for the JCR repository, using camelcase version of JcrRepository.Option names --> <mode:options jcr: <jaasLoginConfigName jcr: </mode:options> <!-- Define any custom node types. Importing CND files via JcrConfiguration is equivalent to specifying here. --> <jcr:nodeTypes mode: <!-- Define any namespaces for this repository, other than those already defined by JCR or ModeShape --> <namespaces jcr: <car jcr: </namespaces> </mode:repository> </mode:repositories> </configuration> 1. Re: Problem with custom node typesCraig Ching Sep 8, 2010 8:50 PM (in response to Craig Ching) I did a little debugging and I can't see that: <jcr:nodeTypes mode: has any effect. I don't see that JcrConfiguration.RepositoryDefinition.addNodeTypes is ever called. Anyway, looking for confirmation that specifying a .cnd in the repository configuration is or isn't a defect. I'll open one if someone tells me it is one. 2. Re: Problem with custom node typesRandall Hauch Sep 9, 2010 9:47 AM (in response to Craig Ching) The following line in your configuration file is not correct: <jcr:nodeTypes mode: The parameter value should not be a URL, but should be the name of a resource on the classpath (or a comma separate list of names of resources on the classpath). IOW, this particular line tells the engine to find the resource on the classpath named "file:cars.cnd", which of course does not work. This should work: <jcr:nodeTypes mode: There are several methods on CndNodeTypeReader that accept various types, including a URL, InputStream, java.io.File, and String. The method that takes a String specifying the name of a classpath resource does not currently throw an exception if the classpath resource cannot be found. 
This is definitely a bug. The documentation should give several examples, and the code should be more tolerant of leading '/' character (unfortunately, Class.getResourceAsStream() and ClassLoader.getResourceAsStream() have different requirements for the leading '/'). If you would, could you log a new issue in JIRA to correct & improve the CndNodeTypeReader.read(String) method? 3. Re: Problem with custom node typesCraig Ching Sep 9, 2010 10:10 AM (in response to Randall Hauch) Ah, ok, I didn't know the .cnd file had to be on the classpath, thanks for the information! About the lack of error indication when the resource isn't found by CndNodeTypeReader.read(String), I've opened an issue for that: Thanks for the help Randall! 4. Re: Problem with custom node typesRandall Hauch Sep 9, 2010 10:16 AM (in response to Randall Hauch) Actually, I should have said to try this line (with a leading "/" in the attribute value): <jcr:nodeTypes mode: Feel free to log an issue about the configuration should be more tolerant about a leading slash, too. 5. Re: Problem with custom node typesRandall Hauch Sep 9, 2010 10:26 AM (in response to Craig Ching) Ah, ok, I didn't know the .cnd file had to be on the classpath, thanks for the information! That's what we're used to, but it doesn't always makes sense in all applications (especially those that are stand-alone.) Feel free to log an enhancement request to handle files and URLs for CND references in ModeShape configurations. I think we could pretty easily try to load the CND files first as classpath resources, but if that fails we could try to parse into a resolvable URL, and (if that fails) finally look for an existing java.io.File with that path. We should make this as easy as possible, so your feedback is very much appreciated! 6. Re: Problem with custom node typesCraig Ching Sep 9, 2010 11:06 AM (in response to Randall Hauch) I think having it on the classpath more fits my actual needs, I was just prototyping and having it on the file system made sense for that, but node types, in my application, are more tightly coupled with my code so the classpath makes more sense. I'll bear it in mind and log an issue if my needs change. Thanks! 7. Re: Problem with custom node typesRandall Hauch Sep 9, 2010 6:26 PM (in response to Craig Ching) About the lack of error indication when the resource isn't found by CndNodeTypeReader.read(String), I've opened an issue for that: This issue has been fixed (on trunk), and will be in the next release (e.g., 2.3).
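For anyone reading along, a rough sketch of the programmatic route mentioned above (reading a CND from the classpath with CndNodeTypeReader) might look like the following. The constructor and exact calls are recalled from the ModeShape 2.x API and should be treated as assumptions, not verified code; check the CndNodeTypeReader javadoc for your version.

// Hedged sketch only; signatures may differ in your ModeShape release.
CndNodeTypeReader cndReader = new CndNodeTypeReader(session);
cndReader.read("/cars.cnd");   // classpath resource name, leading '/' included
// other overloads mentioned in this thread: read(URL), read(InputStream), read(java.io.File)
// the node type definitions it collects still need to be registered with the
// workspace's NodeTypeManager before the custom types can be used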
https://developer.jboss.org/thread/156325
CC-MAIN-2018-39
refinedweb
994
57.47
my task is to sort the input numbers from lowest to highest, but I just can't make it work... I've tried different ways and I know there's a problem in my code. Hope you can help me out. Thanks!

    #include <iostream.h>

    struct trial
    {
        int age;
    } getcount[5];

    void main()
    {
        int i = 0;
        int j = 1;
        int x = 0;
        int temp;

        cout << "Please enter 5 numbers:\n";
        for (i = 0; i < 5; i++)
        {
            cout << j << "number: ";
            cin >> getcount[i].age;
            j = j + 1;
        }

        for (i = 0; i < 5; i++)
        {
            temp = getcount[i].age;
            cout << "\n" << temp;
            /*if (x < temp)
            {
                x = temp;
                cout << "\n" << x;
            }*/
        }
    }
https://www.daniweb.com/programming/software-development/threads/185435/sorting
CC-MAIN-2017-09
refinedweb
107
90.6
This is the mail archive of the [email protected] mailing list for the GCC project. On Saturday 27 May 2006 01:52, Steven Bosscher wrote: > This is patch 1. OK for mainline? Part 2 cleans up some bits of CSE. There are far more significant cleanups possible (removing everything related to path following, to start with, and that is a _lot_ of code), but I don't suppose that would be acceptable at this stage so I'm probably going to do that on a branch or something. OK for mainline? Gr. Steven P.S. I forgot to say that part 1 was bootstrapped on x86_64-linux also _without_ this cleanup patch, and also tested on x86_64-linux, m32r-elf, mips-elf, and sh-elf. The combined pair of patches was tested on a larger number of targets. * cse.c (fold_rtx_subreg, fold_rtx_mem, find_best_addr, canon_for_address): Remove. (fold_rtx): Process SUBREGs and MEMs with equiv_constant, make simplification loop more straightforward by not calling fold_rtx recursively. (equiv_constant): Move here a small part of fold_rtx_subreg, do not call fold_rtx. Call avoid_constant_pool_reference to process MEMs. * recog.c (canonicalize_change_group): New. * recog.h (canonicalize_change_group): New. Index: cse.c =================================================================== --- cse.c (revision 114139) +++ cse.c (working copy) @@ -600,7 +600,6 @@ static inline unsigned safe_hash (rtx, e *); @@ -731,57 +730,6 @@ approx_reg_cost (rtx x) @@ -2822,231 +2770,6 @@ canon_reg (rtx x, rtx - && ((p->cost + 1) >> 1) > best_rtx_cost))) - { - found_better = 1; - best_addr_cost = exp_cost; - best_rtx_cost = (p->cost + 1) >> 1; -. @@ -3241,380 +2964,15 @@ find_comparison_args (enum rtx_code code. */ - -static rtx -fold_rtx_mem . +/* If X is a nontrivial arithmetic operation on an argument for which + a constant value can be determined, return the result of operating + on that value, as a constant. Otherwise, return X, possibly with + one or more operands changed to a forward-propagated constant. + + If X is a register whose contents are known, we do NOT return + those contents here; equiv_constant is called to perform that task. + For SUBREGs and MEMs, we do that both here and in equiv_constant. INSN is the insn that we may be modifying. If it is 0, make a copy of X before modifying it. */ @@ -3627,10 +2985,9 @@ fold_rtx (rtx x, rtx insn) const char *fmt; int i; rtx new = 0; - int copied = 0; - int must_swap = 0; + int changed = 0; - /* Folded equivalents of first two operands of X. */ + /* Operands of X. */ rtx folded_arg0; rtx folded_arg1; @@ -3647,10 +3004,16 @@ fold_rtx (rtx x, rtx insn) if (x == 0) return x; - mode = GET_MODE (x); + /* Try to perform some initial simplifications on X. */ code = GET_CODE (x); switch (code) { + case MEM: + case SUBREG: + if ((new = equiv_constant (x)) != NULL_RTX) + return new; + return x; + case CONST: case CONST_INT: case CONST_DOUBLE: @@ -3670,28 +3033,6 @@ fold_rtx (rtx x, rtx ins) { @@ -3699,12 +3040,21 @@ fold_rtx (rtx x, rtx insn) validate_change (insn, &ASM_OPERANDS_INPUT (x, i), fold_rtx (ASM_OPERANDS_INPUT (x, i), insn), 0); } + return x; + +#ifdef NO_FUNCTION_CSE + case CALL: + if (CONSTANT_P (XEXP (XEXP (x, 0), 0))) + return x; break; +#endif + /* Anything else goes through the loop below. 
*/ default: break; } + mode = GET_MODE (x); const_arg0 = 0; const_arg1 = 0; const_arg2 = 0; @@ -3717,55 +3067,13 @@ fold_rtx (rtx x, rtx insn); - + rtx folded_arg = XEXP (x, i), const_arg; + enum machine_mode mode_arg = GET_MODE (folded_arg); #ifdef HAVE_cc0 - case CC0: - folded_arg = prev_insn_cc0; - mode_arg = prev_insn_cc0_mode; - const_arg = equiv_constant (folded_arg); - break; + if (CC0_P (folded_arg)) + folded_arg = prev_insn_cc0, mode_arg = prev_insn_cc0_mode; #endif - - default: - folded_arg = fold_rtx (arg, insn); - const_arg = equiv_constant (folded_arg); - } + const_arg = equiv_constant (folded_arg); /* For the first three operands, see if the operand is constant or equivalent to a constant. */ @@ -3785,120 +3093,50 @@ fold_rtx (rtx x, rtx insn); + /* Pick the least expensive of the argument and an equivalent constant + argument. */ + if (const_arg != 0 + && const_arg != folded_arg + && COST_IN (const_arg, code) <= COST_IN (folded_arg, code) /*; + && (GET_RTX_CLASS (code) != RTX_UNARY + || GET_MODE (const_arg) == mode_arg0 + || (code != ZERO_EXTEND + && code != SIGN_EXTEND + && code != TRUNCATE + && code != FLOAT_TRUNCATE + && code != FLOAT_EXTEND + && code != FLOAT + && code != FIX + && code != UNSIGNED_FLOAT + && code != UNSIGNED_FIX))) + folded_arg = const_arg; -; - } - } - } - } + if (folded_arg == XEXP (x, i)) + continue; - else - { - if (fmt[i] == 'E') - /* Don't try to fold inside of a vector of expressions. - Doing nothing is harmless. */ - {;} + if (insn == NULL_RTX && !changed) + x = copy_rtx (x); + changed = 1; + validate_change (insn, &XEXP (x, i), folded_arg, 1); } - /* If a commutative operation, place a constant integer as the second - operand unless the first operand is also a constant integer. Otherwise, - place any constant second unless the first operand is also a constant. */ - - if (COMMUTATIVE_P (x)) + if (changed) { -; - } + /* Canonicalize X if necessary, and keep const_argN and folded_argN + consistent with the order in X. */ + if (canonicalize_change_group (insn, x)) + { + rtx tem; + tem = const_arg0, const_arg0 = const_arg1, const_arg1 = tem; + tem = folded_arg0, folded_arg0 = folded_arg1, folded_arg1 = tem; } + + apply_change_group (); } /* If X is an arithmetic operation, see if we can simplify it. */ @@ -4403,16 +3641,31 @@ equiv_constant (rtx x) (GET_CODE (x) == SUBREG) + { + rtx new; + + /* See if we previously assigned a constant value to this SUBREG. */ + if ((new = lookup_as_function (x, CONST_INT)) != 0 + || (new = lookup_as_function (x, CONST_DOUBLE)) != 0) + return new; + + if (REG_P (SUBREG_REG (x)) + && (new = equiv_constant (SUBREG_REG (x))) != 0) + return simplify_subreg (GET_MODE (x), SUBREG_REG (x), + GET_MODE (SUBREG_REG (x)), SUBREG_BYTE (x)); + + return 0; + } + + /* If X is a MEM, see if it is a constant-pool reference, or look it up in + the hash table in case its value was seen before. */ if (MEM_P (x)) { struct table_elt *elt; - x = fold_rtx (x, NULL_RTX); + x = avoid_constant_pool_reference (x); if (CONSTANT_P (x)) return x; Index: recog.c =================================================================== --- recog.c (revision 114139) +++ recog.c (working copy) @@ -238,6 +238,28 @@ validate_change (rtx object, rtx *loc, r return apply_change_group (); } +/* Keep X canonicalized if some changes have made it non-canonical; only + modifies the operands of X, not (for example) its code. Simplifications + are not the job of this routine. + + Return true if anything was changed. 
*/ +bool +canonicalize_change_group (rtx insn, rtx x) +{ + if (COMMUTATIVE_P (x) + && swap_commutative_operands_p (XEXP (x, 0), XEXP (x, 1))) + { + /* Oops, the caller has made X no longer canonical. + Let's redo the changes in the correct order. */ + rtx tem = XEXP (x, 0); + validate_change (insn, &XEXP (x, 0), XEXP (x, 1), 1); + validate_change (insn, &XEXP (x, 1), tem, 1); + return true; + } + else + return false; +} + /* This subroutine of apply_change_group verifies whether the changes to INSN were valid; i.e. whether INSN can still be recognized. */ Index: recog.h =================================================================== --- recog.h (revision 114139) +++ recog.h (working copy) @@ -74,6 +74,7 @@ extern void init_recog_no_volatile (void extern int check_asm_operands (rtx); extern int asm_operand_ok (rtx, const char *); extern int validate_change (rtx, rtx *, rtx, int); +extern bool canonicalize_change_group (rtx insn, rtx x); extern int insn_invalid_p (rtx); extern int verify_changes (int); extern void confirm_change_group (void);
http://gcc.gnu.org/ml/gcc-patches/2006-05/msg01391.html
CC-MAIN-2017-30
refinedweb
1,055
62.17
“add css file in html” Code Answer

how to link css to html
css by Thankful Toucan on Jan 17 2020 Donate

how to link to a css file in html
css by TechWhizKid on Mar 20 2020 Donate

add css file in html
css by Ankur on May 10 2020 Donate

    /* Adding css file into html document */
    /* Add this line into the head tag of the file, changing the file name as needed. */
    /* I hope it will help you. Namaste */
    <link rel="stylesheet" href="style.css">

insert css file in html
css by Jolly Jackal on Feb 12 2020 Donate

    This is a heading
    This is a paragraph.

Source:

how do i link my css to my html
css by Witty Wasp on Feb 09 2020 Donate
opening page html + css link href="style.css" css use using # css css connect imprt css in html fie style.css or styles.css how to use style sheet in html inserting a style tag in html html include css header referencing stylesheet in html how to write -% in CSS external css how to use in html file how to call .css in .html css > use how to use anina for html css css use @ css from external file html link a css file css inside html document example styles in html header html imbedded css javascript link css add css block to page Which HTML element is used to refer to an external style sheet? how to link an internal styling sheet in html html document inline html css connection how to style webpage w3schools referring an external style sheet in the html page importing css in html css in html document html and css link css types how to add style to single element in html how to link html to css stylesheet css with a stylesgeet style from css external style sheet example css link css to hmtl base css style sheet adding styling to html how to apply stylesheets in text/html embedded css style css a linking css to 3 html ho to add css file how to write css inside css file linking stylesheet in html .css file inline css html add external css in html how to reference your stylesheet in html html css file how to link own css stylesheet css in incluled styles in styles.css linking css style to html External CSS - element How to refer the .css file in the web page? 3 type of style in html how to link a stylesheet in html how to style + css link href= phone.css rel= stylesheet link for html to css how to call scc in html How to use @ in css Css styling formats stylre.css color inline styling java script link style sheet how to create a .css link html linking external css define inline css how to get css to work in html include file css in html stiley css in html where do i put css html5 insert css style tag not import css link css sheet to html use Css on an HTML style.css file link in css styles define css in html css inside add css to using -moz- css inline style in php line css htl inline style By continuing, I agree that I have read and agree to Greppers's and . Continue with Google
https://www.codegrepper.com/code-examples/html/add+css+file+in+html
CC-MAIN-2020-50
refinedweb
1,131
60.05
question in a beginning java problem

kanaka tam, Ranch Hand
Joined: Jan 19, 2004  Posts: 42
posted Apr 02, 2005 16:56:00

    import java.awt.*;
    import java.applet.*;
    import java.awt.event.*;

    public class Lab7 extends Applet implements ActionListener
    {
        private Button savingsButton, checkingButton;
        private TextField amountTF;
        private int amount;
        private int check;
        private int save;
        private Account checking, saving;

        public void init()
        {
            Label inputLbl;
            inputLbl = new Label("Enter an amount to deposit or withdraw:");
            add(inputLbl);
            amountTF = new TextField(10);
            add(amountTF);
            amountTF.addActionListener(this);
            savingsButton = new Button("Savings");
            add(savingsButton);
            savingsButton.addActionListener(this);
            checkingButton = new Button("Checking");
            add(checkingButton);
            checkingButton.addActionListener(this);
            checking = new Account();
            saving = new Account();
        }

        public void actionPerformed(ActionEvent event)
        {
            if (event.getSource() == savingsButton)
            {
                amount = Integer.parseInt(amountTF.getText());
                if (amount > 0)
                {
                    saving.depositMoney(amount);
                    save = saving.getBalance();
                }
                else if (amount < 0)
                {
                    saving.withdrawMoney(amount);
                    save = saving.getBalance();
                }
            }
            if (event.getSource() == checkingButton)
            {
                amount = Integer.parseInt(amountTF.getText());
                if (amount > 0)
                {
                    checking.depositMoney(amount);
                    check = checking.getBalance();
                }
                else if (amount < 0)
                {
                    checking.withdrawMoney(amount);
                    check = checking.getBalance();
                }
            }
            repaint();
        }

        public void paint(Graphics g)
        {
            g.drawString("Your Checking Account Balance = $" + check, 70, 90);
            g.drawString("Your Saving Account Balance = $" + save, 70, 110);
            if (check < 0)
                g.drawString("Warning!!! Negative Checking Balance", 70, 130);
            if (save < 0)
                g.drawString("Warning!!! Negative Saving Balance", 70, 150);
        }
    }

    class Account
    {
        private int balance;

        public Account()
        {
        }

        public void depositMoney(int amount)
        {
            balance = balance + amount;
        }

        public void withdrawMoney(int amount)
        {
            balance = balance + amount;
        }

        public int getBalance()
        {
            return balance;
        }

        public void display()
        {
        }
    }

I wrote the above program for simulating a bank account, and it works fine, but the rule is that I should use a display method for displaying the output: I should put the code inside this method instead of where I have put it in paint(), and call the method inside paint(). I am not sure how to do that. Can anyone help me?

kanakatam

Layne Lund, Ranch Hand
Joined: Dec 06, 2001  Posts: 3061
posted Apr 02, 2005 17:08:00

If I understand correctly, do just as you described. First create a method named display and move the code that you currently have in paint() to it:

    public void display(Graphics g)
    {
        // put your code here
    }

Then change paint() to call display():

    public void paint(Graphics g)
    {
        display(g);
    }

Notice that I made display take a reference to a Graphics object as a parameter. This allows paint() to pass the correctly configured object on to it. If you have any more questions, let us know.

Layne
HTH
Java API Documentation
The Java Tutorial

kanaka tam, Ranch Hand
Joined: Jan 19, 2004  Posts: 42
posted Apr 02, 2005 18:32:00

Layne,

Thank you for your reply. But I did try that by putting the code in the method display and calling it inside paint. But the variables check and save are local variables to the class Lab7, and so I cannot use them inside the method display and call the method display inside the paint() method. If I try to declare these variables local to the Account class, then it does not recognize the savings and checking instances.
Also, I get the error message "cannot resolve symbol" when using display(g) inside the paint method. I am flustered now. Any help appreciated. Thanx

kanakatam

Layne Lund, Ranch Hand
Joined: Dec 06, 2001  Posts: 3061
posted Apr 03, 2005 12:23:00

After looking more closely at your code, it doesn't seem nearly as simple as I thought when I first read your question. One of the problems is that I didn't notice that you already have a display() method in the Account class, but the paint() method is in the Lab7 class. When you have two methods in different classes, it helps to use the class name along with the method to avoid confusion. So here we should talk about Account.display() and Lab7.paint() to clarify that the two methods are in different classes. This is the essential part I was missing with my previous comments; I thought that these two methods were in the same class.

So to figure this out, let's back up for a minute. What exactly should Account.display() do? It seems likely that it should display the information about a single Account. I would suggest that you figure out how to do this. Notice that the Account class doesn't need the variables check and save because, as you are using them in the Lab7 class, they represent the balances for two different accounts. But the Account class only represents a SINGLE account, and it already has its own variable to keep track of that account's balance. In fact, you don't really need the check and save variables in Lab7 because those values are indirectly available through the saving and checking variables that represent each account.

So to reiterate, you should try to fill in Account.display() so that it prints the information for the "current account". Once you get that much, you can work on calling this method from Lab7.paint(). I hope this helps. Let us know what problems you encounter from there.

Keep Coding!

Layne

p.s. I'd also like to mention that saying "the variables check and save are local variables to the class Lab7" may be a little confusing to most Java programmers. I think I understand what you mean, but you are not using the term "local variables" correctly. To clarify, a local variable is one that is declared inside a method. Since check and save are declared in the class and outside any method, they are NOT local variables; they are "member variables".

[ April 03, 2005: Message edited by: Layne Lund ]
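One possible shape for Layne's suggestion, sketched only as an illustration: the extra label and y-coordinate parameters are assumptions, not part of the original assignment, and Account would also need java.awt.Graphics imported (Lab7 already has it via java.awt.*).

    // In Account: draw the information for this single account.
    // "label" and "y" are assumed parameters for the sketch.
    public void display(Graphics g, String label, int y)
    {
        g.drawString("Your " + label + " Account Balance = $" + balance, 70, y);
        if (balance < 0)
            g.drawString("Warning!!! Negative " + label + " Balance", 70, y + 40);
    }

    // In Lab7: paint() only forwards the Graphics object to each account.
    public void paint(Graphics g)
    {
        checking.display(g, "Checking", 90);
        saving.display(g, "Saving", 110);
    }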
http://www.coderanch.com/t/399189/java/java/beginning-java
CC-MAIN-2014-52
refinedweb
1,002
64.41
Using Try/Catch Blocks with Java Dialogues Using Type Casting and the TRY/CATCH Block with Java Swing Dialogues One of the most important things to remember when you decide to create a user interface is to expect possible user errors that may come up. Indeed, one of the most embarrassing things that could happen in your program is when you ask for user input and they accidentally enter the wrong input type and your program crashes. Yes, it is true that the user made the error, however, as a programmer you must keep this in mind when trying to create you interface. So, what can you do to make sure this never happens? Well, like all programs, there must be prior planning before you decide that the program is good and try to export it as a usable product. You must consider not only what the interface must look like but how it will handle possible errors. Of course the possibilities are great, however, if you plan out your program prior to writing any code or module, you can possibly catch most of these input errors before you program ever makes it to the market. In this article, I will create code using dialogues in Java’s Swing class to show you how you can use a generic try/catch block that will allow you to catch a type of input error in which you can correct while the program is running. First, I will show you what I usually do to plan out the specifics of a program by showing you how to use a flow chart and pseudo code. Secondly, this program will also show you how to type cast string input from a Java based dialogue box and change the input to a number in which you can manipulate the number via any type of math algorithm. Third, I will show you how to use a try/catch block to catch user input errors. Last, I will show you the actual JAVA code that I created in NetBeans using the built in JAVA Swing class. Again, as stated earlier, the version of NetBeans I will be using is 7.0.1. Planning for Possible Errors: One of the things I like to do is to find out what exactly the user needs from a particular interface. Is it to add numbers or enter text? You have to remember that in pretty much all programming languages there is something known as variable type. When you define a variable or class it is a type (i.e. an int, double, String, etc.). This type must be exact and the only way to make sure user input is the same type is to either catch the error before it is processed or purposely cast the type to fit the correct input. So, you may have to plan how you will handle this issue by planning accordingly. When I create a plan to handle such issues of a programming language, I usually will fill out a flowchart with some pseudo code (i.e. code that represents more of an English view without actually using a programs code.). In the case of this example, I want to ask the user to input two numbers by way of two dialogue boxes and in return the program will add the two numbers and show a dialogue box with the results. With that said, the following would be the flowchart and pseudo code I would use to help me plan to create the program that catches any incorrect user inputs. Program FLow Chart So, by looking at this flowchart, I can plan out the code to catch input errors made by the user and have the program request the correct input. Now the following would be the pseudo code I would use in this situation. Remember, pseudo code is not program specific, it just give you a visual or idea on how you may write the code later within the programming language that you choose. 
Program Pseudo-Code boolean done = false; While (!done) String firstNumber = Show Dialogue (input firstNumber); String secondNumber=Show Dialogue (input seconeNumber); Try { Double number1 = cast (firstNumber); Double number 2 = cast (firstNumber); Double sum = number1 + number 2; showMessageDialogue (sum); done = true; } Catch (numberException) { showDialogue (“You have to Enter Two Numbers”) } Again, the above is only a pseudo code and could be refined to any programming language you wish. Just remember that pseudo code is only a tool to give you an idea how you want the program to flow. As you can see above, I created a plan that would ask the user to input two numbers via two different dialogue boxes. The try/catch block then takes in the input and determines if the input is correct. If the input is correct, then the two numbers are summed up and the while loop is ended since the Boolean variable “done,” is now set to true. However, if the input where not numbers, the catch block would catch it and produce an error that would ask the user to reenter the number. If it wasn’t for this try/catch block, the program would crash and the user may never know the reason why. With that said, the following is the code that I ended up with using Java. I am presently using the NetBeans version 7.0.1: //********************************************************** // //"); if(firstNumber == null || firstNumber.length() == 0) { firstNumber = "0"; } String secondNumber = JOptionPane.showInputDialog("Enter second integer"); if(secondNumber == null || secondNumber.length() == 0) { secondNumber = "0"; } /, now I will explain the code above to give you an idea what is going on. First, I created a new project with the package name of “additional dialogues,” with a public class name of “Additional Dialogues.” I also imported the built Java class “javax.swing.JOptionPlane.” This imported class will allow you create dialogues for input. Within the Additional Dialogue class you have the main function. This function is the entry point to your program. Next I set up a Boolean variable “done,” which will be the variable that controls your while loop. If the user enters the numbers correctly, then this Boolean variable will change to true and the loop will be exited. Next, we have the while loop. As stated above, the wile loop will continuously run until the correct input is entered. Since the variable “done,” is set to false, then the symbol “!” is used to tell the while loop that until this variable is true, continue looping. The next variables will hold the input string that the user enters and will initialize two individual dialogue boxes to get that input. When you use dialogue boxes, the input that the user enters will be of a string value. This also goes for numbers as well. The next section of code is the try/catch block. This is where the decision is made to determine if the user did indeed actually enter integers and not other type of characters. Within the try block I created two new double type variables (double type variables allow you to use decimal variables). These two variables will be use to add the two numbers inputted by the user. You may also noticed that both variables use a parsing statement (i.e. number1 = Double.parseDouble (firstNumber)). This is called type casting. Remember that the type that the user inputs within the dialogue box is a string; however, you cannot use any type of math algorithm with strings, only numbers. 
So, user input has to be what programmers call “type casted,” to make sure that any math on inputted numbers can be done. Finally, within this block, the numbers are added and returned to the user via a dialogue box and the loop variable of done is set to true to end the loop. However, if within the try block, the input from the user was not a number, then the catch block is ran and the user is asked to enter the correct input. So, the program will continue to run until the user finally inputs the correct information. With that said, the following graphics shows and example of what happens when you finally run this program: Figure 1: User enters random character Figure 2: User enters a letter as input. Figure 3: Program produces an error and user is asked to re-input intergers Figure 4: User re-inputs first integer. Figure 5: User inputs second integer. Figure 6: Program returns correct result to user. Final Conclusion In this example, I showed you how to create a simple version of a graphic user interface in which the user can enter input. I also showed how to use type casting in java as well as how to use a try/catch block to catch any errors and in turn to keep the program from crashing. However, this program is still not crash proof. There is one problem with this program that I did not cover as it pertains to user input. I decided to leave it up to you to decide what that problem is. However, I will give you a hint: The cancel button on the dialogue buttons could cause your program to crash. Again, I will leave it up to you to figure out how to compensate for this problem and figure out how to change the program to catch this error. Happy coding!!!! Great Programming Books from Amazon Please enter any comments here. what will happen when u cancel the option it will crash id-10t
https://hubpages.com/technology/Using-TryCatch-Blocks-with-Java-Dialogues
CC-MAIN-2017-26
refinedweb
1,588
68.4
Hello, I'm required to run some C Programs for my C Programming class in Terminal. I have never used Terminal before and can't seem to figure out how to do this. I'm using Xcode as my compiler. I figured out how to get Xcode to make an exe file of my program so that I can use it in Terminal. If I double click on the exe file on my computer, it will run in Terminal, but it does this before I can enter any strings for it to use. Can someone please guide me through the process of compiling and running the program in Terminal? Below is an example of what I'm typing in Terminal with no results. Repeat is the name of my program. John-Smiths-MacBook:desktop smith_j$ ls "/Users/smith_j/Desktop/C Programming Class/repeat 3-27-13 6.49 PM/usr/local/bin/" repeat John-Smiths-MacBook:desktop smith_j$ "repeat" resistance is futile -bash: repeat: command not found This is my program: #include <stdio.h> int main(int argc, char *argv[]) { int count; printf("The command line has %d arguments:\n", argc - 1); for (count = 1; count < argc; count++) printf("%d: %s\n", count, argv[count]); printf("\n"); return 0; }
http://cboard.cprogramming.com/c-programming/155466-help-opening-c-program-mac-terminal.html
CC-MAIN-2014-41
refinedweb
209
71.65
Our Own Multi-Model Database (Part 1) Our Own Multi-Model Database (Part 1) Multi-model databases are easy to get wrong, so how about doing one right? Follow along with this series and merge a KV store, a graph database, and an object DB. Join the DZone community and get the full member experience.Join For Free Compliant Database DevOps: Deliver software faster while keeping your data safe. This new whitepaper guides you through 4 key ways Database DevOps supports your data protection strategy. Read free now I may be remembering this wrong, but I think it was Henry Rollins who once asked, “What came first, the sh*tty multi-model databases or the drugs?” His confusion was over whether: Option A: There were a bunch of developers dicking around with their Mac laptops and they wrote a sh*tty database, put it on GitHub, posted on Hacker News, and then other developers who were on drugs started using it or… Option B: There were a bunch of developers on ketamine and ecstasy and somebody said, "Let's write a sh*tty let's down some of that and see what happens. Alright, most multi-model databases start out from some key-value backing. There are plenty of choices to choose from, but I’m gonna go embedded first (an homage to Neo4j, which started life as embedded only) so I’ll use ChronicleMap. Chronicle Map is built by the Open HFT folks (Roman Leventov, Rob Austin, Peter Lawrey, and others) and is known to be pretty damn fast, so we’ll use that. Now, I’ll need to handle Keys and their values, but those values can be Objects, and those Objects can be connected together. Yes, our multi-model database will be a KV store, object database, and graph database rolled into one sh*tty mess. Let’s call this thing ChronicleGraph and set up our storage. public class ChronicleGraph { private static ChronicleMap<String, Object> nodes; So we have this “nodes” map that accepts any string as a key and stores an object. Now onto our addNode method, we need to accept both empty nodes and nodes with properties, but we’ll force the users to give us a primary key of some sort. This will make getting the node back easy. public boolean addNode (String key) { return addNode(key,""); } public boolean addNode (String key, Object properties) { nodes.put(key, properties); return true; } Let’s test this out… does this pass? @Test public void shouldAddNode() { boolean created = cg.addNode("key"); Assert.assertTrue(created); Assert.assertEquals("", cg.getNode("key")); } Yes… what about with complex properties? @Test public void shouldAddNodeWithObjectProperties() { HashMap<String, Object> address = new HashMap<>(); address.put("Country", "USA"); address.put("Zip", "60601"); address.put("State", "TX"); address.put("City", "Chicago"); address.put("Line1 ", "175 N. Harbor Dr."); HashMap<String, Object> properties = new HashMap<>(); properties.put("name", "max"); properties.put("email", "[email protected]"); properties.put("address", address); boolean created = cg.addNode("complex", properties); Assert.assertTrue(created); Assert.assertEquals(properties, cg.getNode("complex")); } Yes, no problem. OK, so that was the easy part. How do we join these guys together in relationships, though? First, let’s add another map for “relationships,” which will store the relationship properties if any. We want to be able to store relationship properties because, without those, we’d be reduced to the most useless of all NoSQL databases — the dreaded “RDFs.” Ewwwww. 
private static ChronicleMap<String, Object> relationships; private static HashMap<String, ChronicleMap<String, Set<String>>> related = new HashMap<>(); We are also adding a map of maps called “related” which will store our relationships by type and direction. To add a new relationship, we ask the user for the Type and the keys of the two nodes we’re connecting. If the related map doesn’t know about this relationship type, then we add the type before trying to create the relationship. This addRelationshipType will create two ChronicleMaps one for the Incoming and one for the Outgoing direction and our addEdge method is called twice to match each direction. public boolean addRelationship (String type, String from, String to) { if(!related.containsKey(type+"-out")) { addRelationshipType(type, DEFAULT_MAXIMUM_RELATIONSHIPS, DEFAULT_OUTGOING, DEFAULT_INCOMING); } addEdge(related.get(type + "-out"), from, to); addEdge(related.get(type + "-in"), to, from); return true; } This means, of course, that when we delete a node, we have to delete it from the nodes map as well as from both its incoming and outgoing relationships and delete any relationship properties it may have had from both directions. That method was painful, so I’ll spare you the details, but take a look at it here if you want. Now, what good would this database be if it could not traverse? Let’s add a new method to get the nodes linked by an outgoing relationship (and we’ll do the same for incoming) by first getting the node ids in the type+”out” ChronicleMap of “related” and then using “nodes” to get their properties: public Set<Object> getOutgoingRelationshipNodes(String type, String from) { Set<Object> results = new HashSet<>(); for (String key : related.get(type+"-out").get(from) ) { HashMap<String, Object> properties = new HashMap<>(); properties.put("_id", key); properties.put("properties", nodes.get(key)); results.add(properties); } return results; } … and now for a crazy test: @Test public void shouldGetNodeOutgoingRelationshipNodes() { cg.addNode("one", 1); cg.addNode("two", "node two"); HashMap<String, Object> node3props = new HashMap<> (); node3props.put("property1", 3); cg.addNode("three", node3props); cg.addRelationshipType("FRIENDS", 10000, 100, 100); cg.addRelationship("FRIENDS", "one", "two"); cg.addRelationship("FRIENDS", "one", "three"); Set<Object> actual = cg.getOutgoingRelationshipNodes("FRIENDS", "one"); Set<Object> expected = new HashSet<Object>() {{ add( new HashMap<String, Object>() {{ put("_id", "two"); put("properties", "node two"); }}); add( new HashMap<String, Object>() {{ put("_id", "three"); put("properties", node3props); }}); }}; Assert.assertEquals(expected, actual); } Beautiful, isn’t it? The Tramadol is wearing off, so we’ll stop here. Take a look at the source code on GitHub as always. I’ll try to continue this later maybe by adding a web server and an API to go with it. In the meantime, enjoy the sh*tty music and the drugs. Read this new Compliant Database DevOps whitepaper now and see how Database DevOps complements data privacy and protection without sacrificing development efficiency. Download free. Published at DZone with permission of Max De Marzi , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/our-own-multi-model-database-part-1
CC-MAIN-2018-51
refinedweb
1,077
57.06