text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
------------------------------------------------------------------ This file is part of bzip2/libbzip2, a program and library for lossless, block-sorting data compression. bzip2/libbzip2 version 1.0.6 of 6 September 2010 Copyright (C) 1996-2010 Julian Seward <[email protected]> Please read the WARNING, DISCLAIMER and PATENTS sections in the README file. This program is released under the terms of the license contained in the file LICENSE. ------------------------------------------------------------------ 0.9.0 ~~~~~ First version. 0.9.0a ~~~~~~ Removed 'ranlib' from Makefile, since most modern Unix-es don't need it, or even know about it. 0.9.0b ~~~~~~ Fixed a problem with error reporting in bzip2.c. This does not effect the library in any way. Problem is: versions 0.9.0 and 0.9.0a (of the program proper) compress and decompress correctly, but give misleading error messages (internal panics) when an I/O error occurs, instead of reporting the problem correctly. This shouldn't give any data loss (as far as I can see), but is confusing. Made the inline declarations disappear for non-GCC compilers. 0.9.0c ~~~~~~ Fixed some problems in the library pertaining to some boundary cases. This makes the library behave more correctly in those situations. The fixes apply only to features (calls and parameters) not used by bzip2.c, so the non-fixedness of them in previous versions has no effect on reliability of bzip2.c. In bzlib.c: * made zero-length BZ_FLUSH work correctly in bzCompress(). * fixed bzWrite/bzRead to ignore zero-length requests. * fixed bzread to correctly handle read requests after EOF. * wrong parameter order in call to bzDecompressInit in bzBuffToBuffDecompress. Fixed. In compress.c: * changed setting of nGroups in sendMTFValues() so as to do a bit better on small files. This _does_ effect bzip2.c. 0.9.5a ~~~~~~ Major change: add a fallback sorting algorithm (blocksort.c) to give reasonable behaviour even for very repetitive inputs. Nuked --repetitive-best and --repetitive-fast since they are no longer useful. Minor changes: mostly a whole bunch of small changes/ bugfixes in the driver (bzip2.c). Changes pertaining to the user interface are: allow decompression of symlink'd files to stdout decompress/test files even without .bz2 extension give more accurate error messages for I/O errors when compressing/decompressing to stdout, don't catch control-C read flags from BZIP2 and BZIP environment variables decline to break hard links to a file unless forced with -f allow -c flag even with no filenames preserve file ownerships as far as possible make -s -1 give the expected block size (100k) add a flag -q --quiet to suppress nonessential warnings stop decoding flags after --, so files beginning in - can be handled resolved inconsistent naming: bzcat or bz2cat ? bzip2 --help now returns 0 Programming-level changes are: fixed syntax error in GET_LL4 for Borland C++ 5.02 let bzBuffToBuffDecompress return BZ_DATA_ERROR{_MAGIC} fix overshoot of mode-string end in bzopen_or_bzdopen wrapped bzlib.h in #ifdef __cplusplus ... extern "C" { ... } close file handles under all error conditions added minor mods so it compiles with DJGPP out of the box fixed Makefile so it doesn't give problems with BSD make fix uninitialised memory reads in dlltest.c 0.9.5b ~~~~~~ Open stdin/stdout in binary mode for DJGPP. 0.9.5c ~~~~~~ Changed BZ_N_OVERSHOOT to be ... + 2 instead of ... + 1. The + 1 version could cause the sorted order to be wrong in some extremely obscure cases. Also changed setting of quadrant in blocksort.c. 0.9.5d ~~~~~~ The only functional change is to make bzlibVersion() in the library return the correct string. This has no effect whatsoever on the functioning of the bzip2 program or library. Added a couple of casts so the library compiles without warnings at level 3 in MS Visual Studio 6.0. Included a Y2K statement in the file Y2K_INFO. All other changes are minor documentation changes. 1.0 ~~~ Several minor bugfixes and enhancements: * Large file support. The library uses 64-bit counters to count the volume of data passing through it. bzip2.c is now compiled with -D_FILE_OFFSET_BITS=64 to get large file support from the C library. -v correctly prints out file sizes greater than 4 gigabytes. All these changes have been made without assuming a 64-bit platform or a C compiler which supports 64-bit ints, so, except for the C library aspect, they are fully portable. * Decompression robustness. The library/program should be robust to any corruption of compressed data, detecting and handling _all_ corruption, instead of merely relying on the CRCs. What this means is that the program should never crash, given corrupted data, and the library should always return BZ_DATA_ERROR. * Fixed an obscure race-condition bug only ever observed on Solaris, in which, if you were very unlucky and issued control-C at exactly the wrong time, both input and output files would be deleted. * Don't run out of file handles on test/decompression when large numbers of files have invalid magic numbers. * Avoid library namespace pollution. Prefix all exported symbols with BZ2_. * Minor sorting enhancements from my DCC2000 paper. * Advance the version number to 1.0, so as to counteract the (false-in-this-case) impression some people have that programs with version numbers less than 1.0 are in some way, experimental, pre-release versions. * Create an initial Makefile-libbz2_so to build a shared library. Yes, I know I should really use libtool et al ... * Make the program exit with 2 instead of 0 when decompression fails due to a bad magic number (ie, an invalid bzip2 header). Also exit with 1 (as the manual claims :-) whenever a diagnostic message would have been printed AND the corresponding operation is aborted, for example bzip2: Output file xx already exists. When a diagnostic message is printed but the operation is not aborted, for example bzip2: Can't guess original name for wurble -- using wurble.out then the exit value 0 is returned, unless some other problem is also detected. I think it corresponds more closely to what the manual claims now.. 1.0.2 ~~~~~ A bug fix release, addressing various minor issues which have appeared in the 18 or so months since 1.0.1 was released. Most of the fixes are to do with file-handling or documentation bugs. To the best of my knowledge, there have been no data-loss-causing bugs reported in the compression/decompression engine of 1.0.0 or 1.0.1.. Here are the changes in 1.0.2. Bug-reporters and/or patch-senders in parentheses. * Fix an infinite segfault loop in 1.0.1 when a directory is encountered in -f (force) mode. (Trond Eivind Glomsrod, Nicholas Nethercote, Volker Schmidt) * Avoid double fclose() of output file on certain I/O error paths. (Solar Designer) * Don't fail with internal error 1007 when fed a long stream (> 48MB) of byte 251. Also print useful message suggesting that 1007s may be caused by bad memory. (noticed by Juan Pedro Vallejo, fixed by me) * Fix uninitialised variable silly bug in demo prog dlltest.c. (Jorj Bauer) * Remove 512-MB limitation on recovered file size for bzip2recover on selected platforms which support 64-bit ints. At the moment all GCC supported platforms, and Win32. (me, Alson van der Meulen) * Hard-code header byte values, to give correct operation on platforms using EBCDIC as their native character set (IBM's OS/390). (Leland Lucius) * Copy file access times correctly. (Marty Leisner) * Add distclean and check targets to Makefile. (Michael Carmack) * Parameterise use of ar and ranlib in Makefile. Also add $(LDFLAGS). (Rich Ireland, Bo Thorsen) * Pass -p (create parent dirs as needed) to mkdir during make install. (Jeremy Fusco) * Dereference symlinks when copying file permissions in -f mode. (Volker Schmidt) * Majorly simplify implementation of uInt64_qrm10. (Bo Lindbergh) * Check the input file still exists before deleting the output one, when aborting in cleanUpAndFail(). (Joerg Prante, Robert Linden, Matthias Krings) Also a bunch of patches courtesy of Philippe Troin, the Debian maintainer of bzip2: * Wrapper scripts (with manpages): bzdiff, bzgrep, bzmore. * Spelling changes and minor enhancements in bzip2.1. * Avoid race condition between creating the output file and setting its interim permissions safely, by using fopen_output_safely(). No changes to bzip2recover since there is no issue with file permissions there. *. 1.0.3 (15 Feb 05) ~~~~~~~~~~~~~~~~~ Fixes. Fixes CAN-2005-0758 to the extent that applies to bzgrep. * Use 'mktemp' rather than 'tempfile' in bzdiff. * Tighten up a couple of assertions in blocksort.c following automated analysis. * Fix minor doc/comment bugs. 1.0.5 (10 Dec 07) ~~~~~~~~~~~~~~~~~ Security fix only. Fixes CERT-FI 20469 as it applies to bzip2. 1.0.6 (6 Sept 10) ~~~~~~~~~~~~~~~~~ * Security fix for CVE-2010-0405. This was reported by Mikolaj Izdebski. * Make the documentation build on Ubuntu 10.04
http://opensource.apple.com/source/bzip2/bzip2-27/bzip2/CHANGES
CC-MAIN-2016-07
refinedweb
1,455
58.58
Edwin Goei wrote: > > Christophe Jolif wrote: > > > > I know that crimson is deprecated but as it is used in Sun JAXP > > distribution and as Xerces 2 is still not publically available (as far > > as I know), I hope that bug corrections or little enhancements are still > > accepted. If I well read Crimson pages, this list seems to be the right > > place to share my comments... > > Yes, maintenance on crimson continues and bug fixes are very much > appreciated. Thanks! > > > > > So, here is my problem, using the crimson.jar from the JAXP > > distribution, running the following test example (just done to > > demonstrate my problem): > > > > import org.w3c.dom.*; > > > > import org.apache.crimson.tree.*; > > > > public class BugA > > { > > // CJO 03/2001 > > public static void main(String[] arg) { > > XmlDocument doc = new XmlDocument(); > > Element elt = doc.createElementNS(null, "foo"); > > doc.appendChild(elt); > > elt.setAttributeNS(null, "bar", "bar_value"); > > elt.getAttributeNS(null, "bar"); > > } > > } > > Thanks for the short test case. I've checked in a fix for this. Thanks. > BTW, the preferred way to create a DOM document is to use the JAXP 1.1 > APIs and not internal parser APIs like XmlDocument(). Here is an > example: [...] > Substitute appropriate values of the root node qname "pref:root" and > uri. This will allow you to move your app easily to another parser such > as Xerces 2. Yes, I know, I directly used XmlDocument just to simplify the test case. Thanks a lot. -- Christophe --------------------------------------------------------------------- In case of troubles, e-mail: [email protected] To unsubscribe, e-mail: [email protected] For additional commands, e-mail: [email protected]
http://mail-archives.apache.org/mod_mbox/xml-general/200103.mbox/%[email protected]%3E
CC-MAIN-2014-23
refinedweb
261
59.9
CodePlexProject Hosting for Open Source Software Hi, i'm an italian student (Messina University), i try to parse a prj file of a shapefile to create a projection system. I have used the sharpmap wkt parse classes, but for every wkt string i've used, i obtain the same error: "expecting "]",but got "," ". I have read that there are a few bugs in these classes so i'm looking for a patch for these. What's the correct repo to get a patched version? I'm working with sharpmapcf v0.9... Hi ladrome1, afaik the only patch was FObermaier's for v2 and patching 0.9 isnt as easy as it should be... The referenced ProjNet () in the 0.9 version of sharpmap is old and many changes have been made to the ProjNet project since the referenced dll was built - primarily a recent namespace change and previously an added dependency on GeoAPI. So you would need to get the source of an old version of ProjNet - I would start at revision 12783 and go backwards until you find one that works. Once you have found it (you should probably reference the project rather than the dll you will need to debug both SharpMapCF and ProjNet. If the issue is the same one as Fobermaier patched: on encountering a comma you need to read further - however I believe the tokenizer is different between projnet 1.x and v2 so I am not sure how this translates into method calls. hth jd Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://sharpmap.codeplex.com/discussions/59968
CC-MAIN-2017-51
refinedweb
287
80.31
In the previous tutorials, we discuss C# if statement, C# if-else statement ,if ..else if ..else statement and nested if and else if statement . In this tutorial, we are going to learn the switch statement in C# programming. The C# language supports the switch statement or switch case statement. The switch statement is used to avoid the block of if..else.. elseif. Switch statement bases on the case. A statement can be executed on multiples condition using the C# switch statement. switch (expression) { case 1: code to be executed if the expression is matched break; case 2: code to be executed if the expression is matched break; /*We can create many case statements*/ default: code to be executed if the. 1. The expression used in a switch statement must have an integral or enumerated type. 2. You can create many case statements within a switch. 3. When the case is matched to variable then the case code executes until the break. 4. A break works like termination of the block. 5. In the if-else, the else part also works as a default part. In the C# switch statement, the default part works when any case does not match with switch expression. Let's create an example of a C# switch case statement. Output-Output- using System; namespace SwitchStatement{ public class Program { public static void Main(string[] args) { /* local variable definition */ int day=25; switch (day) { case 1: Console.WriteLine("Festival : Holi :Holiday "); break; case 2: case 3: Console.WriteLine("Festival : Diwali :Holiday "); break; case 25: Console.WriteLine("Festival : Christmas day:Holiday"); break; case 28: Console.WriteLine("Festival : Ram Navami :Holiday"); break; default: Console.WriteLine("No festival or check date"); break; } Console.ReadLine(); } } } Festival : Christmas day:Holiday In the above C# example, we created logic for festivals using C# switch case statement. You can modify according to your need and try this example of C# console. Let's create another example to find out the day name using C# programming with the switch case statement. using System; using System; namespace SwitchStatement{ public class Program { public static void Main(string[] args) { /* local variable definition */ int day=2; switch (day) { case 1: Console.WriteLine("The First day of the week is Mon."); break; case 2: Console.WriteLine("The second day of the week is Tue."); break; case 3: Console.WriteLine("The third day of the week is wed. "); break; case 4: Console.WriteLine("The fourth day of the week is Thurs."); break; case 5: Console.WriteLine("The fifth day of the week is Fri."); break; case 6: Console.WriteLine("The sixth day of the week is Sat."); break; case 7: Console.WriteLine("The seventh day of the week is Sun."); break; default: Console.WriteLine("Invalid day"); break; } Console.WriteLine("Day of the week ={0}",day); Console.ReadLine(); } } } Output - The second day of the week is Tue. Day of the week =2 The second day of the week is Tue. Day of week =2 In the above example, we created a logic to get week name using the C# switch statement.
https://technosmarter.com/csharp/switch-statement
CC-MAIN-2020-05
refinedweb
509
69.18
My computer setup Edit: my updated dot files can be found on github. As a developer it is really important to feel comfortable with his computer setup as we use I use it all day long. I am working on a Mac and for compliance questions I can’t change the OS.This is not really a problem as I am using very few programs (browser, email, terminal), and all of them are in full screen, so I don’t see much the underlying system. My philosophy is to use as much as possible open-source softwares. I prefer spend more time configuring, learning and patching an open-source application than mastering with more easy a non-free one. In reality many free software a way better than commercial one! Web browser I am using Firefox but with some add-ons: - uBlock Origin for content filtering (removes adds but also trackers) - Self Destructing Cookie to remove my old cookies when closing a tab - RefControl to controls sent HTTP referer (avoid leaking some personal informations when following links) - HTTPS Everywhere to automatically switch to HTTPS version if available My default search engine is DuckDuckGo which perform great most of the time and for more specific search queries I use StartPage (via !startpage bang). Of course I don’t have the flash plugin installed on my computer and I don’t see much limitation about that. Maybe one day I will spend some time configuring mutt, but for now Thunderbird does the job very nicely. By moment it takes a huge amount of memory (> 400MB) but nowadays with our machines with 4 or 8GB of RAM you will not notice it. To improve a bit email reading I use Conversations plugin that render thread as in GMail (at least as it was in the beta, I have not used it recently). Last but not least I use Enigmail to at least sign my emails and encrypt it when possible. Terminal emulator I use iTerm2, I could have used Apple Terminal but color support is very poor. iTerm2 is not much better but it allow me to don’t spend too much time on this very basic feature. Even if iTerm have a good tab and pane capabilities, I use tmux for this which is very flexible, feature-full, lightweight and works on different OS if needed. My tmux config is minimalist, I have just defined some shortcuts to navigate between windows/pane and resize it. # utf8 is on set -g utf8 on set -g status-utf8 on # upgrade $TERM set -g default-terminal "screen-256color" set -sg escape-time 1 set -g base-index 1 set -g pane-base-index 1 set -g mode-keys emacs set -g history-limit 10000 # Make copy/paste works on OSX set-option -g default-command "reattach-to-user-namespace -l zsh" bind C-c copy-mode # Mouse works as expected set -g mouse on bind -n WheelUpPane if-shell -F -t = "#{mouse_any_flag}" "send-keys -M" "if -Ft= '#{pane_in_mode}' 'send-keys -M' 'copy-mode -e'" # Easy-to-remember split pane commands bind | split-window -h -c '#{pane_current_path}' bind - split-window -v -c '#{pane_current_path}' unbind '"' unbind % # URXVT tab like window switching (-n: no prior escape seq) bind -n S-down new-window -c '#{pane_current_path}' bind -n S-left prev bind -n S-right next bind -n C-left swap-window -t -1 bind -n C-right swap-window -t +1 # Resize panes bind -n M-left resize-pane -L 5 bind -n M-down resize-pane -D 5 bind -n M-up resize-pane -U 5 bind -n M-right resize-pane -R 5 # Reload tmux config bind r source-file ~/.tmux.conf # Status bar set -g status-bg black set -g status-fg white set -g window-status-current-bg white set -g window-status-current-fg black set -g window-status-current-attr bold set -g status-interval 60 set -g status-left '' set -g status-right '%H:%M' On Mac OS X getting clipboard working require to use a small program to fix some Apple issue (hence the specific option in the configuration). Something that surprise a lot my colleagues is that when I want to copy something I just select it in my terminal, no need to Ctrl+C as I don’t use the mouse for another thing! Maybe one day I will spent some time looking at tmux-plugins to extend some functionalities. Text editor As I have a nicely configured terminal I can take advantage of it, that’s why I use Vim. In fact Vim is an awesome editor since you have configured it to fit your need. I have tried other editors like SublimeText (not open-source!) or IDEs like eclipse but since you master Vim you can be far more productive and you are not lost when working on a remote server. As a base Vim config I use spf13 that set reasonable settings and comes with all needed plugins. I have however changed few settings in my ~/.vimrc.local: " -- Spell check set spellsuggest=best,10 nnoremap <F6> :set spell!<CR> inoremap <F6> <Esc>:set spell!<CR>i " --- Folding "UnBundle 'AutoClose' "set nofoldenable set foldlevel=9 " --- Colors let g:solarized_termcolors=16 color solarized " --- Line numbers function! ToggleNumbering() if &relativenumber set number set norelativenumber else set relativenumber set nonumber endif endfunc noremap <leader>n :call ToggleNumbering()<cr> " -- Tabs navigation nnoremap <tab> :tabnext<CR> nnoremap <S-tab> :tabprevious<CR> " -- Search current selection vnoremap // y/<C-R>"<CR> " -- Python-mode related let g:pymode_lint_checkers = ['pyflakes', 'pep8'] let g:pymode_trim_whitespaces = 3 let g:pymode_options_max_line_length = 150 let g:pymode_lint_on_fly = 1 let g:pymode_rope = 1 let g:pymode_folding = 0 let g:pymode_lint_ignore = "E502" " -- Tagbar related nnoremap <silent> <leader>t :Tagba " -- Copy/Paste related "set pastetoggle=<F12> nnoremap <Leader>yp :let @*=expand("%:p")<cr>:echo expand("%:p")<cr> nnoremap <Leader>yf :let @*=expand("%")<cr>:echo expand("%")<cr> and in ~/.vimrc.before.local: let g:spf13_bundle_groups=['general', 'writing', 'neocomplcache', 'programming', 'python', 'javascript', 'html', 'misc',] " let g:autoclose_on = 0" You may need to install ctags and flake8 to get all features. Other tools As I am developing APIs, I often need to make requests, to do so I use httpie, a curl/wget replacement with human friendly command-line options. Beware, httpie is based on python-requests which means that HTTP requests are done using your Python distribution. Until Python 2.7.9, only TLSv1 was supported, if the cipher suite of your server is a bit strict you may have some trouble. If you need a Python interpreter from time to time to test some code snippets, iPython can help you with nice completion, help and debug capabilities. I also use ZSH with Oh My ZSH as my default shell interpreter with the default configuration but with the philips theme. To listen music from my terminal I use cmus which does the job great but may freeze from time to time.
http://www.alexandrejoseph.com/blog/2016-01-08-my-computer-setup.html
CC-MAIN-2017-47
refinedweb
1,156
53.04
Visual Studio Integration SDK Roadmap Creating a VSPackage is a powerful way to extend Visual Studio. In fact, Visual Studio includes many components that are VSPackages, for example, the editor, the debugger, and predefined toolbars. Therefore, developing and deploying a VSPackage is like producing the core functionality of Visual Studio 2008 itself. You can create VSPackages by using Visual C#, Visual Basic, or Visual C++. You can use VSPackages to extend the Visual Studio integrated development environment (IDE) by creating project types, menu commands, tool windows, document editors, and much more. The Visual Studio Integration SDK contains tools, samples, wizards, designers, and documentation that are designed to help you create VSPackages. The following diagram shows a high-level view of the Visual Studio architecture. At the base of the diagram is the IDE itself, which is what you are extending. The IDE can be controlled in two ways, by using automation or by using the Package API. Typically, automation is programmed by using add-ins or macros. For more information, see Automation and Extensibility for Visual Studio. VSPackage programmers use the Package API, which is the same API that Microsoft uses to develop Visual Studio. If you use C++, you can call the Package API by using a template-based library named the Visual Studio Library (VSL). The Package API can also be called from managed code, either by calling the .NET Visual Studio Interop assemblies, or by using the Managed Package Framework (MPF). For more information, see The MPF is a set of helper classes that call the .NET Visual Studio Interop assemblies. Most of the classes and interfaces in the MPF can be found in the Microsoft.VisualStudio.Package and Microsoft.VisualStudio.Shell namespaces. The MPF provides the base class, Package. The Package class provides a managed implementation of several useful interfaces. Without writing much code, you can create a VSPackage by deriving your VSPackage class from the Package class and attaching various attributes to it. After you build a VSPackage assembly, you use the regpkg.exe utility to register it. The regpkg.exe utility examines the VSPackage assembly and registers the VSPackage in the system registry by using the information it finds in the attributes. For more information, see To test a VSPackage extension, you run Visual Studio under a separate system registry hive named the experimental hive and register the VSPackage there. Doing this lets you test your extension without worrying about whether your "real" installation of Visual Studio will be damaged. The experimental hive is created when you install the Visual Studio SDK. During installation, the Visual Studio registration entries are copied into the experimental hive. You can reset the experimental hive at any time by running Reset the Microsoft Visual Studio Experimental Hive in the Microsoft Visual Studio SDK/Tools folder on the Start menu. For more information, see Experimental Build. A service is a contract between two VSPackages. One VSPackage provides a specific set of interfaces for another VSPackage to consume. Use a service when two VSPackages must exchange information or when one VSPackage must access the functionality of another. Many of the VSPackages that are included in Visual Studio are interconnected by services. Some services are included in Visual Studio. For example, included services enable tool windows to be shown or hidden dynamically, enable access to Help, status bar, or UI events, and enable access to a code editor. You can take advantage of the included services when you deploy your own VSPackages. For more information, see When you create user interface (UI) elements such as buttons, the buttons are actually a visual representation of a command. You may have seen this design before. For example, when you customize a toolbar in a product such as Microsoft Word, you add buttons to a toolbar by dragging a command to the toolbar. In Visual Studio, a user accesses most functionality by using commands. For example, when you click Build Solution on the Build menu, you are using a menu "button" to execute the Build Solution command. You can see this connection when you right-click the menu bar in Visual Studio, and then click Customize. When the Customize dialog box opens, click the Build menu to reveal the names of the commands that are currently on the menu. However, clicking the command names does not run the commands. Right-click the Build Solution command name to reveal the customization form, as shown in the following picture. By using the customization form, you can change the command name, but you cannot change what the command does. The command name "button" is just a visual representation of the command, as opposed to a button that has a command attached to it. If you want a different command, you must drag one from the Customize dialog box to the menu. Doing this adds a command name "button" that represents the new command. Commands can be issued in various ways in the IDE. For example, if you right-click the name of a solution in Solution Explorer, the Build Solution command is available on the shortcut menu. If you show the Build toolbar, the command is available as a graphical button. Visual Studio even provides a tool window in which you can issue commands at a command prompt. To show this window, on the View menu, point to Other Windows and then click Command Window. In the Command Window, commands are grouped by category and can be entered interactively by typing the category, a period, and then the command. For example, to run the Build Solution command, you type Build.BuildSolution and then press ENTER. Notice that when you type the period after "Build," an IntelliSense window opens and shows you the commands that are available in the Build category. The categories that you can use in the Command Window are the same as the ones in the Customize dialog box, except that they may not contain spaces. For example, the Class Diagram commands are available by typing ClassDiagram. When you extend Visual Studio, you can create commands and add them to the IDE. You can specify how these commands will appear in the IDE, for example, on a menu or toolbar. Typically, a custom command would appear on the Tools menu, and a command for displaying a tool window would appear on the Other Windows submenu of the View menu. For more information, see How VSPackages Add User Interface Elements to the IDE. Menus and toolbars provide a graphical way for VSPackage users to access its commands. Commands are functions in VSPackages that accomplish tasks, such as printing a document, refreshing a view, or creating a new file. Menus are rows or columns of commands that typically are displayed as individual text items at the top of the IDE or a tool window. Submenus are secondary menus that appear when a user clicks commands that include a small arrow. Shortcut menus appear when a user right-clicks certain UI elements. Some common menu names are File, Edit, View, and Window. For more information, see Common Menu Tasks. respond in a hierarchical manner, starting with the innermost command context, based on the local selection, and proceeding to the outermost context, based on the global selection. Commands added to the main menu are immediately available for scripting. For more information, see Tool Windows How to: Create a Tool Window (C#) Video: Iterate through the Items in the Error List Document Windows You use an editor to create and modify the code and other text in your applications. The Visual Studio editor is a VSPackage. The Visual Studio SDK editor model lets you include the Visual Studio editor in your own VSPackages, and it also lets you create your own custom editors or call external editors. Not only is Visual Studio editor functionality used in the code entry window, it is also used in other text-entry features such as the Output window and the Command Window. Editors can also be form-based, as shown by the Visual Basic forms designer. The kinds of editors that you can use in Visual Studio 2008 are as follows: The Standard File-based Editor, which provides basic editor functionality. The Visual Studio 2008 Core Editor, which contains the functionality of the Standard editor, plus features such as the ability to take advantage of specific Visual Studio 2008 project types. A custom editor that you have designed for a custom project type. An external editor such as Notepad or Microsoft WordPad. For more information, see In Visual Studio 2008, projects are the containers that developers use to organize the source code files and other resources that appear in Solution Explorer. Typically, projects are files that store references to source code files and resources such as bitmap files. Projects let you organize, build, debug, and deploy source code, references to Web services and databases, and other resources. VSPackages can extend the Visual Studio 2008 project system by providing project types, project subtypes, If you want the Visual Studio editor to support new programming keywords or even a new programming language, create a language service or modify an existing one. Each language service may implement certain editor features fully, partially, or not at all. Language service classes provide full support for frequently used features and partial support for some other features. You can use managed language services (with Visual C#) or unmanaged language services (with Visual C++). At the heart of a language service are a parser and a scanner. A parser converts a programming language source file into discrete elements that are known as tokens. When you create a language service, you write configuration files for both the parser and the scanner so that Visual Studio can understand the tokens and grammar of the new language service. Depending on how it is configured, the language service can provide syntax highlighting, brace matching, IntelliSense support, and other features in the editor. For more information, see Implementing a Language Service By Using the Managed Package Framework. Walkthrough: Creating a Language Service (Managed Package Framework) Developing a Language Service Video: Create a Language Service Video: Add Intellisense to a Language Service Video: Add Code Snippet Functionality to a Language Service If you create a custom tool that would benefit from having its own version of the Visual Studio IDE, you can build on the Visual Studio Shell. The Visual Studio Shell provides a hierarchal project system, integration with editors and designers, source code control, and many other features that can support and enhance your custom tool. In addition to acquiring the look of Visual Studio, your custom tools built on the Visual Studio Shell have access to the features of the Visual Studio IDE. The Visual Studio 2008 Shell has two modes, integrated mode and isolated mode. Each mode addresses a different market, as follows: The Visual Studio 2008 Shell (integrated mode) integrates into Visual Studio on an end-user computer and enables your custom tools to have the look of Visual Studio. By using the integrated Shell, you can provide custom tools that your customers can use together with Visual Studio. Integrated mode is optimized to host language and software tool products. The Visual Studio 2008 Shell (isolated mode) lets you create unique and self-contained custom tools that run side-by-side with other versions of Visual Studio that are installed on a computer. The isolated Shell is optimized for specialized tools that have full access to Visual Studio services but also have a custom appearance and branding. For more information, see Visual Studio Shell-Based Applications Visual Studio Shell (Integrated Mode) Setup and User Guide Visual Studio Shell (Isolated Mode) MSDN Webcast: Build Tools for Any Platform with the VS 2008 Shell Video: Customize the Visual Studio Isolated Shell (Part 1) Video: Create a VSPackage for a Visual Studio Isolated Shell (Part 2) Video: Create a Setup Project for Visual Studio Isolated Shell (Part 3) The Visual Studio SDK includes tutorials that teach how to create VSPackages, give them functionality, integrate them into Visual Studio, and distribute them to other users. For more information, see Tutorials for Customizing Visual Studio By Using VSPackages
https://msdn.microsoft.com/en-us/library/cc138569(v=VS.90).aspx
CC-MAIN-2018-05
refinedweb
2,029
52.49
. Great post! I think blocks are also difficult to understand. "Don't test because a segment of the Ruby community forces you to, do it because it is so damn easy there is no reason not do." As a ruby newbie who's just started playing with test/unit, this seems to be a perfectly valid statement. Thanks for the feedback and the challenge. Automate things. Write test and have them executed in the background everytime you save a file. What might seem like more work in the beginning, turns out to be less work in the end. Felipe: Dunno if this helps or not, but I didn't really grok blocks until I coded the equivalent in prototype.js (which mimics much of Ruby in Javascript). Consider the "doubler" mapping in Ruby: [1, 2, 3].map{ |x| x * 2 } The equivalent in prototype.js: [1, 2, 3].map(function(x) { return x * 2 }) Looking at it this way, I realized that Ruby blocks are just anonymous functions with less verbosity. There are some semantic differences between blocks and methods in Ruby, but 90% of the time, you can think of them as equivalent. Of course, you can do some interesting stuff with the other 10%, but that's something to look forward to :) I love the comment you made above about how close Ruby allows you to get to the domain of the problem and clearly express the intent. The newbie challenge was a very interesting problem and one that I actually spent three hours on just trying to do the math than actually coding. Once I finally had the math figured out, the Ruby came almost effortlessly. I spent about 15 minutes coding and testing. But the remarkable thing is that the solution decidedly took the exact shape of the hand-written worksheet where I solved the problem mathematically. Powerful stuff! I can't get the tests to run without placing them in a class which is a subclass of Test::Unit::TestCase. Is it possible to run test/unit tests without doing so? Could you provide a gist? Thanks! Also, test_simple_times is missing a paren. nstielau - Thanks! Two copy & paste errors in three lines of code is not a good ratio :-/ You are correct, the test cases do need to be a sub-class of Test::Unit::TestCase (corrected above). Not sure how I missed that, but thanks for pointing it out! In the symbol-to-proc blog posting you mentioned, there is a comment that the symbol-to-proc method was over 3x slower than the {|i| i.to_time} version. I don't know if this is still the case (2009 vs 2006), but its worth considering. Isn't symbol-to-proc essentially syntactic sugar, or is there a more substantial / less obvious benefit that might outweigh the slower performance? Pete - There is no overhead for symbol-to-proc... at least not in Ruby 1.8.7+. The compiler remembers the initial symbol-to-proc conversion and uses it in subsequent iterations. So you can have clearer intent without worry of performance penalty. Hi Chris, Great document - a must read for all newbies. After reading the inject, chaining and symbol-to-proc -- I do consider myself a newbie again ! :D There is a subtle issue with your time summing up: in case the array is empty you get nil and not 0 (omitting the map here): irb(main):006:0> [].inject(&:+) => nil You rather want this form: irb(main):007:0> [].inject(0, &:+) => 0 I don't believe the remark about adding a ! to average_time_of_day is quite correct. To begin with, that particular method shouldn't have any side effects anyway. If you really want to manipulate an object passed into a method, call #dup on it first. The second thing is that adding a ! to methods that change data only really applies to situations where there is also a method without ! that works on/returns a copy. There are exceptions, of course, supposedly for "dangerous" situations. But suffixing ! to every method that changes data would be a bit excessive. Excellent advice about testing that method. Using TDD on functions makes it so easy, I always feel it's practically cheating. Mark -- I agree. I would not suggest adding a bang to average_time_of_day. I think it more appropriate to leave it side-effect free and not use any bang methods inside it. The rest of your point is well taken. In the interest of correcting mistakes... In Ruby parameters are NOT passed by reference. They are passed by value. You are confusing the concepts of immutability and pass by reference. In pass by reference, the formal parameter becomes an *alias* for the actual argument in the method body and so any changes made inside the method will be reflected afterwards. Simple proof of Ruby's not pass-by-reference-ness: irb(main):018:0> def changeme(a) irb(main):019:1> a = 10 irb(main):020:1> end => nil irb(main):021:0> b = 20 => 20 irb(main):022:0> changeme(b) => 10 irb(main):023:0> b => 20 Under pass-by-reference semantics, b's value would be 10. But in Ruby it clearly isn't, it's still 20. However, Ruby *is* an impure or mutable language (unlike Haskell, Erlang, etc..) and so does have side effects. This means that if a method body is given a copy of a reference to an object, and makes a change to that object, that too will be reflected after the method call. It is for these types of methods that Ruby uses the bang operator. The reasons this is confusing to many people is that in Ruby everything is an object. But it's still pass by value, you're just passing around references to objects. Subtle, but different. (In fact, it's the exact same function passing semantics as Java minus Java's primitive types). El Popa - Agreed. You are completely right. I've updated the post accordingly. Thanks! hey Chris, I was wondering about performance when you chain things like x.map(blah).inject(blah). I know it is really practical and cool to use and nice to read but aren't we adding up lots of avoidable iterations(comparing with using a simple while loop) slowing down the code? ....................................................... Nice post... Now I wish I had done the exercise before reading the post to see how close we were :) # A method that generates a side effect: def changeme!(a) puts a.object_id a[3] = "yep" end b = {} b.object_id # ==>12860680 changeme!(b) 12860680 b # ==>{3=>"yep"} # A method with no side effect: def changeme(a) puts a.object_id a = "rudolph" puts a.object_id end ==>nil b = {1=>2} ==>{1=>2} b.object_id ==>12535460 changeme(b) 12535460 12459832 ==>nil b ==>{1=>2} b.object_id ==>12535460
https://japhr.blogspot.com/2009/10/newbie-feedback.html
CC-MAIN-2016-44
refinedweb
1,141
66.44
2003 Filter by week: 1 2 3 4 5 store complex data as file Posted by hosey at 10/31/2003 2:04:29 PM is there a way to send a object from flash through remoting ex: object.dude="me" object.friends=["1","2"] and have it save the object as a file to be gathered later, or does it have to hit a database? ... more >> amf file size comparison Posted by hosey at 10/31/2003 9:06:39 AM I am using xml in flash to send/receive data. I have not used remoting. What is the benefit on the network of flash remoting. My xml get to be 500kb, what might an equivalent AMF send be? ... more >> Flash Components 2004 Remoting - Download Posted by Jason Huff at 10/31/2003 6:32:03 AM It seems a lot of people are having this question. Download the components: Jason ... more >> DataGlue works, but scroll bar doesnt. Posted by Jason Huff at 10/31/2003 6:02:32 AM #include "DataGlue.as" #include "NetDebug.as" #include "NetServices.as" NetServices.setDefaultGatewayUrl(" teway") Streamer_conn = NetServices.createGatewayConnection(); Songs = Streamer_conn.getService("testing.streamtest.streamer",this); function lo... more >> Flash remoting components in Flash 2004 Posted by Bloke at 10/31/2003 3:44:30 AM I am so confused with all the components and what I had before i intalled Flash MX 2004. Is Flash remoting components incorporated in Flash MX 2004 now? Or do I need to install Coldfusion to re-install it to Flash MX 2004? -Bloke ... more >> can't send an array to cf component Posted by edpsych at 10/31/2003 3:05:36 AM Ok, I'm having some issues trying to send an array from Flash to a Coldfusion Component. I can send variables without a hitch, but when I send an array, CF throws an exception because my array parameter is null. This seems like it should be a pretty elementary problem, but I have been unable to fin... more >> Remoting not working with Localhost... Posted by gongati2k at 10/29/2003 12:09:51 PM Hi, My application is just checking for user validity, If I give a local path like this "C:\Inetpub\wwwroot\csa_int\login.html" it is working going to database and validating the user. but if i run from localhost i.e "" its not working. can anybody help me p... more >> Flash Remoting... Posted by D-Loco at 10/29/2003 10:05:57 AM i have just purchased the new studio mx 2004 pro.... and i want to expand on flash... having got the coldfusion server with the studio i am now wanting to learn about remoting.... If i get the trial version of remoting, after the trial has run out does the program not work anymore or is it like c... more >> Don't see what you're looking for? Search DevelopmentNow.com. Flash Remoting, ColdFusion MX and Apache Web Server 1.3 Posted by Flashm NO[at]SPAM n at 10/29/2003 9:26:40 AM Hi I'm trying to get remoting to work with the following set up: ColdFusion MX Apache Web Server The default path for the htdocs on the Apache server is : C:\Program FIles\Apache Group\Apache\htdocs Say I create a new folder called remoting within htdocs and place my flash swf and c... more >> How determine when data is in? Posted by feiloiram at 10/28/2003 3:47:33 PM Hi I'm using flash2004MX. I can succefully get data; being a query, from a coldfusion component via flash remote into flash. I need to know how can check when all data is in. Even more I need to halt my movie until all dat is in. Anybody? ... more >> DataGlue.bindFormatStrings doesn't work in 2004, Why? Posted by feiloiram at 10/28/2003 10:29:59 AM Hi, following is done in Flash 2004 MX. function ProvideBrandNames_Result (result) { //trace(result.getLength()); //var titles = result.getColumnNames(); //trace(titles[0] + ", " + titles[1]); //var record = result.getItemAt(0); //trace(record.BrandName); //combobox.addItem(record.... more >> NetConnection.Call.Failed message in Flash MX 2004 Posted by termax at 10/27/2003 7:37:09 PM Hi, I began getting "NetConnection.Call.Failed" error in Flash Remoting NetConnection Debugger when I switched to Flash MX 2004. It worked fine in previous Flash MX. The possible reason for this problem I found on macromedia's site (see below) does not describe the situation correctly - becaus... more >> coldfusion developer looking for a new interface Posted by TWesson at 10/27/2003 1:17:48 AM Let me preface this by saying first that I have very little actual experience with Flash. I know what it is in its most basic form and it appears that if I combine Flash MX with ColdFusion MX and the remoting objects that I'll be able to completely redesign my intranet application into a fast, flui... more >> php/flash/loadvars = confusion Posted by etg7 at 10/24/2003 8:08:57 PM hello all... i have a php page with multiple, small swfs embedded...one of them i use for a menu...the other i use to show text information... my question is this... i'm using a loadvars object to send info from the menu-swf to a php page (that is not loaded) that does nothing but echo the dat... more >> How do i???? Posted by Panther893 at 10/24/2003 5:58:01 AM Im sort of new to flash. well i know the basics like making animations and static websites but i got a few questions about some other stuff. some of that is how do i do the following (which is in cfm) in flash. <cfquery name="users" datasource"tcl1" blockquote="100"> SELECT from users WHERE use... more >> form details send to Email ??? Posted by cosmits at 10/23/2003 10:08:17 AM Hi all I bet this one is realy simple couldn't find it on the remoting help though.... mabey it's just me... ok - I created my first form in flash using flash remoting with serverside actionscript to run on a CF platform it actually works !!! God shave the Queen !!! I'm so happy !!! I ... more >> Will remoting run on a standalone projector? Posted by Jorge Mesa at 10/23/2003 8:38:36 AM Will remoting run on a standalone projector with connection to a server? Thanks ... more >> Do all functions need to be at the root level? Posted by ooba at 10/22/2003 6:31:26 PM I have the following code that when on the root level, it works just fine. function cfcNewRegistration(form){ trace("submit function call active"); functionCFCCall.cfcDelayLoop(); } function cfcDelayLoop_Result(result){ trace('results = ' + result); } But, when I put it into the MC... more >> Binding Null Fields from a Remoting Recordset Posted by Simon Sadler at 10/22/2003 3:36:04 PM I'm setting a DataSet's DataProvider with a Remoting recordset containing fields with a null value. The type seems to be correctly converted to undefined but binding from the DataSet to a TextInput shows the string "undefined" in the field. Surely no one would want the UI to show such a value; the i... more >> OLE objects pulled with remoting (pics,shockwave,video etc') Posted by cosmits at 10/22/2003 2:32:23 PM Hi all Please excuse me for my ignorance would you be so kind as to answer this simple and basic question? I am working with: ColdFusion MX (a beginner) Flash MX 2004 professional (intermediate) Flash... more >> Is Flash Remoting included in CFMX 6.1 Standart Edition? Posted by KnightRider at 10/22/2003 1:16:08 PM Hello I'm a cold fusion developer. In development process of an apllication I've used flash remoting services to send/receive data between CFMX and Flash MX. Although it is running on my companies' local network, I can't run this apllication on my partner companies' web-server. CF MX 6.1 standart ed... more >> Remoting with .NET Web Services 1.1 Framework Posted by Theresa Smallwood at 10/22/2003 10:09:51 AM We're having a problem with a web service that we've created that I'm using for Flash remoting. We originally created the service using the 1.0 .NET Framework, but we have since upgraded to .NET 1.1 Framework. Now, when I try to connect to the services, I get errors like "No such service exists"... more >> The Best server side works with Flash MX Posted by monkeykk at 10/22/2003 9:01:53 AM Hi there, ASP,PHP,Perl or dotnet(VB.NET,C#.NET,etc...) Which is the best as the server side works with Flash MX2k4? Give another one if there isn't shown. Tim ... more >> new features of Flash MX Pro 2004 Posted by Aaron Lee at 10/22/2003 6:15:34 AM Hi, I am trying out the Flash MX Pro 2004 and like it a lot. I wonder with all the new features e.g. WebServices and backend database support - does it mean that Flash Remoting or Flash Comm Server is no long necessary? I read an article on Macromedia's tutorial talking about the importance o... more >> Flash remoting works for localhost only, not IP Posted by MissionView at 10/21/2003 9:07:45 PM I just upgraded to Flash MX 2004 and noticed that when my Flash pages that use Flash remoting are brought up in my web browser using...... they work fine. But when I bring the up using http://[IP Address]:8500/..... I can see the page, but the data is not retrieved. It's like... more >> DataGrid 2004 Column Header Names ? Posted by RedBaron_ICT at 10/21/2003 3:40:50 PM Has anyone noticed that when you want to change the column headers of a dataGrid, that it defaults to the field name unitl the data is loaded from a dataSet, THEN changes the names after the data is loaded? This is also the same for changing the column widths. How do you initialize the coulmn hea... more >> DRK 4 - Charting Components 2 Posted by kafafl at 10/20/2003 8:34:47 PM This is my second attempt at this. I hope someone replys. I have created a Flash Object for VB that performs Charting in a financial application. (The chart and the issue apply to all DRK Charting Components) I want to chart the returns of bonds against one another. However, the time period... more >> not working over http... Posted by Rooster60602 at 10/20/2003 6:29:46 PM Using Flash Remoting for asp.net, Windows IIS, one development box connected to a web server that's also exposed to the Internet. I've been working through some sample files for Remoting and things work fine when testing the movie within flash. Connects to the .aspx file in the application folder... more >> communicate to flash client Posted by rodrigo guerra at 10/20/2003 5:38:57 PM what protocol does flash remoting use to communicate to the flash client AMF, SOAP, WSDL, XHTML ?? ... more >> Flash Remoting: How does Flash know Classes defined in Webservices? Posted by amanda theinert at 10/20/2003 9:01:03 AM I have a Webservice with Client written in .NET with VB. When invoking the WebMethod an out-Parameter is passed on to the Client. Example - Method-Call in .NET: flashservice.GetStatus("en","1", out Status) Status is declared before the method-Call in the .NET-Client. Status as Se... more >> cf.query in flash emoting Posted by Marco Polo at 10/19/2003 2:19:08 PM having created as asr page with a simple sql statement in CF.query how can I address this: ie. var myVar=CF.query("ds","SELECT max (Id) as 'go' +1 from tbTest") how can I address 'go' : I would have thought return myVar.go or myVar["go"] but doesn't work: any idea?? thanks mat... more >> Problem running sample apps on Tomcat 5.0 Posted by Adam_Ratcliffe at 10/18/2003 9:10:37 PM Hi, I've installed the Flash Remoting for J2EE sample apps on Tomcat 5.0 but have been unsuccessful in running them. When I go to index.jsp is invoked, the header and a 'Loading...' message are displayed and nothing more. I installed the WAR fi... more >> Sending PHP variables to Flash Posted by goran187 at 10/17/2003 2:20:15 PM I have a flash swf on a PHP page. All it does is serve as a button to open a customized page based on an ID number generated by PHP. How do I send this ID number to th Flash movie in order to open the desired page. For example: the Flash movie should open the address ... more >> Help: Flash Remoting only works when accessed via localhost Posted by g35driver at 10/16/2003 3:28:33 PM I am sure that this is a simple fix, but.... I put together a very small Flash Remoting application on my local machine (running the CF web server at). It works fine, accesses data, change handlers are good, etc. The production server is not using the CF Web server, so ... more >> If I am remoting, how do I know I am connected? Posted by ooba at 10/16/2003 12:48:39 PM I have been playing around with remoting to connect to some cfc files. I can do this just fine. The question is this "If the connection to the cfc file is not made, how do I know? other than the obvious no data is returned." Is there a function that checks to see if the gatewayConnection.getServic... more >> protocol Posted by rodrigo guerra at 10/16/2003 9:36:50 AM what protocol flash mx 04 uses to comunicate with flash client ? i know that remoting in flash mx, uses AMF, but i heard that mx 04, uses SOAP too ..???? thanks. ... more >> Flash MX 2004 Pro - What about CFCs?!? Posted by NewMXer at 10/15/2003 9:09:52 PM Well, crud!! Is it just me or what? I'm really disappointed with the new version of Flash. As a CF developer, I decided to play around with adding some flash content to some forms a couple of months ago using Flash MX Remoting. Just when I decided that I was writing too much code in ActionScri... more >> **Error** RsDataProviderClass.as Posted by db tahoe at 10/15/2003 6:05:41 PM **Error** RsDataProviderClass.as: Line 35: ActionScript 2.0 class scripts may only define class or interface constructs. RsDataProviderClass.prototype.addItemAt = function(index, value) When I include the "NetServices.as" in my .FLA I get 14 of these errors. I have re-installed the 2004 remoting ... more >> Setting up SQL with ASP.NET assemblies Posted by Voods at 10/15/2003 9:06:27 AM Hello all, The flash remoting sample file - RemoteCatalog.cs, a .NET calss file, has the following code: //start of code using System; using System.Data.SqlClient; using System.Data; using System.Data.OleDb; namespace samples.remoting { public class RemoteCatalog { protected Sy... more >> Webservice response size limitation Posted by ricardo_cambio at 10/14/2003 2:29:13 PM Tom, Thanks for replying, I'm having issue with what I a believe to be a size limitation with a web service response. I've written a application with Flash MX that uses Flash Remoting(.Net) for communication with a JRUN web service. The webservice returns formatted xml data based upon the r... more >> Flash Remoting MX vs Flash 2004 MX Standalone Posted by Lukas Bradley at 10/13/2003 3:18:00 PM Hello all, I have a short question and a long question. Both ask the same thing; my long version just clarifies what I'm asking. Please do not thing I am trolling, or attempting to insult a Macromedia product. I'm trying to better understand the offerings. Short question: what does Flash... more >> Flash Remoting Samples woes........ Posted by Lim Hoe Keat at 10/13/2003 3:04:38 AM i got this pdf file from in page 2, it states "Sample files included in "flashremoting dotnet examples.zip"" in page 4, it states "Extract all from the FlashRemotingSamples.zip............"... more >> Flash Remoting - Java Deployment Posted by badalex at 10/12/2003 6:39:41 AM Hi, I recently purchased Flash Remoting - Java and I'm trying to get this to run on Caucho's Resin. But, I could not deploy the war file. So what I did was unjar the war file and took out the lib file and placed it in its proper place on the system WEB-INF/lib. And it does work perfectly. I... more >> refresh data from database Posted by James Crook at 10/12/2003 4:57:43 AM Hi, I've got Flash remoting and cfmx accessing a mySQL database, and it's all working nicely, but when I ADD something to the database and then query it again, I only get the same query results as the first time, even though if I close the application and start it again, the added items show up. ... more >> what is flash remoting? Posted by Raam at 10/10/2003 6:59:15 AM Hi all, What is flash remoting? With this can i develop e-commerce applications? what else are needed with this tool. Do we need flash comm server for this? or else we need cold fusion? I am new to all other products except flash and dreamweaver. What would be ther server... more >> Urgent help Needed! no response from a cfc method call Posted by Tonio34 at 10/9/2003 9:24:49 AM Hi everyone, i'm in need of some clue of a problem i have here. i've developped on my local machine a flash movie using flash remoting. during the developpement i connect on a local server to get data from a database, and everything works fine. when i release it to my customer, it seems that t... more >> Can't Consume Web Services Posted by othethe at 10/8/2003 3:29:30 AM I could use the WebServiceConnector to consume a web service in authoring environment. It fails when I published the application due to the security policy of the latest flash player. I can no longer sucessfully consume web services in IE and Netscape, but I can do it in Mozilla! Do anyone know w... more >> recordset object, Flash7 datagrid, checkbox Posted by Runnerman12 at 10/7/2003 7:20:56 PM Is it possible to add a checkbox component within a recordset object? I just need a Yes or No. Thanks, Runner ... more >> Flash Remoting Software installation Posted by dennismr at 10/7/2003 6:20:17 PM Does the flash remoting software have to be installed on both the remote server and the sever hosting the webservice? ... more >> Calling a cfc that is calling a java class with cfobject Posted by anglisa at 10/7/2003 2:06:31 PM I have created a very simple cfc that is calling a java class using the cfobject tag. The cfc was tested with a cfm page and it works fine. However, when I am using flash remoting to access the cfc, it fails when the method is called with the following AMF and status messages. I am running coldfusi... more >> · · groups Questions? Comments? Contact the d n
http://www.developmentnow.com/g/72_2003_10_0_0_0/macromedia-flash-flash-remoting.htm
crawl-001
refinedweb
3,273
74.79
CSS and Controls I have been learning about the CSS part of JavaFX, which seems like a great idea, but Im just wondering something. If I have a custom control, which extends CustomNode, that, in the create() method returns a Group of nodes, such as a heap of different lines, and rectangles etc... How would I apply a CSS style to an internal node of that group (i.e. one of those internal rectangles)? I cant seem to get it to work just using the styleClass properties, although maybe my CSS declarations are wrong, anyone had any experience with this? Thanks Heaps! Mark Ok, so after a little more testing, I have found that you can indeed do what I wanted, Im not sure why my earlier tests did nothing, but now they are working. I have a similar setup to this: [code] public class MyOwnControl extends CustomNode { public override function create():Node{ var yAxisLine = Line { startX: 10, startY: 10, endX: 100, endY: 500 styleClass: "yAxisLine" } return Group{ content: [ yAxisLine ] } } } [/code] [b]Note, keep in mind that the +yAxisLine+ is an inner property of the returned Group[/b] Then inside the CSS sheet, I have this: [code] "mytestingpackage.MyOwnControl" .yAxisLine { stroke: BLACK; } [/code] I hope this helps anyone else with a similar issue
https://www.java.net/node/689319
CC-MAIN-2015-14
refinedweb
213
62.01
Connect with developers and Intel engineers In my application, I have the following pattern occurring at many places: loop_over_i(i){ do_something(i); // common for all loops do_remaining_things(i);} The do_something part is common for most loops and so I am trying to extract it out with a class. However, the loop no longer vectorizes. A sample code is given below: #include <iostream> using namespace std; class Loop { public: Loop(int max) : max_(max), count_(0), sqr_(0) { } void more() { ++count_ ; sqr_ = count_ * count_; } bool done() const { return count_ >= max_; } operator int() const { return count_; } int sqr() const { return sqr_; } private: int count_, max_, sqr_; }; int main() { int square[50]; // this loop vectorizes for (int i = 0; i < 50; ++i) { square[i] = i*i; } // this loop doesn't vectorize for (Loop loop(50); !loop.done(); loop.more()) { square[loop] = loop.sqr(); } } How can I implement what I need in a way that the loop vectorizes? When indexing square[i], elements i and i+1, i+2, ... are in adjacent memory locations, thus permitting vectorization. The compiler vectorization code will recognize loop syntax such as i<50 and ++i and can analyse that for candidate of vectorization. When you declare the class Loop, and declare the member functions, the compiler does not have the necessary information in order to vectorize the loop. The compiler _potentially_ could vectorize the code if you inline the member functions. But having the !loop.done() may present a problem. Also note that your more() function is computing the result for the element after the last element. Consider converting the more() to return a bool and remove the done function. for(Loop loop(50); loop.more();) where inline bool more() { if(count_ >= max_) return false; sqr_ = count_ * count_; ++count_; return true; } This syntax will not necessarily vectorize. It would depend on how aggressive the the compiler was at vectorization. Jim Dempsey The fundamental problem you are experiencing is by making the code opaque by encapsulation with member functions and iterators you also tend to make it difficult, if not impossible for the compiler to vectorize or even to determine vectoizability of the code (as it chews through the data). This is the penalty you pay for progress. By using "old school" programming techniques such as array of like members of each object (as opposed to C++ arrays of objects of elements) the data layout favors vectorization. The requirements of the application would indicate the better of the two techniques. If you need the vectorization for performance then consider unpackaging the objects. Jim English | 中文 | Русский | Français
http://software.intel.com/en-us/forums/intel-c-compiler/topic/61345/
crawl-002
refinedweb
428
53.71
ioctl(2) [redhat man page] IOCTL(2) Linux Programmer's Manual IOCTL(2) NAME ioctl - control device SYNOPSIS #include <sys/ioctl.h> int ioctl(int d, int request, ...); DESCRIPTION The ioctl function manipulates the underlying device parameters of special files. In particular, many operating characteristics of charac- ter), ioctl_list(2), mt(4), sd(4), tty(4) BSD Man Page 2000-09-21 IOCTL(2) IOCTL(2) BSD System Calls Manual IOCTL(2) NAME ioctl -- control device LIBRARY Standard C Library (libc, -lc) <sys/ioctl.h>. GENERIC IOCTLS Some generic ioctls are not implemented for all types of file descriptors. These include: FIONREAD int Get the number of bytes that are immediately available for reading. fd argument is not a valid descriptor. [ENOTTY] The fd argument is not associated with a character special device. [ENOTTY] The specified request does not apply to the kind of object that the descriptor fd. BSD September 11, 2013 BSD
https://www.unix.com/man-page/redhat/2/ioctl/
CC-MAIN-2020-10
refinedweb
154
57.67
Hi bro, you can use this {% url 'home' %} the app_name isn't the namespace, the namespace is define on the main urls.py, if you haven't a namespace defined, so, with the name assined on the path() is sufficient greetings! ____________________________________________________ *Franklin Sarmiento* *Full-stack developer* *skype: franklin.s.dev* *Twitter: @franklinitiel* *linkedin: Franklin Sarmiento ( [email protected] <[email protected]> )* *E-mail: [email protected] <[email protected]>* *Teléfono(s): +57 320 490.79.64 / +58 426 273.8103 ( whatsapp )* El mié., 11 jul. 2018 a las 20:56, Aniket Aryamane (< [email protected]>) escribió: > Hello, > > If in the urls.py, I can write: > *app_name* = '*posts*' > . > . > path(' ', views.home, name='home'), > > > > then why it is required to refer url name (from the template) by the *app_name > value* as: > {% url '*posts*:home' %} > > It should be referred instead by *app_name variable* like below: > {% url '*app_name*:home' %} > > > What do you guys think? > > > > Thanks, > Anik.
https://www.mail-archive.com/[email protected]/msg50550.html
CC-MAIN-2018-47
refinedweb
154
69.28
Using Gatsby to create your website has great SEO benefits. Simply saying, Gatsby converts your React components into html, css and javascript that can be easily be served using CDN. Then what’s so fancy about SEO with Gatsby? Gatsby has the power to call a datasource(mostly an API or in our case the local filesystem) at build-time and make the data available to React components through props using a GraphQL query. Let’s take a deeper look at that in action by creating a static page from a markdown file. Create a basic hello world starter using gatsby cli Download and install the gatsby-cli globally npm install gatsby-cli -g Initiate a hello world starter gatsby new static-page Change the directory and run the project cd static-page npm start - At this point you’ll be able to see a page with “Hello world!” in your browser by navigating to This is the basic setup for a hello world gatsby website Add plugin to read markdown files from file system To read data from the files in your local system, we’ll be adding a plugin called gatsby-source-filesystem. This plugin will read data from markdown files and make it available to be queried by GraphQL. Install gatsby-source-filesystem yarn add gatsby-source-filesystem --save Configure the use of plugin in gatsby-config.js. If the file is not present, you can create one at the root of the project, Gatsby would read this file to load the set of plugins listed in the pluginsarray. Your gatsby-config.js should look something like this: module.exports = { plugins: [ { resolve: `gatsby-source-filesystem`, options: { path: `${__dirname}/src/static-pages`, name: "markdown-pages", } } ] } Now we can go ahead and create a markdown page in our file system. At the path mentioned during plugin configuration, create a file with any name in /src/static-pagesdirectory with the following content --- path: "/my-first-static-page" date: "2020-01-05" title: "Creating a static page to learn Gatsby" --- Hello, this my first gatsby static page by reading <a target="_blank" href="">this</a> blog. Any markdown syntax in this file, will be converted to html to be displayed on a webpage. Gatsby has a plugin right for that. At this point, if you run your project using npm start and go to, you’ll be able to query the file details that this plugin has read from the filesystem. You’d be able to see something like this If you have multiple files, you’ll be able to see multiple items in the edgesarray. One for each file. You’ll need to rerun the project npm startif you make any changes to gatsby-config.jsfile. By this step we can know that Gatsby has read the content from our local filesystem and made it available to be used in our project. Converting markdown files to HTML Let’s add one more plugin, gatsby-transformer-remark to our project. gatsby-transformer-remark would convert the files that we fetched using gatsby-source-filesystem from markdown to HTML. Install gatsby-transformer-remark yarn add gatsby-transformer-remark --save Configure the use of this plugin in gatsby-config.js. Let’s go ahead and add one more item to the array of plugins in gatsby-config.js. Your gatsby-config.jsshould look something like this by now. module.exports = { plugins: [ { resolve: `gatsby-source-filesystem`, options: { path: `${__dirname}/src/static-pages`, name: "markdown-pages", } }, 'gatsby-transformer-remark' ] } We’ll not be configuring other options of this plugin and use the default ones. But I recommend checking out the plugin documentation if you need custom implementation. Did you notice the values of path, date and title that we wrote inside of ---in the markdown file? It is this plugin that reads the values inside and makes it available as separate nodes to be fetched data from. These values are called frontmatter and we’ll further look more into it. gatsby-transformer-remark would create a new node at the parent level of your GraphQL data that you can query and check the markdown converted to HTML. Execute this query query MyQuery { allMarkdownRemark { edges { node { frontmatter { date path title } html } } } } We’ll get a result like this Notice the values of date, path and title are displayed within the frontmatterobject. These values are not expected to be of markdown syntax and are not converted to HTML during the conversion process. These values serve as metadata for your blog that can also be queried and used as filters, just in case, in your GraphQL query. Notice the value against the htmlkey? All the markdown that was inside of our file got converted into html. Pretty neat huh? Create a template to display queried HTML Now, let’s go ahead and create a template page(actually a react component) that can be used to query data from GraphQL and render it on a web page. Create a file blog-temple.js anywhere inside your src directory with the following content import React from "react"; import {graphql} from 'gatsby'; const BlogTemplate = ({data}) => { const { markdownRemark } = data; const { frontmatter: { title, date } , html } = markdownRemark; return ( <div> Title: {title}<br/> Date: {date}<br/> <div dangerouslySetInnerHTML={{ __html: html }} /> </div> ); } export default BlogTemplate; export const pageQuery = graphql` query StaticPage($path: String!) { markdownRemark(frontmatter: { path: { eq: $path } }) { html frontmatter { date path title } } } `; This component is capable of querying data from our GraphQL source and render it on to a web page. See how the frontmatter data and the html data are available and used in different ways. Create a page using the template and Gatsby lifecycle APIs Gatsby has lifecycle APIs that let you control the website build flow like fetching data from different sources, creating pages from those fetched data and many more that you can find here. For our use case, we’ve already fetched data to be displayed using gatsby-source-filesystem from our local filesystem. We’ll now be using Gatsby’s node API createPages to use that data and create a page using the blog-template that we just created. Create a new file at the root of your project and call it gatsby-node.js. Gatsby will read exported functions from this file at build time and we can execute our custom code inside those functions. Go ahead and add this content to gatsby-node.js. exports.createPages = ({ actions, graphql }) => { }; In this function we can now query our data using the graphql variable and create a page using actions.createPage. Let’s now do the following things in this function - Fetch data using a graphql query - Loop through the data and call createPageusing blog-template.js const path = require("path"); exports.createPages = ({ actions, graphql }) => { const {createPage} = actions; const blogTemplate = path.resolve(`src/templates/blog-template.js`); return graphql(` { allMarkdownRemark( sort: { order: DESC, fields: [frontmatter___date] } ) { edges { node { frontmatter { path } } } } } `).then(result => { if (result.errors) { return Promise.reject(result.errors); } result.data.allMarkdownRemark.edges.forEach(({ node }) => { createPage({ path: node.frontmatter.path, component: blogTemplate, context: { url: node.frontmatter.path } }); }); }) }; This would create a page at the path mentioned in the path value of the frontmatter of the markdown file. Here’s how the page would look: You can update the blog-template.js (react component) to style the different elements displayed on the static page to suit your needs. When you build the project, generally using the npm run build or gatsby build command, Gatsby would generate a static HTML file from your markdown file that can be found in the public directory at the root of your project. Adding more static pages Moving forward, adding more static pages is very simple. You may just add more markdown files to the directory that you have configured with the gatsby-source-filesystem and they’ll be generated and built in the gatsby-node.js file as the function loops through all the items in the graphql query. Important Tip: Always remember to specify a unique value for url in frontmatter throughout your website. Here’s a GitHub repo of all the code generated during this blog: In the next blog, I’ll be writing about how you can configure Gatsby to use multiple templates.
https://www.parthp.dev/static-page-in-gatsby/
CC-MAIN-2020-29
refinedweb
1,371
60.85
I started out by taking an early morning ride on my Triumph Bonneville. I stopped for breakfast at one of my favorite coffee shops in Boulder Colorado, Caffe Sole, where I got a chapter read in Dixit and Nalebuff's popular introduction to basic game theory, Thinking Strategically [Norton, 1993]. I had lunch with Mrs. Overclock at a deli we'd never eaten at before while on our way to a hardware store that I'd never been to before (but she had), Harbor Freight, where I discovered an economically priced solar charge regulator for one of my projects. We finished the afternoon and evening at a cookout with some good friends of ours where I may have indulged in a few fine brewed adult beverages. Somewhere amongst all those activities I discovered that I have not in fact memorized Stroustrup's epic reference book The C++ Programming Language [Addison-Wesley, 1997]. A big Thank You to Nishant in the LinkedIn.com group C++ Professionals for that little tidbit regarding the .* and ->* operators in C++. I was so surprised by this revelation that I had to write a little test program to convince myself how they worked. Here is the complete source code below annotated with comments. The remarkable part is in the second half of the program. I used both references and pointers, and both fields and methods, just to demonstrate to myself how all of it worked. #include <cstdio> class Foo { public: int field; void method() { printf("Foo@%p::method(): field=%d\n", this, field); } }; int main() { // We declare an object named bar of type foo. Foo bar; // We declare a reference to bar named barr // and a pointer to bar named barp. Foo &barr = bar; Foo * barp = &bar; // We use the usual mechanisms to set a field // of bar to 1 and to invoke a method of bar. barr.field = 1; barp->method(); // We declare a pointer to the field and a pointer // to the method in _any_ object of type Foo. Note // that this has absolutely _no_ mention of any // specific object of type of Foo, like for example // bar. Perhaps the implementation is storing the // offsets to the field and method in the variables. int Foo::*fieldp = &Foo::field; void (Foo::*methodp)() = &Foo::method; // We use the .* and ->* operators to set a field // of bar to 2 and invoke a method of bar. barr.*fieldp = 2; (barp->*methodp)(); return 0; } And when you compile this little marvel for i686 using GNU C++ 4.4.3 and run it on Ubuntu GNU/Linux 2.6.32-40, here is what you get. coverclock@silver:misc$ g++ -o pointer pointer.cpp coverclock@silver:misc$ ./pointer Foo@0xbff1841c::method(): field=1 Foo@0xbff1841c::method(): field=2 Stroustrup calls this Pointers to Data Members (section C.12, p. 853 in my hardbound Special Edition), although as he describes and as you can see above, it applies equally to fields or methods..
http://coverclock.blogspot.com/2012/05/c-pointer-to-member-operator.html
CC-MAIN-2017-09
refinedweb
492
71.24
How not to do dependency injection - using XML over fluent configuration Virtually all the developers that I come across who do not like using IoC containers have been put off by verbose, error prone XML-based configuration. Many people do not even realise that a type-safe alternative is available in all the popular IoC containers. If you are struggling with XML config, fluent configuration could be the answer to your prayers. This implementation of each interface in your solution (which is very common), XML configuration has many drawbacks whilst offering no discernible. Another method of component resolution can be found in many IoC containers which can dramatically reduce the amount of configuration required. Auto Registration can be set up to automatically register all components matching certain criteria. For example, you could specify a certain namespace, a list of assemblies, an interface name and many more. Not everyone likes this approach though and some developers prefer to see exactly what is being registered., I would really appreciate it if you could share it with your Twitter followers.Share on Twitter There is a serious disadvantage to using the fluent configuration: in order to compile fluent code you must have references to all the code. If I have an interface and three implementations of that interface, if I use fluent configuration I must import all three implementations. If I use XML configuration then the application is "wired up" by the XML. Only the XML has the references. I like this set of three articles and I also agree with Jeffrey's comment with regards to the Unity container. To his point, because the container depends on the application objects (so it can register the types) and the app objects depend on the container (so they can get injections) a conceptual circular dependency is formed. There are ways around this circular dependency but I'd argue they are more trouble than using XML files. (For an example of a work-around see the Microsoft - Spain DDD sample app and associated doc.) StructureMap and other containers have additional mechanisms ("auto discovery") to deal with bootstrapping the container while minimizing the use of XML files. Even though this series of DI posts are already one and a half years old I believe they contain a lot of useful explanation about how things should be done. However, it's not correct that config settings cannot be injected using XML. In Unity for example it's very straight forward to write your own extension that does the job for you. I frequently use an extension for appSettings and connectionString injection. A sample can be found here: Even though for simple projects I agree with the author on using fluent config I have also worked on a couple of projects where we leveraged interception for logging and exception handling purposes (cross-cutting concerns). It's very handy if you can just change the interception configuration on a production server to enable extra logging/tracing information in order to resolve an issue. The beauty of that is that you can do that on a component basis. I'm sure some people will argue that this could be controlled via some appSettings but that would certainly not be as simple as changing the interception configuration. I also agree with Jeffrey McArthur's comment. I used fluent registration in one of my project for which I created a bootstrap file in the "startup" project. However even though the concrete types are referred to in the bootstrap file only, the "startup" project required reference of all assemblies that contained the concrete implementation. This leads to one more disadvantage of fluent configuration - Exposing your concrete types inadvertently. So now despite abstracting all your concrete implementation, you are still providing a "gateway" to direct usage and potential code-break.
https://www.devtrends.co.uk/blog/how-not-to-do-dependency-injection-using-xml-over-fluent-configuration
CC-MAIN-2017-34
refinedweb
636
51.07
IRC log of musicbrainz on 2006-03-08 Timestamps are in UTC. 00:00:31 [Jetpack] Jetpack has quit 00:06:14 [chocomo] Ta~daa! 00:07:09 [chocomo] of course , not inlcuding stuff I couldn't find 00:07:11 [chocomo] and now 00:07:15 [chocomo] * chocomo will roll to bed 00:09:19 [fuchs] mr ownership deal? 00:09:23 [chocomo] yes 00:09:25 [chocomo] I am weird 00:09:29 [fuchs] what's that mean? 00:09:50 [chocomo] its chocomo-wonk for 'I own this junk' 00:10:16 [chocomo] hey! ruaok greats us with waga and klepto 00:10:23 [fuchs] that reminds me of the summit where we wanted to tag everything with either "good music" or "bad music", but noone seems to be really doing it ;) 00:10:26 [chocomo] I can say mr ownership deal 00:10:35 [chocomo] heh 00:11:13 [chocomo] but mb really needs a 'owned releases' type feature 00:11:21 [chocomo] because then I can totaly spesify that 00:11:24 [chocomo] woo 00:12:03 [fuchs] no in mb as mb is more about the music not the users 00:12:04 [chocomo] and ofcourse 'owned' albums could one pretty much go around and ask who owned it for more info / or be sure this person keeps its data good 00:12:07 [yllona] so can i, but i doubt i'd put in the effort 00:12:15 [fuchs] +t 00:12:37 [fuchs] that's what discogs is for :) 00:12:48 [chocomo] ewll mb should cater a bit to some users want to put things in lists 00:12:49 [chocomo] :p 00:12:59 [chocomo] besides disogs is crap + hasn't got my releases 00:13:11 [chocomo] and I'm NOT adding them 00:13:12 [fuchs] add them ;) 00:13:12 [yllona] i only owrk on mB stuff that I own anyway, unless someone asks for asistance 00:13:14 [chocomo] fuck no 00:13:21 [yllona] *work on MB stuff 00:13:21 [chocomo] hehe yll 00:13:34 [yllona] hello, mo 00:13:42 [chocomo] only I'm saying it would be a like a second 'subscribed' type thing, but for albums 00:13:45 [chocomo] hi Y 00:14:04 [fuchs] which would blow up the db with non factual stuff 00:14:08 [chocomo] and it *would* help metadata as 'owned' albums are less liekly to be erranous 00:14:29 [chocomo] fuchs: I am dreaming, stop puttign your foot on my break gear pleace 00:14:45 [fuchs] well this could be done as an external sideproject :) 00:14:54 [chocomo] and I own that is a fact :p 00:15:07 [fuchs] *g* 00:15:25 [chocomo] but for example my oppinion of it, is not something that belongs in mb 00:15:31 [chocomo] :9 00:15:34 [chocomo] now I will bed 00:15:39 [chocomo] say hi to sen from me 00:15:42 [chocomo] natta 00:15:44 [fuchs] owning it is an opinion of yourself 00:15:57 [fuchs] sleep well 00:16:10 [yllona] but fuchs it would be nice to have a flag for mods that says something like "do you own this album?" .might cut down on mod flames :) 00:17:05 [fuchs] there aren't that many flames 00:17:10 [fuchs] and most of them are good 00:17:29 [yllona] yes, i agree. only a few spoilsports 00:17:42 [yllona] * yllona plays for fuchs 00:17:45 [yllona] * yllona is listening to Fair But So Uncool by Earth Wind & Fire from Open Our Eyes (1974) (1:37 / 3:40) 00:18:27 [fuchs] pah! :-þ 00:20:37 [yllona] :) 00:26:28 [yllona] err, did i deliver a "chatkill"? :P 00:26:51 [yllona] okay fuchs, try this one 00:26:54 [yllona] * yllona is listening to Natural Man by Lou Rawls from Natural Man (1971) (0:56 / 3:43) 00:34:57 [SenRepus] fuchs discogs is incomplete and slow and stupid 00:35:53 [fuchs] now that i call constructive criticism ;) 00:35:57 [orogor] orogor has quit 00:37:53 [Shepard] the best criticism... ever! 00:37:56 [Shepard] sorry bad joke 00:41:16 [oqp] oqp has quit 00:42:16 [fuchs] shep, i'm glad you recognized this yourself ;) 00:46:44 [fuchs] * fuchs is still upset about yllona calling him "fair" 00:47:45 [yllona] fuchs: fair as in "even-handed", not fair as "pretty" 00:48:55 [fuchs] oh, now she's even saying that i'm not pretty! it's getting better and better 00:49:26 [yllona] that's waht the song is about. "the life you lead is fair, but so uncool" 00:51:57 [Jetpack] Jetpack has joined #musicbrainz 00:54:53 [BGreeNZ] BGreeNZ has joined #musicbrainz 00:56:31 [BGreeNZ] luks: How's it going with Picard 0.6.0? 00:57:30 [yllona] okay fuchs i give up. i apologize. 00:58:10 [yllona] here ya go: 00:58:16 [yllona] * yllona is listening to I Apologize by Billy Eckstine from That Old Black Magic (1951) (0:15 / 3:20) 00:58:33 [BGreeNZ] !seen luks 00:59:31 [fuchs] BGreeNZ: luks is here ;) 01:00:08 [fuchs] yllona: i'm jsut kidding (and cooking, that's why it's taking so long for me to respond) ;) 01:00:31 [yllona] no worries 01:00:42 [yllona] what are you cooking? 01:00:59 [fuchs] not sure yet ;) 01:01:01 [BGreeNZ] Yes, I see that. I was trying to ask BrainzBot (who *doesn't* seem to be here) when he was last seen 01:01:23 [fuchs] BGreeNZ: brainzbot would have answered the same thing i did 01:02:12 [BGreeNZ] Usually says when the person last spoke, though, doesn't it? 01:03:29 [yllona] i theink the brainzbot scripts are offline 01:03:41 [yllona] !weather 90732 01:05:14 [yllona] yep. brainzbot down for the count 01:05:24 [BGreeNZ] BGreeNZ is now known as BGreeNZBot 01:05:35 [BGreeNZBot] Look out the window 01:05:40 [BGreeNZBot] BGreeNZBot is now known as BGreeNZ 01:07:26 [Jetpack] hehe 01:09:28 [BGreeNZ] Glad someone enjoyed it. :-D 01:12:57 [Jetpack] girls like guys with skills... nunchaku skills.. computer hacking skills.. 01:13:46 [Jetpack] hmm doesn't sound as funny in print as it does in the movie 01:13:53 [BGreeNZ] * BGreeNZ is confused 01:14:16 [yllona] * yllona too 01:14:50 [Jetpack] line starting with "girls" is a quote from a movie: i thought i would throw in something funny in return but didn't work obviously 01:15:22 [BGreeNZ] Awww.. So you don't like me? :-( 01:15:26 [Muz] Hmmm, is it possible to rezise a partition using Knoppix, and if so, how? 01:15:50 [yllona] * yllona hasn't seen many recent movies, avoids pop-culture in general 01:15:55 [Jetpack] no its just something funny i've been saying to myself all day 01:16:10 [Jetpack] im a guy 01:16:38 [BGreeNZ] Knoppix? Isn't that one of the many Debian-based micro-distributions of GNU/Linuk? 01:16:39 [Jetpack] the movie is "napoleon dynamite", i haven't seen it either but my friends have been quoting it and making me piss myself 01:16:50 [BGreeNZ] s/Linuk/Linux/g 01:16:54 [Muz] Yes 01:17:15 [Jetpack] "parted" does resizing i think 01:17:43 [BGreeNZ] What FS? ext2 or ReiserFS? 01:17:55 [Muz] ReiserFS 01:18:43 [BGreeNZ] * BGreeNZ logs into his Debian firewall to browse through APT 01:21:32 [fuchs] The resize_reiserfs tool resizes an unmounted reiserfs file system. It enlarges or shrinks an reiserfs file system located on a device 01:22:02 [BGreeNZ] Personally, I use Acronis OS Selector 5. Aside from being a kick-arse boot manager, it also has a partition manager that can resize (almost) every FS under the sun 01:22:36 [BGreeNZ] But, yeah, resize_reiserfs will probably be what you need. 01:23:14 [Muz] Ah okay, /me looks into it 01:24:24 [Shepard] luks? 01:24:35 [BGreeNZ] It's part of the "reiserfsprogs" package 01:25:35 [Joan_W] Joan_W has joined #musicbrainz 01:26:02 [BGreeNZ] Or, if you're keen, you could try "reiser4progs" 01:27:42 [BGreeNZ] * BGreeNZ checks for updates while he still has aptitude open 01:30:03 [BGreeNZ] * BGreeNZ envies people who aren't stuck in "dialup hell" 01:30:23 [flamingcow] dialup?!?! 01:30:53 [Muz] * Muz goes to sleep instead to return to this hdd pain in the arse tomorrow 01:31:09 [Joan_W] Night Muz 01:31:32 [yllona] g'night muz 01:31:40 [Shepard] natta muz 01:32:18 [BGreeNZ] Yes, "beep-beep-beep-beep-beep-beep-beep-beep-beep, Shreeeeee kululup Grepeow Grepeow hisss HISSSS.." 01:35:07 [Joan_W] very quiet on here tonight. I think I'll go to bed. Night 01:35:37 [BGreeNZ] Okay. Sleep well. 01:36:04 [Shepard] night 01:36:42 [Joan_W] Joan_W has quit 01:37:05 [BGreeNZ] Hmm. This probably wouldn't take as long if I didn't have a surplus of mirrors in my apt.sources or whatever it is... 01:37:31 [flamingcow] sources.list :-P 01:37:46 [BGreeNZ] * BGreeNZ <3 his ISP for mirroring Debian 01:39:06 [BGreeNZ] At least that way I don't have to drag it half-way across the world/country in conjunction to squeezing it through a 56Kbps modem... 01:39:48 [flamingcow] they can mirror debian, but not get dsl/cable/fibre out to you? :-P 01:41:28 [BGreeNZ] Well, Telecom New Zealand owns most of the phone lines, so ADSL ultimately has to come through them. InSPire Net do, however, offer high-speed *symmetrical* wireless 01:42:06 [Jetpack] what qualifies as high-speed? 01:42:47 [Jetpack] oh you've got dial-up 01:44:19 [BGreeNZ] Well, if you're talking about Telecom's most basic ADSL package (JetStart), they actually call 256Kbps *broadband*, believe it or not 01:44:50 [ojnkpjg] i get 15Mbit/2Mbit now 01:44:51 [ojnkpjg] hooray for me 01:46:06 [Jetpack] BGreeNZ in South Africa, Telkom sell 192 as broadband 01:46:25 [BGreeNZ] * BGreeNZ gags 01:46:34 [BGreeNZ] 01:46:37 [ojnkpjg] 01:46:39 [Jetpack] 3 gig cap 01:47:05 [BGreeNZ] Those are InSPire Net's wireless plans 01:47:57 [Jetpack] ojnkpjg: *drool* 01:47:59 [ojnkpjg] what's nice about FIOS is that they install a wireless router that broadcasts SSID, gives just anyone a dhcp lease, and they leave the default admin password on the router as 'password' 01:48:03 [BGreeNZ] They also offer their own brand of ADSL (BitStream), of course, but Telecom owns the copper => Telecom controls the price 01:48:13 [ojnkpjg] so you could do all kinds of nasty things to joe average who gets it installed 01:48:19 [ojnkpjg] like turn the thing into a brick 01:48:20 [ojnkpjg] or worse 01:49:11 [Jetpack] alright from now on we're hosting Musicbrainz at ojnkpjg's house 01:49:29 [ojnkpjg] :) 01:49:32 [ojnkpjg] "no servers!" 01:49:37 [flamingcow] musicbrainz is on less than 15/2mbit ? 01:49:40 [ojnkpjg] whatever a that means 01:50:38 [Jetpack] im not sure what MB is on actually 01:50:54 [Jetpack] someone told me but i forgot 01:52:07 [BGreeNZ] ojnkpjg: Better not run ICQ, BitTorrent, or a zillion other consumer "clients" that operate as Internet servers as part of their normal functionality, then ;-) 01:52:26 [flamingcow] Musicbrainz at 01:52:26 [flamingcow] ojnkpjg's house 01:52:26 [flamingcow] [01:49:29] <ojnkpjg> :) 01:52:26 [flamingcow] [01:49:32] <ojnkpjg> "no servers!" 01:52:26 [flamingcow] [01:49:36] <flamingcow> musicbrainz is on less than 15/2mbit ? 01:52:26 [flamingcow] [01:49:40] <ojnkpjg> whatever a that means 01:52:28 [flamingcow] [01:50:38] <Jetpack> im not sure what MB is on actually 01:52:30 [flamingcow] [01:50:54] <Jetpack> someone told me but i forgot 01:52:32 [flamingcow] [01:52:07] <BGreeNZ> ojnkpjg: Better not run ICQ, BitTorrent, or a zillion 01:52:34 [flamingcow] other consumer "clients" that operate as Internet 01:52:36 [flamingcow] eek, sorry 01:52:38 [flamingcow] mouse gone nuts 01:53:59 [Shepard] I thought they like cheese... 01:53:59 [BGreeNZ] ojnkpjg: Better also check your IRC client is running in passive (firewall) mode. Don't want to be running an IRC server now ;-) 01:54:25 [BGreeNZ] Let's not forget about your FTP client.... 01:57:50 [yllona] later folks 01:57:56 [yllona] yllona has quit 02:00:33 [ojnkpjg] the router they give you does allow remote administration, if you enable it 02:00:47 [ojnkpjg] DIRECT CONTRADICTION 02:01:07 [ojnkpjg] but seriously, i think they just don't want people doing webhosting and such on the service 02:03:04 [BGreeNZ] * BGreeNZ starts installing the 7-or-so security updates available for his Debian Sarge firewall/router 02:04:26 [ojnkpjg] i compulsively dist-upgrade 02:05:12 [BGreeNZ] Hmm. I'm not too sure about that 02:05:24 [flamingcow] i cron it 02:06:58 [BGreeNZ] Picture this. I had just got Debian Woody installed on a desktop machine. About a week later, Sarge is released, so I go ahead and do the dist-upgrade and (almost) everything breaks :( 02:07:34 [ojnkpjg] good times 02:07:45 [ojnkpjg] i've just been running unstable for a very long time 02:07:49 [BGreeNZ] I haven't gotten around to reinstalling Sarge on it yet, and I still have a backup of the Woody installation 02:08:11 [ojnkpjg] debian stable lags about 3 years 02:08:43 [ojnkpjg] i've never had it break on me, either 02:09:09 [ojnkpjg] a few times it's wanted to remove about every package, but waiting a day or two usually gives them time to fix it 02:10:04 [BGreeNZ] Well, in the (upgraded) Sarge installation, KDE/GNOME are a complete mess :( 02:10:09 [ojnkpjg] oh 02:10:25 [flamingcow] the upgrade path is *intended* to be well tested 02:10:28 [ojnkpjg] my machine can barely handle either of them 02:11:06 [ojnkpjg] gnome+firefox will eat most of my ram 02:11:47 [BGreeNZ] Sarge has been out (as stable) for several months now. Do you think it would be worth trying the upgrade again from the Woody backup? 02:12:05 [ojnkpjg] why not just install fresh? 02:13:14 [BGreeNZ] Yes, I think I like that idea better. I don't think there's any important data there, as I had only just started playing with it. 02:13:41 [ojnkpjg] i have tons of cruft that has accumulated over the years here 02:13:46 [ojnkpjg] i really ought to do a fresh install sometime 02:14:01 [BGreeNZ] What is the recommended way to partition a desktop Debian machine? 02:14:07 [ojnkpjg] no idea 02:14:16 [ojnkpjg] it probably lets you select general options 02:14:29 [ojnkpjg] i usually just make one partition the size of the whole disk for a personal machine 02:14:38 [ojnkpjg] even though that's not always smart 02:15:05 [BGreeNZ] * BGreeNZ notices the topic hasn't been changed yet 02:15:12 [ojnkpjg] i don't know, maybe give /var its own partition 02:15:37 [ojnkpjg] i can run stuff over nfs on my desktop machine faster than i can from disk 02:15:40 [ojnkpjg] which is kind of sad 02:16:58 [Jetpack] that is pretty sad 02:17:01 [Jetpack] your hdd must be ancient 02:17:05 [ojnkpjg] ata/33 02:17:11 [ojnkpjg] from 1998, i think 02:17:27 [inhouseuk] inhouseuk has quit 02:19:43 [BGreeNZ] Is there any easy way to share a partition between two root-level directories, e.g. var, tmp, home on one partition and usr, opt on another? 02:20:18 [flamingcow] you can symlink opt in /usr/opt 02:20:59 [ojnkpjg] it's been ages since i installed fresh, but i imagine you could use the shell console to create the partitions the way you want 02:21:02 [ojnkpjg] if they still have that vt up 02:22:08 [flamingcow] well you could mount --bind /usr /opt 02:22:13 [flamingcow] but you'd be mixing stuff in the same dir 02:24:19 [BGreeNZ] Oh, I can create the partitions from my boot manager before I run Setup. That's not the problem. What I want to know is if there is a way to mount a partition containing, say, /var and /home without having symlinks all over the place 02:25:28 [ojnkpjg] you can use mount --bind 02:25:32 [fuchs] *g* 02:26:27 [fuchs] may I? 02:26:38 [fuchs] you could use mount --bind :) 02:27:02 [BGreeNZ] example? 02:27:15 [fuchs] see cow 02:27:36 [ojnkpjg] oh, i didn't see that 02:27:44 [fuchs] hihi 02:27:56 [BGreeNZ] (I'm reading the info page for mount now, but I'm not always sure on these sorts of things) 02:28:24 [fuchs] BGreeNZ: just use the thing flamingcow posted 02:28:32 [fuchs] it's really that simple 02:28:38 [fuchs] 03:22:13 < flamingcow> well you could mount --bind /usr /opt 02:29:08 [BGreeNZ] What exactly does that do? 02:29:20 [fuchs] the thing you want 02:29:33 [ojnkpjg] a unionfs on linux would be nice 02:29:38 [fuchs] bind /opt to the same mountpoint as /usr 02:30:08 [ojnkpjg] i think someone's working on one, but i am way too scared to trust it 02:32:27 [BGreeNZ] That's not what I would want, though. Let's say, for argument's sake, that I have a partition mounted on /mnt. What I would want to do is access /mnt/var from /var and /mnt/tmp from /tmp 02:32:48 [ojnkpjg] that's what mount --bind will do for you 02:32:53 [fuchs] BGreeNZ: yep 02:33:07 [BGreeNZ] * BGreeNZ suspects he may have just answered his own question, but is unsure 02:33:15 [fuchs] BGreeNZ: it's just working with directories, you can use any subdir of a device root 02:33:27 [ojnkpjg] it's just mount --bind cannonical mountpoint 02:33:48 [ojnkpjg] so you could do something like mount --bind /var/tmp /tmp 02:33:52 [BGreeNZ] So that would be mount --bind /mnt/var /var ? 02:34:00 [fuchs] ding 02:34:01 [LTjake] LTjake has quit 02:34:56 [ojnkpjg] it allows -o, too 02:35:10 [ojnkpjg] so you could put something like -onosuid,nodev for the /tmp dirs 02:35:16 [ojnkpjg] and etc 02:35:19 [ojnkpjg] it's really useful 02:36:03 [johntramp] is there a way to make picard open firefox in an existing window when looking up albums? 02:36:55 [BGreeNZ] So, after mount --bind /mnt/tmp /tmp, would the permissions on /tmp be taken from /tmp mountpoint or /mnt/tmp directory? 02:37:11 [ojnkpjg] /mnt/tmp, i believe 02:41:38 [BGreeNZ] johntramp: Do you mean, you want FireFox to recycle an existing window when Picard opens it? 02:42:29 [BGreeNZ] As opposed to creating a new window? 02:48:46 [BGreeNZ] Interesting. johntramp asks for help, but then fails to answer a request for more info. 02:48:49 [ue\sleep\] ue\sleep\ has joined #musicbrainz 02:51:22 [Brandon_72] Brandon_72 has joined #musicbrainz 02:51:58 [Brandon_72] Brandon_72 has left #musicbrainz 02:55:11 [johntramp] BGreeNZ, sry 02:55:38 [johntramp] when ever I locate another album it opens a separate window 02:55:49 [johntramp] I would like it to keep the same window the whole time 02:56:19 [johntramp] also when I locate an album it will open the same window in 2 separate tabs 02:56:49 [BGreeNZ] So you want Firefox to recycle the current window / tab when Picard launches it. Is that correct? 02:56:55 [johntramp] yeah 02:58:38 [fuchs] johntramp: preferences -> tabs -> "open links from other application in" - "the most recent tab/winow" 02:58:55 [fuchs] +d 02:59:10 [BGreeNZ] What he said 03:00:16 [BGreeNZ] I was going to give you the equivalent option in Mozilla SeaMonkey, but that is more relevant to you 03:01:28 [BGreeNZ] Catch will be, of course, that *all* links from *all* other programs will reuse the currently active tab/window, which may not be what you want 03:04:24 [BGreeNZ] This will be in Firefox preferences, not Picard preferences, of course 03:05:06 [johntramp] cheers 03:06:53 [BGreeNZ] johntramp: Just make sure the currently active tab/window contains nothing important before you click on a link in any other program. 03:07:05 [UserErr0r] UserErr0r has quit 03:07:23 [BGreeNZ] 'cause whatever's in that tab will be nuked ;-) 03:08:00 [johntramp] :) 03:08:02 [johntramp] i know 03:09:36 [BGreeNZ] Just wanted to be sure you were aware of that caveat, as it will apply to *all* programs external to FireFox, such as your e-mail client 03:10:49 [Jetpack] Jetpack has quit 03:12:02 [BGreeNZ] luks, are you awake at all? 03:12:23 [Shepard] tried that before, no response :) 03:16:48 [BGreeNZ] Me too 03:17:42 [BGreeNZ] 00:56:31 [BGreeNZ] 03:17:44 [BGreeNZ] luks: How's it going with Picard 0.6.0? 03:20:08 [BGreeNZ] I helped him figure out why Picard was crashing on startup when installed on Win98SE and WinME. It was looking for a UNICOW.DLL that isn't there. 03:21:05 [BGreeNZ] * BGreeNZ wonders if *this* computer has a unicow.dll file he can pinch to see if Picard works on ME 03:22:34 [BGreeNZ] * BGreeNZ finds hordes (well, five) of them on here 03:22:38 [luks] luks has quit 03:22:52 [johntramp] is luks a dev? 03:24:02 [WolfsongOpera] yes 03:26:44 [BGreeNZ] * BGreeNZ swipes a copy of UNICOW.DLL version 1.0.4018.0 from OpenOffice 1.1.0 and, hey presto, up pops Picard! Yay!! :) 03:27:46 [BGreeNZ] Now I just have to figure out what to do with it ;-) 03:27:58 [ojnkpjg] complain about the UI :P 03:28:00 [johntramp] johntramp has quit 03:28:16 [ojnkpjg] that's what most people seem to do 03:29:34 [BGreeNZ] Okay, here goes: The Picard UI looks too pretty 03:29:38 [WolfsongOpera] something i don't understand 03:29:50 [WolfsongOpera] MB Tagger's UI isn't that great 03:29:54 [ojnkpjg] picard is really perfect for what i do 03:30:46 [ojnkpjg] i think the whiner mostly have a directory with 5000 random mp3 files 03:30:57 [ojnkpjg] whiners 03:31:10 [WolfsongOpera] and personally i'd rather complain about web browsers :-) 03:31:25 [ojnkpjg] can we start with IE? 03:31:42 [WolfsongOpera] with mp3s from questionable sources 03:31:51 [WolfsongOpera] nah lets start w/ FF 03:32:00 [WolfsongOpera] IE gets too much attention 03:32:12 [BGreeNZ] Micro$oft Internet Exploder, or Micro$oft Internet Exploiter? 03:32:35 [ojnkpjg] well, let's bash lynx, then 03:32:58 [WolfsongOpera] nah not a fair fight 03:33:05 [ojnkpjg] ok 03:33:15 [BGreeNZ] Yeah, lynx is too fast! 03:33:34 [WolfsongOpera] LOL 03:34:19 [BGreeNZ] I just love what some web servers spew out when you point a text-only browser at them ;-) 03:34:35 [ojnkpjg] "come back with a real browser," in not so many words 03:35:15 [ojnkpjg] which is a form or racism, i think 03:36:41 [BGreeNZ] [spacer.gif] [spacer.gif] Click here! Click here! [title.gif] Click here to visit our homepageHome ... etc being quite typical :) 03:37:58 [BGreeNZ] Click what? My keyboard? :) 03:39:39 [ojnkpjg] might work with gpm support at the console 03:40:52 [BGreeNZ] Hmm. This computer doesn't seem to have Lynx installed. How odd. 03:41:20 [BGreeNZ] * BGreeNZ goes off to download the latest version of Lynx for Windows 03:43:29 [ojnkpjg] they used to have dumb terminals connected via serial cable to some server at the local library where i grew up 03:43:39 [ojnkpjg] i felt very proud of myself for figuring out how to break out of lynx 03:44:08 [ojnkpjg] into a vms shell 03:44:39 [WolfsongOpera] lil hacker 03:44:43 [BGreeNZ] * BGreeNZ never thought to try to break out of the city library's catalog program... 03:45:04 [ojnkpjg] i just wanted to telnet out to read email 03:46:47 [BGreeNZ] I think the Palmerston North City Library did a fairly decent job of securing most of their Wyse terminals to start with. As I recall, a large portion of the terminal's keys were glued down. 03:47:38 [ojnkpjg] hehe 03:47:42 [mudcrow_] mudcrow_ has quit 03:48:35 [mudcrow_] mudcrow_ has joined #musicbrainz 03:48:37 [mudcrow_] mudcrow_ is now known as Mudcrow 03:50:01 [BGreeNZ] What did you do, Mudcrow.... :) 03:50:31 [BGreeNZ] * BGreeNZ uses ChatZilla too ;-) 03:52:40 [Muz] * Muz waves bye bye to BGreeNZ's RAM 03:54:50 [MBChatLogger] I hate the evil empire 03:54:50 [BGreeNZ] * BGreeNZ fires up Microsoft System Monitor to have a look 03:58:22 [toxickore] toxickore has joined #musicbrainz 03:59:19 [BGreeNZ] 464.2MB allocated memory, 19.3MB disk cache size, 94.6MB swap in use, 2.3MB of 192MB free physical memory. 04:00:08 [BGreeNZ] That's just with running SeaMonkey 1.0, Picard, and a couple of PuTTY windows 04:03:11 [BGreeNZ] Muz, Mudcrow appears to use ChatZilla too: 04:03:13 [BGreeNZ] -->|mudcrow_ ([email protected]) has joined #musicbrainz 04:04:03 [Muz] mem[Usage: 398/512MB (77.73%) [||||||||--]] 04:04:08 [Muz] Firefox >.< 04:04:12 [Muz] That and something else... 04:04:50 [BGreeNZ] Ah, so FireFox is just as much a memory hog as SeaMonkey, it appears 04:06:20 [MBChatLogger] windows == pure evil 04:06:20 [BGreeNZ] Muz: Just imagine what my RAM usage would be like if I was running <shudder> Windoze XP 04:06:31 [Muz] This machine is XP 04:06:55 [BGreeNZ] Or, perhaps a more accurate term would be crawling? :-P 04:07:02 [ojnkpjg] or thrashing 04:08:50 [Muz] *crashing 04:09:46 [BGreeNZ] * BGreeNZ thinks his old '286 would boot faster than Windows XP on a Celeron 800 with 192MB RAM, even if I reduced the '286's RAM from 4MB back to 1MB and disabled all ROM shadowing and disk caching 04:11:59 [ojnkpjg] i have a pentium machine that caused me no end of headaches saved in a closet 04:12:07 [ojnkpjg] i am waiting for an opportunity to throw it from a very high place 04:13:01 [BGreeNZ] ROM shadowing and disk caching are nice, and I was very surprised at the difference when I enabled them for the first time, but with 1MB *total* RAM at the time, we just couldn't afford to do that :( 04:13:30 [BGreeNZ] ojnkpjg: What is wrong with the Pentium? 04:13:42 [ojnkpjg] it had a flaky l2 cache 04:14:22 [BGreeNZ] Oh. Too much of a speed penalty if you disable it, I suppose? 04:15:02 [ojnkpjg] yeah, it was dog slow with it disabled 04:15:26 [ojnkpjg] i used to have to disable it to compile libc 04:15:29 [ojnkpjg] it took a couple of days 04:15:52 [BGreeNZ] * BGreeNZ blinks in disbelief 04:16:16 [ojnkpjg] i think it rated at like 20 bogomips with the cache off 04:16:21 [ojnkpjg] maybe less 04:16:25 [ojnkpjg] might ahve been 2 04:16:27 [ojnkpjg] i forget 04:16:48 [BGreeNZ] Perhaps the cache is replacable? 04:16:57 [ojnkpjg] probably was 04:17:09 [ojnkpjg] but i didn't know enough about how to fix it at the time 04:18:02 [BGreeNZ] * BGreeNZ wonders how many bogoMIPS he could get out of a '286 running Bochs 04:22:57 [BGreeNZ] Still, it seems a shame to waste it. 04:23:12 [BGreeNZ] (the pentium, that is) 04:23:32 [Muz] Set it up to run some useless daemon... 04:23:33 [Muz] Hmm 04:23:35 [Muz] * Muz thinks 04:24:24 [ojnkpjg] it's not worth the power it'd suck 04:25:11 [BGreeNZ] I would be finding out if the L2 cache is replacable first, either as a module or individual SRAM chips 04:26:07 [BGreeNZ] socketed ICs, that is. I wouldn't bother wasting time with a soldering iron 04:34:53 [BGreeNZ] * BGreeNZ decides to give up trying to locate a current binary of Lynx for Win32 to download 04:38:47 [nikki_] brr 04:38:59 [BGreeNZ] nikki_ cold? 04:39:14 [nikki_] yeah 04:39:31 [nikki_] the heating hasn't been on since like 7 last night 04:40:10 [BGreeNZ] Oh. 04:40:52 [Muz] * Muz snuggles up in his blanket more 04:41:44 [WolfsongOpera] * WolfsongOpera wonders if nikki_ wet the bed 04:41:52 [Muz] lol 04:41:58 [BGreeNZ] * BGreeNZ stares out the window at the bright Autumn sun 04:42:39 [BGreeNZ] 01:41:45 [WolfsongOpera] 04:42:41 [BGreeNZ] * WolfsongOpera sneaks around to nikki_ and places her hand in a bucket of water and tiptoes back 04:42:47 [BGreeNZ] Yesterday 04:42:53 [nikki_] no, I don't wet the bed. 04:43:14 [WolfsongOpera] did you wake up face down in a bucket? 04:43:24 [nikki_] no. 04:43:40 [WolfsongOpera] oh good... no guilt for me! 04:43:41 [WolfsongOpera] LOL 04:43:46 [Muz] * Muz thinks someone's grouchy from no sleep 04:44:03 [BGreeNZ] nikki_: 01:41:45 04:44:39 [nikki_] I do read the chatlogs. 04:44:53 [nikki_] usually at college... 04:45:16 [Muz] College... :\ 04:45:30 [BGreeNZ] Oh, so you know about the practical (or virtual?) joke wolfsong played? 04:47:47 [nikki_] yes 04:48:22 [WolfsongOpera] hmph! 04:48:52 [WolfsongOpera] all the detail i put into it and that's all the commentary it gets?!?! 04:49:11 [BGreeNZ] * BGreeNZ hands nikki_ a wet towel to slap WolfsongOpera with 04:50:11 [nikki_] yes. it's almost 5 in the morning. 04:50:20 [SenRepus] yl should have said "That's all, folks!" when she left 04:50:24 [BGreeNZ] Almost? 04:50:29 [Muz] Time: 04:48:31 Date: 08 Wednesday March 06 Timezone: GMT 04:50:46 [Muz] Hmmm, when do the post offices open :/ 04:53:54 [nikki_] you can check on postoffice.co.uk or something 04:54:12 [Muz] I'll just wait for some generic "normal" time 04:54:18 [Muz] 11am should be good I reckon 05:05:24 [BGreeNZ] * BGreeNZ is confused by the seemingly apparant lack of anything that resembles a "Lookup" button in Picard 0.6.0, despite repeated assertions by the documentation that it exists 05:08:48 [BGreeNZ] There is a "Lookup CD" button, but no obvious "Lookup" button :( 05:09:23 [BGreeNZ] Oh, wait! There it is! :-o 05:09:34 [Muz] Ooooh, I wanna ride you like the animal you are :D 05:09:38 [Muz] * Muz <3 hair metal 05:09:51 [Muz] Muz is now known as HairMetalListene 05:10:02 [HairMetalListene] * HairMetalListene goes off to do some more perl 05:10:12 [toxickore] toxickore has quit 05:10:17 [HairMetalListene] HairMetalListene is now known as Muz 05:10:27 [Pipian] Pipian has quit 05:13:08 [TheDarkOne] TheDarkOne has joined #musicbrainz 05:13:52 [TheDarkOne] when i try to compile python-musicbrainz i'm getting an error that it can't find libLoadLibrary.so, any idea? 05:14:22 [oqp] oqp has joined #musicbrainz 05:19:02 [BGreeNZ] * BGreeNZ shugs 05:20:05 [TheDarkOne] only reason i can think of is because it's gentoo lol 05:20:56 [Muz] Give it a week to compile 05:21:06 [TheDarkOne] ? 05:29:51 [BGreeNZ] Bye-bye Picard :( "Tagger has caused an error in KERNEL32.DLL" 05:31:58 [BGreeNZ] ..and again :'( 05:33:02 [Muz] * Muz <3 MBTagger 05:33:10 [Muz] Pish posh to Picard I say 05:34:12 [nikki_] * nikki_ likes picard because she has albums and not random downloaded mp3s 05:34:26 [Muz] I have albums too 05:34:36 [Muz] I rarely if ever obtain single tracks 05:34:47 [nikki_] mbtagger always mixes up albums with the same name 05:34:57 [Muz] That's why I do it manually 05:35:05 [nikki_] slooooow 05:35:11 [Muz] I find it therapeutic : P 05:35:37 [TheDarkOne] what does libLoadLibrary.so belong to? anyone know? 05:35:49 [BGreeNZ] Well, it took long enough for me to get it to actually running on 98SE / ME, that I thought I'd better give it a try 05:36:05 [BGreeNZ] TheDarkOne: Have you tried looking in CVS? 05:36:35 [BGreeNZ] 05:38:33 [BGreeNZ] Of course, that doesn't actually work when it's a shared library that isn't actually part of Picard :( 05:38:56 [TheDarkOne] can't find it 05:38:58 [TheDarkOne] Picard? 05:40:12 [BGreeNZ] Hah! Look at the #1 Google result for libLoadLibrary! 05:40:38 [TheDarkOne] i know 05:40:53 [TheDarkOne] i cant find anything beyond a log from here for libLoadLibrary.so on anywhere 05:41:08 [BGreeNZ] TheDarkOne: Or, tunepimp or whatever it is you're working on 05:41:23 [TheDarkOne] it's for xmms2covers 05:41:31 [TheDarkOne] it gets the album art for every album 05:41:42 [BGreeNZ] Try searching for it without the .so (use the link above) 05:41:47 [TheDarkOne] i did 05:41:49 [TheDarkOne] nothing relevant 05:42:51 [TheDarkOne] which is why i'm completely lost. 05:42:58 [BGreeNZ] With a name like that, I'd assume it's part of gcc? 05:43:11 [TheDarkOne] hmm 05:43:14 [TheDarkOne] lemme check 05:43:45 [BGreeNZ] A library for loading libraries, perhaps? 05:44:45 [BGreeNZ] (if that really is the case, I doubt it'd be much use when linked dynamically) :) 05:45:39 [BGreeNZ] * BGreeNZ is a Pascal programmer, so he doesn't have to deal with all that, er, stuff 05:49:56 [TheDarkOne] not in gcc 05:52:03 [SenRepus] ramen noodles at bedtime... worst idea ever 05:52:09 [SenRepus] im going to wake up from heartburn tonight 05:52:50 [SenRepus] Tum tahTum Tum TUMS! 05:56:16 [SenRepus] i miss the old chalky tasting tums, now they are loaded with sugar, or at least i think its sugar judging from how bad it makes my tooth hurt 05:56:52 [SenRepus] BGreeNZ: woot i learned pascal in high school 05:57:31 [BGreeNZ] SenRepus: I did too, but it wasn't in class ;-) 05:57:53 [SenRepus] mine was 05:57:58 [SenRepus] class of 9 kids 05:58:04 [BGreeNZ] Up until then, I was hacking away in <gulp> QBASIC 05:58:17 [TheDarkOne] SenRepus: you have any idea about libLoadLibrary.so? 05:58:47 [SenRepus] no, sorry, ive since stopped programing all together and run windows ME 05:58:55 [TheDarkOne] ... 05:59:17 [Muz] I learnt Logo in school... 05:59:56 [SenRepus] i only screwed with HTML before i learned pascal, took some java in college but i just find programing too dull to follow 06:01:22 [TheDarkOne] so no one has any clue what libLoadLibrary.so belongs to 06:02:44 [SenRepus] of the 4 of us here, no. there are other people who would have a better chance of konwing who arent here right now 06:03:01 [BGreeNZ] TheDarkOne: I just asked my Debian firewall / router, and it didn't know either 06:03:13 [SenRepus] its kind of nighttime in america and for some reason the people from over the lake are always gone when its night here 06:04:02 [BGreeNZ] TheDarkOne: What directory is it in? 06:04:39 [BGreeNZ] or, doesn't your computer know anything about it either? 06:04:53 [TheDarkOne] oh, ctypes requires it 06:04:57 [TheDarkOne] i don't have a clue 06:05:25 [BGreeNZ] did you try find / -iname "*libload*" ? 06:06:13 [TheDarkOne] /usr/local/home/root/X11R6/xc/programs/Xserver/hw/xfree86/loader/libloader.a 06:06:19 [TheDarkOne] thats it so far. 06:06:36 [TheDarkOne] some kde stuff 06:06:43 [TheDarkOne] klibloader 06:06:46 [BGreeNZ] :) 06:06:54 [TheDarkOne] nothing relevant yet 06:07:13 [SenRepus] night everyone 06:07:24 [SenRepus] * SenRepus takes his tums and turns the lights out 06:07:25 [BGreeNZ] Maybe you should start more specific 06:07:53 [BGreeNZ] like with find / -iname "*libloadlib*" 06:07:55 [mustaqila] mustaqila has joined #musicbrainz 06:08:28 [Muz] Muz has quit 06:09:09 [BGreeNZ] The theory is, if you do manage to find it on your box, the path might give you a clue as to what it belongs to 06:09:20 [TheDarkOne] well i need it 06:09:28 [TheDarkOne] so i can use python-musicbrainz 06:09:59 [TheDarkOne] nothing 06:10:02 [BGreeNZ] Not sure, but you might have to login as root to find all files 06:10:31 [BGreeNZ] "find" will only find files that *you* can see 06:10:54 [SenRepus] woot! 06:11:16 [BGreeNZ] SenRepus: Woot!? 06:11:37 [TheDarkOne] i am su'd to root 06:12:47 [BGreeNZ] Oh. So you have absolutely no files that (case insensitively) match "*libloadlib*" :( 06:14:38 [TheDarkOne] thats right 06:15:36 [BGreeNZ] You say ctypes needs it? 06:16:11 [TheDarkOne] File "/usr/lib/python2.4/site-packages/ctypes/__init__.py", line 300, in __init__ 06:16:11 [TheDarkOne] self._handle = _dlopen(self._name, mode) 06:16:11 [TheDarkOne] OSError: libLoadLibrary.so: cannot open shared object file: No such file or directory 06:16:13 [TheDarkOne] last lines 06:22:14 [BGreeNZ] Hmm, 06:23:17 [BGreeNZ] Where does "LoadLibrary" appear in the source? 06:23:31 [TheDarkOne] source of what? 06:24:06 [BGreeNZ] Your program, or ctypes 06:24:31 [TheDarkOne] the program is python-musicbrainz 06:24:35 [TheDarkOne] i have no clue where in ctypes 06:25:33 [BGreeNZ] I'm just wondering if maybe "LoadLibrary" is a string that's passed around somewhere, that ends up being used to construct a (non-existant) filename 06:25:46 [BGreeNZ] Have you tried grepping for it? 06:26:50 [TheDarkOne] 1 sec 06:29:08 [TheDarkOne] from _ctypes import LoadLibrary as _dlopen 06:29:59 [SenRepus] SenRepus has quit 06:32:59 [BGreeNZ] Is that the only reference? 06:33:17 [TheDarkOne] and thats if it was windows nt or ce 06:36:04 [BGreeNZ] That makes perfect sense. All the references to libLoadLibrary on Google are in the context of Windows or cygwin, aren't they? 06:36:22 [TheDarkOne] i suppose 06:36:23 [TheDarkOne] but 06:36:28 [TheDarkOne] why am i getting this 06:36:32 [TheDarkOne] since im linux 06:36:52 [TheDarkOne] and when i commented out that line 06:36:52 [BGreeNZ] Yes, that is the question 06:36:54 [TheDarkOne] nothing changes 06:39:42 [BGreeNZ] that line seems to me to be importing LoadLibrary and renaming it to _dlopen 06:40:18 [BGreeNZ] so, what other references to "import .* _dlopen" are there? 06:40:19 [TheDarkOne] for windows ce 06:40:51 [TheDarkOne] elif _os.name == "posix": 06:40:51 [TheDarkOne] from _ctypes import dlopen as _dlopen 06:42:15 [BGreeNZ] So, the question now is to find out why the "from _ctypes import LoadLibrary as _dlopen" path is being executed 06:42:23 [TheDarkOne] it isn't 06:42:32 [TheDarkOne] because i still get the same error with that line commented out 06:43:02 [BGreeNZ] Did you recompile it? 06:43:45 [TheDarkOne] it's a .py file 06:43:53 [TheDarkOne] they compile on the fly 06:43:54 [TheDarkOne] like perl 06:43:56 [TheDarkOne] or tcl 06:44:13 [TheDarkOne] or java 06:45:07 [BGreeNZ] Yes, I'm well aware of that, but Python likes to keep compiled versions of all those .py files lying around 06:45:17 [TheDarkOne] .pyc 06:45:21 [TheDarkOne] or 06:45:22 [TheDarkOne] .pyo 06:45:57 [BGreeNZ] Yes, exactly. So, are you sure the .py file(s) are actually being recompiled? 06:46:27 [TheDarkOne] ok 06:46:32 [TheDarkOne] removed the compiled versions 06:46:36 [TheDarkOne] same error. 06:47:22 [BGreeNZ] Even with the only reference to "LoadLibrary" in the entire source commented out? 06:47:42 [TheDarkOne] yes 06:47:56 [BGreeNZ] Seems impossible 06:48:38 [BGreeNZ] Can you add something to that .py file to print a message on the screen? 06:48:38 [TheDarkOne] hmm 06:48:47 [TheDarkOne] i dont know python 06:49:59 [BGreeNZ] print "Hello" seems to work... 06:50:08 [TheDarkOne] it might be boehm-gc 06:50:08 [TheDarkOne] hmm 06:50:09 [TheDarkOne] 1 sec 06:50:22 [TheDarkOne] because dlopen-gc belongs to it 06:50:42 [TheDarkOne] dl open is dynamic link loading 06:50:45 [TheDarkOne] dlopen* 06:53:43 [TheDarkOne] which ties to ld 06:54:17 [BGreeNZ] * BGreeNZ installed Python for the sole purpose of playing Pysol :) 06:55:31 [BGreeNZ] * BGreeNZ then (somehow) managed to get Pysol to run under Python 2.3 or 2.4, including the sound server 06:55:45 [BGreeNZ] on Windows, that is 06:57:48 [BGreeNZ] Point is, there has to be some logical reason why your program is still looking for "libLoadLibrary.so" when it clearly shouldn't be. 06:58:05 [TheDarkOne] probably ties into dlopen 06:59:10 [BGreeNZ] Maybe, but that "LoadLibrary" text *has* to come from *somewhere* 07:00:08 [orogor] orogor has joined #musicbrainz 07:00:09 [BGreeNZ] And, the most logical place is that (commented out) line in whichever .py file it is. 07:01:13 [BGreeNZ] Python isn't, by chance, still running the old compiled version of your program, is it? 07:01:20 [TheDarkOne] nope 07:01:29 [TheDarkOne] im trying redoing binutils and libtools 07:01:49 [BGreeNZ] Can you force it to rebuild, just to be sure? 07:03:04 [TheDarkOne] i di 07:03:05 [TheDarkOne] d 07:03:17 [TheDarkOne] because i made a mistake and it gave an error 07:04:02 [BGreeNZ] Hmm. 07:07:40 [BGreeNZ] Still, in order for the text "LoadLibrary" to appear in that error message, it *must* appear somewhere in one of the source files or precompiled libraries. The compiler / interpreter can't just magically pluck it out of thin air 07:10:15 [BGreeNZ] Any chance you could make a out of the relevant files, starting with the one "LoadLibrary" appears in and the main file of your app? 07:10:33 [TheDarkOne] the LoadLibrary is in ctypes 07:10:41 [TheDarkOne] the app is python-musicbrainz 07:11:05 [BGreeNZ] Yes, but I can't see those. 07:12:41 [g0llum] g0llum has joined #musicbrainz 07:15:23 [intgr] intgr has quit 07:17:07 [BGreeNZ] I don't have those files on this computer (running WinME, btw) 07:18:21 [TheDarkOne] it doesnt seem to be anything relevant in those files :/ 07:21:50 [BGreeNZ] Then, how is it possible that you are getting that error message? Python can't just pluck that text out of nothing. It's deterministic, not magic 07:33:22 [intgr] intgr has joined #musicbrainz 07:35:42 [BGreeNZ] Where do I get python-musicbrainz, then? 07:40:38 [yalaforge] yalaforge has joined #musicbrainz 07:40:58 [yalaforge] TheDarkOne: still trouble with python-mb? 07:42:02 [mustaqila] I wonder what it;d be like to have one of those for this room 07:43:49 [yalaforge] well, whatever. if he returns tell him he's running the wrong version of ctypes 07:44:24 [yalaforge] 0.9.9.3 changed the API, which was obviously a bad idea. use ctypes 0.9.6 and it'll work 07:44:27 [yalaforge] yalaforge has quit 07:54:47 [Shrikey] Shrikey has joined #MusicBrainz 07:54:53 [Shrikey] Shrikey has quit 07:59:24 [BGreeNZ] BGreeNZ has quit 08:32:00 [luks] luks has joined #musicbrainz 08:45:44 [pankkake] pankkake has quit 08:55:20 [pankkake] pankkake has joined #musicbrainz 09:20:21 [Shepard`] Shepard` has joined #musicbrainz 09:36:59 [Shepard] Shepard has quit 09:36:59 [Shepard`] Shepard` is now known as Shepard 09:48:45 [sidd] sidd has joined #musicbrainz 09:50:41 [sidd] hey, I need some help trying to use MBQ_FileInfoLookup 09:50:54 [sidd] what arguments should i give with it? it doesnt say in the docs 09:51:16 [sidd] doing it with the perl module btw 09:54:45 [sidd] ah, never mind. found it in the docs 09:54:51 [sidd] * sidd hides 11:29:17 [luks] luks has quit 11:38:39 [Shepard`] Shepard` has joined #musicbrainz 11:45:48 [sidd] grawhhg! 11:46:01 [sidd] i still cant work out MBQ_FileInfoLookup 11:46:18 [sidd] anyone familiar with the perl module? i cant work out how to pass the arguments 11:46:47 [sidd] when i give them as an array like [artist,album,title] it doesnt seem to be working 11:55:54 [Shepard] Shepard has quit 11:55:55 [Shepard`] Shepard` is now known as Shepard 11:57:14 [HairMetalAddict] Used the perl module, but never that particular command, so no clue here. 12:00:32 [sidd] hm, well. i gather it would be the same for any query with multiple args 12:00:43 [sidd] i could just be doing something simple wrong 12:04:21 [HairMetalAddict] I use it in a script run every 1-2 months to compare already-tagged MP3s to the current information at MB (spelling, artist, caps changes and such), so I only send the MBIDs that are found in the MP3s. 12:04:26 [HairMetalAddict] $query = [$mp3info{'ambid'}]; 12:04:26 [HairMetalAddict] if (!$mb->query_with_args(MBQ_GetArtistById, $query)) { 12:04:47 [HairMetalAddict] $mp3info{'ambid'} is the artist MBID found in the MP3 12:05:21 [HairMetalAddict] in this example 12:05:59 [sidd] hrm 12:06:52 [sidd] okay, in 12:06:59 [sidd] it gives a bit list of parameters 12:10:25 [HairMetalAddict] I think ... 12:10:27 [HairMetalAddict] $query = [$artistname, $albumname, $trackname, $tracknum, $duration, $filename, $artistid, $albumid, $maxitems]; 12:10:45 [HairMetalAddict] With empty strings sent where you're not sending info. 12:10:55 [sidd] yeah. ill play a bit more 12:11:04 [HairMetalAddict] $mb->query_with_args(MBQ_FileInfoLookup, $query) 12:11:25 [HairMetalAddict] Not guaranteeing that, though. That's what I'd guess, not having tried it. 12:26:02 [LTjake] LTjake has joined #musicbrainz 12:27:04 [chocomo] * chocomo roll 12:27:57 [sidd] HairMetalAddict: worked it out 12:28:10 [sidd] i had to put a blank entry at the start of the array 12:28:12 [sidd] no idea why 12:28:24 [sidd] eg ["",artist,album,track] 12:30:51 [HairMetalAddict] Odd. But if it works... 12:33:36 [sidd] yeah, maybe the values arnt in the same order as listed in that link :/ 12:42:40 [HairMetalAddict] Or [0] doesn't get used in the array. Artist is [1] and so on... 12:42:54 [sidd] yeah, that too 13:10:01 [Jetpack] Jetpack has joined #musicbrainz 13:22:52 [GURT] g0llum: are you around? 13:27:00 [g0llum] <up 13:29:37 [chocomo] >down 13:29:48 [chocomo] scnr 13:32:21 [fctk] fctk has joined #musicbrainz 13:32:26 [fctk] hello 13:32:35 [chocomo] hi 13:32:44 [fctk] i'm new to mb moderations 13:32:50 [chocomo] wow your nick is.. misunderstanable 13:33:02 [fctk] what is the difference between "feat." and "pres." ? 13:33:24 [fctk] chocomo, my nick? 13:33:56 [sidd] hrm, how complete is the ASIN query feature? i mean, do most albums on musicbrainz have an ASIN? 13:34:38 [chocomo] asin query feature is turned off. only manual adds now 13:34:42 [fctk] on the cd cover it is written "pres.", but on mb it is written "feat." instead. should i correct it? 13:34:52 [chocomo] fctk: no 13:34:56 [chocomo] wait 13:35:02 [chocomo] wat does pres. mean 13:35:08 [fctk] i don't know 13:35:11 [_thom_] presenting? 13:35:23 [_thom_] or presents? 13:35:23 [fctk] maybe 13:35:42 [sidd] hurm, asin is turned off? are there plans to have it turned back on? 13:35:49 [chocomo] sidd: no 13:35:54 [fctk] anyway, if i click on "guess case" it corrects it to "Pres." with "P" uppercase 13:35:55 [chocomo] just manualy add therm 13:35:55 [_thom_] like if this artist being 'pres.' is their first time appearing on a record label? 13:36:14 [_thom_] eg a new actor/actress in a movie or something. I dunno 13:36:39 [fctk] ok i understood. i keep "feat.". 13:38:10 [fctk] 13:38:21 [fctk] it is the artist i am speaking about 13:38:26 [fctk] someone already created it 13:38:36 [chocomo] yarg 13:38:42 [chocomo] evil evil nebies (not you9 13:39:05 [fctk] it is a wrong thing making a new entry such as: "deijay presents svetla"? 13:39:11 [chocomo] ye 13:39:12 [chocomo] s 13:39:16 [fctk] ok 13:39:16 [chocomo] please do not 13:39:28 [chocomo] have you read ? 13:39:56 [chocomo] its a good place to start anyway :) 13:41:33 [fctk] it only speaks about "feat." 13:42:24 [ojnkpjg] bleh 13:42:28 [ojnkpjg] so i added this artist a while ago: 13:42:30 [ojnkpjg] Pink 13:42:30 [ojnkpjg] electronic netlabel musician on monotonik 13:42:44 [ojnkpjg] there is about a 20:1 bogus album ratio 13:42:53 [ojnkpjg] i guess people just ignore the resolution note 13:43:18 [chocomo] they are asses 13:43:23 [chocomo] hatasses 13:43:30 [chocomo] erh asshats 13:43:39 [ojnkpjg] i wish modbot would mail you about subscribed artists on va albums 13:43:47 [chocomo] god yes 13:43:50 [ojnkpjg] i should enter a trac ticket 13:43:52 [ojnkpjg] if there isn't one 13:43:58 [chocomo] tere should be 20 13:44:01 [chocomo] already 13:44:02 [ojnkpjg] :) 13:44:03 [chocomo] :) 13:44:29 [Shepard] :) 13:44:37 [ojnkpjg] :) 13:44:47 [GURT] (: 13:44:56 [ojnkpjg] ok, now it's ruined for everyone 13:45:08 [chocomo] :( 13:45:12 [ojnkpjg] hehe 13:45:17 [GURT] sorry my chair swivles 13:46:05 [ojnkpjg] You Make Me Sick TRM Pink (Change) 13:46:07 [ojnkpjg] that's about right 13:46:21 [chocomo] :( 13:46:26 [chocomo] dakar ojnk 13:46:43 [chocomo] I haver the same prolem with mana / maná and Duke Ellington 13:48:25 [Shepard] err... 13:49:12 [chocomo] err 13:50:33 [Shepard] that's strange: 13:50:37 [Shepard] the last track is being added 13:50:42 [Shepard] but it has a disc id 13:50:48 [Shepard] guess he submitted that afterwards 13:51:10 [Shepard] yes he did 13:51:11 [Shepard] hmmm 13:51:19 [ojnkpjg] i only just now noticed the "approve" link after zout's email 13:57:59 [inhouseuk] inhouseuk has joined #musicbrainz 14:02:08 [orogor] orogor has quit 14:04:33 [chocomo] jeg burde egentlig vaske opp 14:04:44 [chocomo] ska se hvor lenge jeg can utsette det 14:06:02 [chocomo] chocomo has left #musicbrainz 14:09:32 [chocomo] chocomo has joined #musicbrainz 14:14:01 [g0llum] g0llum has quit 14:15:11 [Amblienne] Amblienne has joined #musicbrainz 14:19:08 [Amblin] Amblin has quit 14:31:00 [Shepard] some votes please: 14:31:13 [zout] zout has joined #musicbrainz 14:31:17 [zout] hi 14:33:27 [Shepard] hi zout 14:34:25 [Shepard] where was this guess case test suite again... 14:34:32 [zout] I could use some votes on 14:35:16 [chocomo] wow i could remember when we hit mod nr 3000000 14:35:36 [zout] Shepard: can't find it in my mail 14:42:19 [fctk] another question 14:42:26 [fctk] i have a cd made by a certain artist 14:42:36 [fctk] all tracks except one are made by that artist 14:42:47 [fctk] how can i show one track is not made by him? 14:43:02 [sidd] <-- bed time 14:43:06 [sidd] sidd is now known as sidd|sleep 14:43:18 [zout] click on 'Show artists' 14:43:25 [zout] and then change the track artist for that track 14:43:33 [zout] with a modnote of course 14:43:58 [fctk] ok thanks 14:44:13 [g0llum] g0llum has joined #musicbrainz 14:45:27 [zout] which album are you talking about? 14:46:33 [Shepard] g0llum? 14:50:08 [g0llum] yes Shepard 14:50:37 [Shepard] GC will change "Bla (vs. Bla)" to "Bla (vs.Bla)" :] 14:51:22 [g0llum] i you want this, i can code that, yes :-) 14:51:29 [g0llum] or am i misunderstanding you? 14:51:30 [g0llum] *g* 14:52:33 [g0llum] ticket, please 14:53:27 [Shepard] can you link me to the test suite on the server again, please? 14:54:15 [g0llum] 14:54:33 [g0llum] this link needs to be featured somewhere prominent, i guess 14:57:29 [Shepard] argh I even had it in my bookmarks 15:00:49 [HairMetalAddict] <-- The album title that Guess Case will never get correct. ;-) 15:01:26 [HairMetalAddict] Exceptions to "feat." are impossible to code in, of course. 15:03:28 [GURT] should i have left this as fixed or should it be marked as fixed when the fix is uploaded and on the site? 15:06:26 [oqp] oqp has quit 15:08:26 [Shepard] HMA: why is this an exception? 15:08:57 [toxickore] toxickore has joined #musicbrainz 15:09:19 [GURT] The album title is a joke 15:09:49 [Shepard] indeed ;) 15:12:11 [g0llum] GURT: technically, the ticket can be closed. but someone has got to check them into CVS. i'll do that in a sec 15:12:36 [GURT] ok, thanks. i didn't know if it was marked closed if it would get over looked or something 15:14:11 [Shepard] if the album title really is an exception, why did noone yet add that to the annotation? lazy asses 15:14:28 [GURT] yeah.. lazy asses 15:15:15 [g0llum] thing don't do themselves here :) 15:15:31 [g0llum] you have to bug (and bug times again) people who have cvs access to do it 15:15:37 [GURT] that dosen't stop people from waiting for them to though :-P 15:16:34 [g0llum] nobody is subscribed to the other category, that's why :) 15:16:55 [g0llum] zouts version is the one to use? 15:17:05 [GURT] well 15:17:10 [GURT] mine uses the old search 15:17:38 [GURT] it'd be nice to have the option for both on the page as long as we have the option of the search on the search pages 15:17:45 [g0llum] yes 15:18:33 [zout] yes, and I like the old search better! 15:18:44 [GURT] (me too) 15:19:35 [zout] something else: why is removing a release date an autoedit for automoderators these days? 15:19:57 [zout] RFV: 15:22:31 [GURT] i wish when i clicked on an albums "view album edits" that it showed me track edits on that album as well 15:22:59 [GURT] is that something taht will be changed? or is there a reason that it dosen't already? 15:28:00 [fctk] fctk has quit 15:28:44 [zout] GURT: if I understand correctly, that was too much a strain on the database 15:35:32 [Mudcrow] votes please 15:36:16 [Mudcrow] it always helps to spell the bands name correctly when searching :/ 15:38:42 [luks] luks has joined #musicbrainz 15:38:51 [GURT] dosen't matter witht he new search! i think you're more apt to get the band you're looking for if you spell it wrong. :D 15:39:00 [GURT] hi luks 15:39:15 [luks] hey 15:43:39 [Mudcrow] I searched twice for pink torpedos got no results, only noticed after I added the album its spelt Pink Torpedoes 15:48:33 [GURT] oysterhead is playing bonnaroo, i hope they will tour again 15:48:34 [Mudcrow] I'm surprised the album is already in MB, Pink Torpedoes fan base must have doubled. Theres at least two of us now 16:09:16 [luks] fuchs: ping 16:09:28 [fuchs] pong 16:09:47 [luks] do you have any idea how to fix the approve vs modbot issue? 16:10:11 [luks] because it causes more than just an added note :/ 16:10:21 [luks] modpending for those entities is -1 16:10:52 [fuchs] *g* 16:11:01 [fuchs] typical problem where you add semaphores ;) 16:11:08 [fuchs] just kidding 16:11:16 [fuchs] no idea yet 16:12:07 [luks] i would have to ask ruaok to fix the broken modpending fields 16:12:23 [g0llum] GURT: i've checked in the firefox plugins 16:12:29 [g0llum] can you check them quickly? 16:12:31 [g0llum] 16:13:21 [fuchs] luks: easiest way would be to add another status for TOBEAPPROVED or something, and let those approves be handled by modbot as well 16:14:24 [fuchs] but that would reduce the benefits we earn from this feature :/ 16:14:39 [luks] or change modbot's loop to something like: while (1) { $mod = load_one_open_mod(); if (!$mod) break; ... } 16:14:47 [luks] which would slow down it :/ 16:14:47 [fuchs] urgs 16:14:55 [fuchs] yes, a lot 16:15:01 [luks] i know 16:15:46 [luks] or some flag at start and end of modbot's loop. so we can simply disable approving while modbot runs 16:16:22 [fuchs] where would you put such flag? 16:16:29 [luks] that's the problem 16:17:05 [fuchs] maybe when ModBot sees a STATUS_APPLIED mod, _then_ do the reloading of only this mod 16:17:31 [luks] ModBot can't see STATUS_APPLIED mod 16:17:51 [nikki_] * nikki_ returns 16:17:55 [luks] it has old data, so it sees only open or tobedeleted mods 16:17:56 [fuchs] it sets STATUS_APPLIED for mods 16:19:39 [fuchs] luks: line 319ff, there it knows about the current state according to the old data 16:20:40 [fuchs] luks: in this loop ignore all mods that have APPLIED and are updated 16:21:47 [fuchs] luks: you know what i mean? 16:21:53 [luks] yes 16:21:53 [sidd|sleep] sidd|sleep has quit 16:22:42 [fuchs] so in if ($newstate == STATUS_APPLIED) first load the mod again from the moderation_open table and decide whether to silently drop it or continue it's processing 16:23:08 [luks] yep, that's probably the only thing we can do 16:23:12 [fuchs] i think there are max a few 100 mods applied in every modbot run 16:25:15 [fuchs] luks: we could also add a table lock, so approves are not possible durring modbot runs as you said, but that would mean you can't add any mod 16:26:31 [luks] yes, that would be too restrictive 16:31:20 [GURT] g0llum: i have no idea what to do with that page :P 16:33:21 [g0llum] you can click on the files and review the contents :) 16:33:53 [g0llum] i've added a lucene "u" into the icons 16:34:05 [g0llum] replaced the old plugin with the lucene urls 16:34:10 [GURT] ah,. i tried to download them all to throw in the search plugin folder 16:34:11 [g0llum] added ...-old plugins and images 16:34:17 [GURT] it looks fine to me 16:34:53 [GURT] thanks 16:34:58 [g0llum] what do you think about the images, ok? 16:34:59 [g0llum] 16:35:18 [GURT] yeah i saw them. looks good 16:35:20 [g0llum] just a quick fix 16:35:38 [fuchs] luks: hmm, still, this would only reduce the probability of the occurence of such conflicts 16:35:55 [fuchs] :) 16:36:19 [g0llum] GURT: i had not installed them yet, they are quite useful in fact :) 16:36:27 [GURT] actually 16:36:43 [GURT] it looks like the new .scr files need to have .gif changed to .png 16:37:21 [fuchs] luks: so going one step down in the tree, and do the handling in Moderation::CloseModeration by checking if the "UPDATE moderation_open SET status = ? WHERE id = ?" succeeded would be even better 16:37:33 [fuchs] and less code 16:37:45 [g0llum] no, the pngs are just in there because png supports layers 16:37:58 [GURT] updateIcon=" " should be <browser 16:37:58 [GURT] update=" " 16:37:58 [GURT] updateIcon=" " 16:37:58 [GURT] no? 16:38:21 [g0llum] for the chance that someone does not like the icons and i have to re-create them... 16:38:28 [g0llum] oh, let me see 16:38:35 [GURT] but the .src will see the old icons and ignore the .pngs 16:38:51 [fuchs] luks: we just had to return a success status in CloseModeration and check this in ModBot, and continue with the next mod in the loop 16:39:20 [fuchs] +in case it failed 16:39:38 [luks] fuchs: modbot checks for prerequisites and calls DaniedAction, *before* calling the CloseModeration 16:40:05 [fuchs] gnarf 16:40:22 [g0llum] no, .GIF is fine. i've exportem them from fireworks, which uses PNG to save layers 16:40:44 [g0llum] so, ignoring the PNG is quite what i wanted :) 16:41:30 [GURT] there are 1 set of 3 .src files -- shouldn't there be 2? 16:41:50 [GURT] 1 set for old search 1 for lucene 16:42:41 [g0llum] you have to select the branch, else you won't see them 16:42:54 [g0llum] tricky stuff this cvs webview huh? :) 16:42:58 [GURT] ok.. im not familiar with this is all 16:43:07 [g0llum] again, 16:44:08 [GURT] im just going to trust you how about that :) 16:46:03 [Brandon_72] Brandon_72 has joined #musicbrainz 16:46:27 [Brandon_72] Brandon_72 has left #musicbrainz 16:46:46 [GURT] i still think updateIcon=" " 16:46:46 [GURT] should be .png because the PNG files hae the U.. and when i install these i wont see the U because i already have musicbrainzalbum.gif 16:49:47 [GURT] i'll copy the files locally and test it out after i eat some lunch 16:50:36 [g0llum] g0llum has quit 17:13:03 [chocomo] rofl 17:15:22 [ojnkpjg] i like this one 17:15:23 [ojnkpjg] 17:24:56 [Shepard] aww mr slowhand 17:27:39 [chocomo] uff lol 17:47:54 [yalaforge] yalaforge has joined #musicbrainz 18:34:47 [yalaforge] yalaforge has quit 18:44:54 [kcapteJ] kcapteJ has joined #musicbrainz 18:45:14 [Jetpack] Jetpack has quit 18:45:18 [kcapteJ] kcapteJ is now known as Jetpack 19:11:03 [chocomo] chocomo has quit 19:15:12 [chocomo] chocomo has joined #musicbrainz 19:24:36 [Jetpack] Jetpack has quit 19:27:37 [chocomo] erh.. wtf is going on with asins 19:28:51 [HairMetalAddict] "Dead ASIIIIINS can't take care of themselves...." 19:29:31 [chocomo] hair: you don't care about this, its ããã¢ãã 19:29:35 [chocomo] :p 19:31:44 [HairMetalAddict] Squiggly lines! Run for yer lives!!! 19:31:47 [GURT] whats the ASIN issue? 19:32:36 [nikki_] * nikki_ can read it fine :P 19:32:57 [Jetpack] Jetpack has joined #musicbrainz 19:33:20 [chocomo] album has an asin but cover is not turinig up, same is true for albums that I *knew* had album art before 19:33:25 [chocomo] hæ? 19:33:26 [chocomo] 19:33:30 [chocomo] wtf bold 19:33:39 [chocomo] test 19:33:42 [chocomo] jøss da 19:33:53 [nikki_] * nikki_ is glad she has colours/styles off 19:33:59 [chocomo] good idea 19:34:04 [chocomo] it was an accident anywa 19:34:05 [chocomo] y 19:34:14 [GURT] well.. the ASIN is still correct 19:34:14 [nikki_] exactly 19:34:34 [nikki_] * nikki_ watches èªæ®ºãµã¼ã¯ã« again 19:39:08 [yllona] yllona has joined #musicbrainz 19:41:59 [chocomo] oh, soall mu hard work to linbk albumsto their correct release is just 'so it will work' then? if you haven't noticed. there is no 'purchase' type link. even if the coverart didn't work, there should be a bye link. but there is no point 19:42:03 [chocomo] apparently whatever 19:42:18 [chocomo] buy 19:43:00 [chocomo] chocomo has left #musicbrainz 19:44:24 [LTjake] LTjake has quit 19:49:29 [GURT] they speak as if they are the only one that adds ASINs to albums.. 19:55:02 [zout] maybe someone with database access can do a Add ASIN AR query to see who add them the most? 19:55:36 [nikki_] why? 19:59:26 [zout] response to GURT's remark 19:59:38 [nikki_] oh 20:00:02 [nikki_] * nikki_ checks the chatlogs 20:00:57 [nikki_] I see. 20:02:41 [zout] It's really not important 20:02:55 [zout] although I'm curious how many ASIN's already have been added 20:03:29 [nikki_] I can do that much 20:04:03 [zout] I'd like that 20:04:32 [nikki_] although right now my replication is a couple of days out of date 20:04:39 [luks] 5920 ASIN URLs 20:04:48 [nikki_] fine 20:04:52 [luks] :) 20:04:54 [nikki_] * nikki_ won't do that much 20:08:49 [zout] thanks! 20:08:57 [zout] luks: 20:09:25 [luks] errrr 20:11:18 [luks] i can't do much about that :/ 20:12:41 [luks] we shouldn't push ruaok to rename newsearch.html to oldsearch.html :) 20:23:11 [zout] of course not 20:24:18 [luks] nope, that was wrong tense used in that sentence :) 20:26:06 [luks] * luks goes back to coding... 20:32:03 [oqp] oqp has joined #musicbrainz 20:55:03 [yllona] has ruaok left for burning man? 20:55:30 [nikki_] I thought that was august 20:56:47 [chocomo] chocomo has joined #musicbrainz 20:57:37 [yllona] oh yeah, sorry i was thinking the msuic festival in Austin, SXSW. 21:05:58 [oqp] oqp has quit 21:08:04 [chocomo] hmm 21:08:33 [chocomo] * chocomo wants nikki to give him some tips on what to add to tag 21:09:00 [chocomo] has to be bouncy 21:10:10 [TheDarkOne] TheDarkOne has left #musicbrainz 21:11:10 [sandeen] sandeen has left #musicbrainz 21:12:18 [nikki_] 21:12:36 [chocomo] \o/ 21:20:40 [chocomo] wtflol 21:24:22 [oqp] oqp has joined #musicbrainz 21:25:50 [oqp] oqp has quit 21:35:33 [rowaasr13] rowaasr13 has joined #musicbrainz 21:35:36 [chocomo] :O 21:35:44 [chocomo] crtl+shift click in opera 21:35:46 [chocomo] <333 21:36:22 [rowaasr13] 31 applied changes for Kajiura in subscription report. Why did I knew that it is caps again even without looking? 21:36:34 [rowaasr13] I hate, hate, hate it. 21:36:43 [chocomo] because people lare idiots 21:36:45 [chocomo] grrr 21:37:06 [rowaasr13] Editing it back now. 21:37:26 [rowaasr13] Any news on making that not auto-mod or way of quick reverting around? 21:37:51 [chocomo] album locking is part of scmeo 2 I think 21:37:59 [chocomo] schema 21:38:21 [rowaasr13] That's interesting, but some sort of generic revertin facility would be good too. 21:38:38 [nikki_] * nikki_ hates trying to read kanji names from screenshots 21:38:54 [chocomo] row: that will also pe part of schema 2 21:38:54 [rowaasr13] Damn, one of those albums had those caps edit just a two days ago and now I'm reverting it once again... 21:38:58 [chocomo] total wiki style 21:39:22 [rowaasr13] Oh, what a joy! That will be just the best. 21:39:27 [chocomo] :) 21:40:36 [rowaasr13] nikki_, yeah, especially when you don't remember it and kanji is too blurry to accurately count strokes for search. 21:41:00 [g0llum] g0llum has joined #musicbrainz 21:41:33 [nikki_] rowaasr13: well, I can't read many kanji to start with :( 21:42:00 [rowaasr13] I have one artist in my subscription list whose reading for name is most likely wrong - I'm 90% sure of that. I have scan of booklet with romanization and string length is quite different from what is set as romanization now, but it too blurry to read what it actually is! 21:42:57 [nikki_] <- this is what I'm trying to understand 21:43:18 [nikki_] or rather, what the artist for sore de wa minsan sayounara is 21:43:50 [nikki_] er, minasan 21:44:12 [chocomo] I wanted to ask qacuestion about japanese 21:44:15 [nikki_] oh? 21:44:25 [chocomo] I know that trhey need open vovels 21:44:28 [rowaasr13] Speaking of scans, anybody know about good source of cover/booklet scans for japanese works? 21:44:38 [chocomo] and the only consonatn that can be is n 21:44:54 [yllona] rowaasr13: giantrobot.com 21:45:02 [chocomo] (exept for chu tsu etc) so how is 'teMpura possible? 21:45:03 [rowaasr13] n' consonant is actually a syllable too. 21:45:15 [rowaasr13] Because it is tenpura. 21:45:15 [chocomo] yes exactly 21:45:19 [nikki_] chocomo: it's written tenpura 21:45:19 [chocomo] but what about m? 21:45:24 [chocomo] ahaaaaa 21:45:26 [chocomo] heee 21:45:32 [nikki_] the syllabic n is pronounced as m or n 21:45:39 [chocomo] akuratt 21:45:50 [rowaasr13] n is sometimes romanised as m before n t and p because it sounds slightly different in this position. 21:46:03 [nikki_] I thought it was p, b and m 21:46:23 [rowaasr13] Ahem... I'm thinking slowly today. 21:46:25 [chocomo] hmm ranma? 21:46:29 [chocomo] ramma? 21:46:31 [chocomo] lol 21:46:41 [nikki_] n and t are articulated in the same place as n, so it wouldn't make much sense to change it to m there... 21:46:50 [nikki_] duh 21:46:53 [rowaasr13] Right. You can write it as Ramma too. :) 21:46:54 [nikki_] I write silly things sometimes 21:47:01 [chocomo] yea I noticed 21:47:20 [chocomo] the pronounicatinon in the anime is very simmialr to 'ramma' yes 21:47:23 [rowaasr13] Actually in russian's Polivanov transcription this n->m change is mandatory so you HAVE to write it that way. 21:47:49 [chocomo] polivamov? 21:48:09 [rowaasr13] Nay, Polivanov is authors name - like Hepburn. 21:48:16 [chocomo] Ã¥ sÃ¥nn :p 21:50:14 [chocomo] * chocomo should do the dishes :( 21:50:25 [chocomo] I know! I'll take minimoni with me! 21:50:34 [rowaasr13] Name up to featuring is Kuramoto Mitsuru. Sorry, no kanji in this mIRC. 21:51:54 [rowaasr13] I have to go AFK for a while now. I'll be back in a hour. 21:53:00 [nikki_] ahh 21:53:02 [nikki_] thanks 22:06:02 [oqp] oqp has joined #musicbrainz 22:13:11 [Shepard] * Shepard is back :o 22:13:53 [nikki_] hi shep 22:14:34 [Shepard] huhu 22:16:26 [nikki_] oooh. 22:16:38 [nikki_] l'âme immortelle are releasing another single :o 22:23:33 [toxickore] toxickore has quit 22:29:15 [Mudcrow] these amazon asin errors are really starting to piss me off 22:31:42 [oqp] oqp has quit 22:35:16 [Shepard] .oO( some people have problems... ) 22:36:10 [Mudcrow] seems almost every asin i've entered either now doesnt show as anything or say there is no image 22:37:21 [GURT] its because they know you go back to check on it and want to play games with you 22:42:35 [zout] zout has quit 22:46:15 [Shepard] someone stole my all-time last.fm charts 22:47:03 [Mudcrow] * Mudcrow goes off to listen to Atilla The Stockbroker & John Otway - Cheryl: A Rock Opera (An everyday tale of Satanism, Trainspotting, Drug Abuse and Unrequited Love) 22:47:49 [Mudcrow] and a goat 22:49:30 [toxickore] toxickore has joined #musicbrainz 22:49:32 [fuchs] Shepard: not me :) 22:49:42 [Shepard] sure, who else 22:51:21 [fuchs] * fuchs points at mustaqila 22:51:27 [Shepard] gib sie zurück, du sau! 22:51:28 [mustaqila] What?2 22:51:41 [fuchs] i just stole the switch ;) 22:51:54 [mustaqila] Errr? 22:51:56 [mustaqila] Jesus 22:52:01 [mustaqila] I just wake up and I'm confused 22:52:17 [fuchs] nice start of the day :) 22:52:29 [mustaqila] It was a nap ; ) 22:53:34 [fuchs] Shepard: hmm seem that not only your charts have been stolen ;) 22:53:43 [fuchs] +s 22:53:48 [mustaqila] * mustaqila thinksa clustefr may be broken 22:53:50 [mustaqila] * mustaqila looks 22:54:20 [Shepard] chart robbers! 22:54:20 [mustaqila] Seeems fine to me... 22:54:21 [fuchs] Shepard: now everything is back :) 22:54:58 [Shepard] better look if something is missing 22:55:03 [fuchs] mustaqila: according to the forum, a switch "crashed" 22:55:11 [mustaqila] Ah 22:55:18 [mustaqila] * mustaqila hasn't visitd the forum yet 22:55:22 [fuchs] everything was offline 22:56:02 [fuchs] Shepard: wtf is james labrie? 22:56:19 [fuchs] . o O ( Käsejames? ) 22:56:23 [Shepard] the singer of Dream Theater 22:56:31 [g0llum] löl! 22:56:34 [mustaqila] La Brie! 22:56:38 [mustaqila] * mustaqila likes cheese 22:57:18 [g0llum] <-- o graus, warum muss ich mir das nur antun 22:58:47 [g0llum] * g0llum excessively checks his mails and news sites every 2 minutes because the article is so boring. 22:59:47 [g0llum] .. and the mb channel ;) 23:00:48 [g0llum] ... 23:00:53 [fuchs] chatkiller! 23:01:11 [g0llum] erm. 23:01:22 [g0llum] why? 23:01:44 [fuchs] that what i should ask you :) 23:01:59 [g0llum] where is everybody? hello? 23:03:05 [fuchs] reading boring MS ads, i fear ;) 23:03:58 [rowaasr13] Back. nikki_, figured out that "featuring" yet? 23:04:39 [g0llum] checking their last.fm stats? :p 23:06:01 [g0llum] * g0llum wonders why MBChatLogger does not bitch about MS or Windows inside an URL 23:06:58 [g0llum] lazy hack. a GC developer would be fired within days if he worked like them. 23:07:42 [MBChatLogger] windows == pure evil 23:07:42 [Shepard] windoze! 23:08:12 [g0llum] he he 23:08:25 [g0llum] powerpoint 23:08:33 [g0llum] doh. 23:08:46 [Shepard] jetzt hast du dich verraten :) 23:08:47 [g0llum] this is the *evilest* of programs 23:09:17 [Shepard] * Shepard ripped seine alten Jugendsünden 23:09:34 [fuchs] shep the ripper 23:09:53 [Shepard] be aware! 23:10:17 [g0llum] where is gecks when i need a distraction? yesterday, i was @work when i was writing those mails 23:10:38 [Shepard] I have some Die Prinzen CDs here and I will make use of them! 23:11:09 [g0llum] O.o 23:13:02 [Shepard] bwahaha 23:15:55 [orogor] orogor has joined #musicbrainz 23:21:30 [tolsen] tolsen has quit 23:26:37 [rowaasr13] Damn 31 changes to Kajiura - all caps. Now 9 changes to Kawai. All caps too. 23:27:00 [chocomo] :( 23:31:20 [tolsen] tolsen has joined #musicbrainz 23:34:13 [chocomo] * chocomo pokes WolfsongOpera, you wanted to know why the japanese have Caps? 23:34:20 [chocomo] or lack there of 23:35:45 [SenRepus] SenRepus has joined #musicbrainz 23:36:11 [Shepard] hi mo, hi sen 23:36:51 [rowaasr13] Well, Kawai's album was full of errors in romanization though, so that gave me the chance to fix them. 23:36:58 [chocomo] :) 23:37:03 [chocomo] HISEN!!! 23:37:05 [chocomo] hi shep 23:40:09 [chocomo] hm 23:40:10 [chocomo] 23:43:28 [ChrisBradley] ChrisBradley has joined #musicbrainz 23:43:46 [SenRepus] himo@!! 23:43:51 [chocomo] !!! 23:43:57 [chocomo] :DD 23:45:30 [futurist] futurist has quit 23:49:35 [luks] luks has quit 23:57:15 [toxickore] toxickore has quit
http://chatlogs.musicbrainz.org/musicbrainz/2006/2006-03/2006-03-08.html
CC-MAIN-2014-10
refinedweb
13,106
71.28
Docs | Forums | Lists | Bugs | Planet | Store | GMN | Get Gentoo! Not eligible to see or edit group visibility for this bug. View Bug Activity | Format For Printing | XML | Clone This Bug scourge 0.13 is out, I tried to just rename the ebuild but I get a segmentation fault error while compiling. I extracted from the source a piece of code where g++ fails: #include <set> class Fog { std::set<void *> players[600][600]; } This is the error it gives: g++: Internal error: Segmentation fault (program cc1plus) Please submit a full bug report. See <URL:> for instructions. Platform here is i686 gcc-3.4.5 Adding [email protected] to get help on that Scourge 0.13 needs >= gcc 4 there is a thread on the scourge forums... I'll try to track down the fix and see if we can backport it. Added to CVS. I'll be rolling a new patchset which will include this very soon. Just an update, 3.4.6-r1 has been added to the tree *** Bug 134274 has been marked as a duplicate of this bug. *** 0.13 is now in portage. Thanks for report gcc-4 is unmasked now and it must be used to compile it
http://bugs.gentoo.org/126279
crawl-002
refinedweb
204
85.79
Docker registry bindings for The Update Framework Project description Docker registry bindings for The Update Framework in Python. Uses dxf to store TUF metadata and target files in a Docker registry. - Easy-to-use Python module and command-line tool. - Distribute your data with the security and trust features of The Update Framework. - Backed by the scalability and flexbility of a Docker registry. - No extra moving parts: just Python client-side and Docker registry server-side. - Docker client and daemon not required. - Notary not required. See this issue for discussion. - Supports Docker registry schema v1 and v2. - Works on Python 2.7 and 3.4. Examples Command-line example This assumes at least DTUF_HOST has been set to the hostname of a Docker registry (see Usage below). You may need to set DTUF_USERNAME and DTUF_PASSWORD too depending on the registry you’re using. You can run your own registry or use a hosted one such as Docker Hub (registry-1.docker.io). If you want to run your own, see this example script or the unit tests. There are also full instructions available here. # On the master machine $ echo 'Hello World!' > demo.txt $ dtuf create-root-key fred/demo $ dtuf create-metadata-keys fred/demo $ dtuf create-metadata fred/demo $ dtuf push-target fred/demo demo.txt demo.txt $ dtuf push-metadata fred/demo # pub key is in dtuf_repos/fred/demo/master/keys/root_key.pub # distribute it out-of-band # On some other machine $ dtuf pull-metadata fred/demo root_key.pub demo.txt $ dtuf pull-target fred/demo demo.txt Hello World! # Update on the master machine $ echo 'Update World!' > demo.txt $ echo 'Another World!' > demo2.txt $ dtuf push-target fred/demo demo.txt demo.txt $ dtuf push-target fred/demo demo2.txt demo2.txt $ dtuf push-metadata fred/demo # On the other machine $ dtuf pull-metadata fred/demo demo.txt demo2.txt $ dtuf pull-target fred/demo demo.txt Update World! $ dtuf pull-target fred/demo demo2.txt Another World! Module example This example uses the Docker Hub. Change the username, password and repository name to suit. Publish on the master machine: from dtuf import DTufMaster def auth(dtuf, response): dtuf.authenticate('fred', 'somepassword', response=response) dtuf = DTufMaster('registry-1.docker.io', 'fred/demo', auth=auth) with open('demo.txt', 'w') as f: f.write('Hello World!\n') dtuf.create_root_key() dtuf.create_metadata_keys() dtuf.create_metadata() dtuf.push_target('demo.txt', 'demo.txt') dtuf.push_metadata() # pub key is in dtuf_repos/fred/demo/master/keys/root_key.pub # distribute it out-of-band Retrieve on some other machine: from dtuf import DTufCopy def auth(dtuf, response): dtuf.authenticate('barney', 'otherpassword', response=response) dtuf = DTufCopy('registry-1.docker.io', 'fred/demo', auth=auth) with open('root_key.pub', 'r') as f: assert dtuf.pull_metadata(f.read()) == ['demo.txt'] s = '' for download in dtuf.pull_target('demo.txt'): for chunk in download: s += chunk assert s == 'Hello World!\n' Usage The module API is described here. Environment variables The dtuf command-line tool uses the following environment variables. Only DTUF_HOST is strictly required but you may need to set others depending on your set up. - DTUF_HOST - Host where Docker registry is running - DTUF_INSECURE - Set this to 1 if you want to connect to the registry using http rather than https (which is the default). - DTUF_USERNAME - Name of user to authenticate as. - DTUF_PASSWORD - User’s password. - DTUF_REPOSITORIES_ROOT - Directory under which TUF metadata should be stored. Note that the repository name is appended to this before storing the metadata. Defaults to dtuf_repos in the current working directory. - DTUF_AUTH_HOST - If set, always perform token authentication to this host, overriding the value returned by the registry. - DTUF_PROGRESS - If this is set to 1, a progress bar is displayed (on standard error) during dtuf push-target, dtuf push-metadata, dtuf pull-metadata and dtuf pull-target. If this is set to 0, a progress bar is not displayed. If this is set to any other value, a progress bar is only displayed if standard error is a terminal. - DTUF_BLOB_INFO - Set this to 1 if you want dtuf pull-target to prepend each blob with its digest and size (printed in plain text, separated by a space and followed by a newline). - DTUF_ROOT_KEY_PASSWORD - Password to use for encrypting the TUF root private key. Used by dtuf create-root-key, dtuf create-metadata and dtuf reset-keys. If unset then you’ll be prompted for the password. - DTUF_TARGETS_KEY_PASSWORD - Password to use for encrypting the TUF targets private key. Used by dtuf create-metadata-keys, dtuf create-metadata, dtuf reset-keys and dtuf push-metadata. If unset then you’ll be prompted for the password. - DTUF_SNAPSHOT_KEY_PASSWORD - Password to use for encrypting the TUF snapshot private key. Used by dtuf create-metadata-keys, dtuf create-metadata, dtuf reset-keys and dtuf push-metadata. If unset then you’ll be prompted for the password. - DTUF_TIMESTAMP_KEY_PASSWORD - Password to use for enrypting the TUF timestamp private key. Used by dtuf create-metadata-keys, dtuf create-metadata, dtuf reset-keys and dtuf push-metadata. If unset then you’ll be prompted for the password. - DTUF_ROOT_LIFETIME - Lifetime of the TUF root metadata. After this time expires, you’ll need to use dtuf reset-keys and dtuf push-metadata to re-sign the metadata. Defaults to 1 year. - DTUF_TARGETS_LIFETIME - Lifetime of the TUF targets metadata. After this time expires, you’ll need to use dtuf push-metadata to re-sign the metadata. Defaults to 3 months. - DTUF_SNAPSHOT_LIFETIME - Lifetime of the TUF snapshot metadata. After this time expires, you’ll need to use dtuf push-metadata to re-sign the metadata. Defaults to 1 week. - DTUF_TIMESTAMP_LIFETIME - Lifetime of the TUF timestamp metadata. After this time expires, you’ll need to use dtuf push-metadata to re-sign the metadata. Defaults to 1 day. - DTUF_LOG_FILE - Name of file to write log messages into. Defaults to dtuf.log in the current working directory. Set it to an empty string to disable logging to a file. - DTUF_FILE_LOG_LEVEL - Name of the Python logging level to use when deciding which messages to write to the log file. Defaults to WARNING. - DTUF_CONSOLE_LOG_LEVEL - Name of the Python logging level to use when deciding which messages to write to the console. Defaults to WARNING. Command line options You can use the following options with dtuf. In each case, supply the name of the repository on the registry you wish to work with as the second argument. Creating, updating and uploading data dtuf create-root-key <repo> Create TUF root keypair for the repository. The private key is written to $DTUF_REPOSITORIES_ROOT/<repo>/master/keys/root_key and can be moved offline once you’ve used dtuf create-metadata. You’ll need it again if you use dtuf reset-keys when the root metadata expires. The public key is written to $DTUF_REPOSITORIES_ROOT/<repo>/master/keys/root_key.pub and can be given to others for use when retrieving a copy of the repository metadata with dtuf pull-metadata. dtuf create-metadata-keys <repo> Create TUF metadata keypairs for the repository. The keys are written to the $DTUF_REPOSITORIES_ROOT/<repo>/master/keys directory. The public keys have a .pub extension. You can move the private keys offline once you’ve used dtuf push-metadata to publish the repository. You don’t need to give out the metadata public keys since they’re published on the repository. dtuf create-metadata <repo> Create and sign the TUF metadata for the repository. You only need to do this once for each repository, and the repository’s root and metadata private keys must be available. dtuf reset-keys <repo> Re-sign the TUF metadata for the repository. Call this if you’ve generated new root or metadata keys (because one of the keys has been compromised, for example) but you don’t want to delete the repository and start again. dtuf push-target <repo> <target> <file|@target>... Upload data to the repository and update the local TUF metadata The metadata isn’t uploaded until you use dtuf push-metadata. The data is given a name (known as the target) and can come from a list of files or existing target names. Existing target names should be prepended with @ in order to distinguish them from filenames. dtuf del-target <repo> <target>... Delete targets (data) from the repository and update the local TUF metadata. The metadata isn’t updated on the registry until you use dtuf push-metadata. Note that the registry doesn’t support deletes yet so expect an error. dtuf push-metadata <repo> Upload local TUF metadata to the repository. The TUF metadata consists of a list of targets (which were uploaded by dtuf push-target), a snapshot of the state of the metadata (list of hashes), a timestamp and a list of public keys. The metadata except for the list of public keys will be signed here. The list of public keys was signed (along with the rest of the metadata) when you used dtuf create-metadata (or dtuf reset-keys). dtuf list-master-targets <repo> Print the names of all the targets defined in the local TUF metadata. dtuf get-master-expirations <repo> Print the expiration dates of the TUF metadata. Downloading data dtuf pull-metadata <repo> [<root-pubkey-file>|-] Download TUF metadata from the repository. The metadata is checked for expiry and verified against the root public key for the repository. You only need to supply the root public key once, and you should obtain it from the person who uploaded the metadata. If you specify - then the key is read from standard input instead of a file. Target data is not downloaded - use dtuf pull-target for that. A list of targets which have been updated since you last downloaded them will be printed to standard output, one per line. dtuf pull-target <repo> <target>... Download targets (data) from the repository to standard output. Each target’s data consists of one of more separate blobs (depending on how many > were uploaded). All of them will be downloaded. dtuf blob-sizes <repo> <target>... Print the sizes of all the blobs which make up a list of targets. dtuf check-target <repo> <target> <file>... Check whether the hashes of a target’s blobs match the hashes of list of files. An error message will be displayed if not and the exit code won’t be 0. dtuf list-copy-targets <repo> Print the names of all the targets defined in the local copy of the TUF metadata. dtuf get-copy-expirations <repo> Print the expiration dates of the local copy of the TUF metadata. dtuf list-repos Print the names of all the repositories in the registry. Authentication tokens dtuf automatically obtains Docker registry authentication tokens using your DTUF_USERNAME and DTUF_PASSWORD environment variables as necessary. However, if you wish to override this then you can use the following command: dtuf auth <repo> <action>... Authenticate to the registry using DTUF_USERNAME and DTUF_PASSWORD, and print the resulting token. action can be pull, push or *. If you assign the token to the DTUF_TOKEN environment variable, for example: DTUF_TOKEN=$(dtuf auth fred/demo pull) then subsequent dtuf commands will use the token without needing DTUF_USERNAME and DTUF_PASSWORD to be set. Note however that the token expires after a few minutes, after which dtuf will exit with EACCES. Installation pip install python-dtuf Tests make test Lint make lint Code Coverage make coverage coverage.py results are available here. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/python-dtuf/
CC-MAIN-2021-49
refinedweb
1,923
59.9
Well I'm not sure how to program this, but anyway, I need to make 81 objects and assign their value (int value). So My Class is: Code : public class Cell { int value; int xloc; int yloc; int possiblevalues[]; } And I need basically 81 objects; I really don't want to say All my values are stored in a array, so is there any way to efficiently:All my values are stored in a array, so is there any way to efficiently:Code : Cell one = new Cell(); Cell two = new Cell(); one.value etc etc Make 81 objects, assign their values without repeating what I said above? I was thinking a for loop and an array of some sort, but I'm not quite sure how to approach this; I got this far: Not sure what I did but it seems to work? Code : public void assignvalues(){ GetDataMethods one = new GetDataMethods(); Cell[]cellarray; //Declares a Array of the type Cell cellarray = new Cell[81]; // Makes 80 slots, starting with 0 for(int x =0;x<=80;x++){ System.out.println(cellarray[x]); } }
http://www.javaprogrammingforums.com/%20object-oriented-programming/1181-arrays-objects-printingthethread.html
CC-MAIN-2014-15
refinedweb
182
54.7
I know how to convert a char to an int using atoi but how do I convert a single digit int to a char???? thanks. I know how to convert a char to an int using atoi but how do I convert a single digit int to a char???? thanks. try itoa() if your compiler comes with the non-standard conio.h file and your are willing to use non-standard material or use the standard compliant stringstream class and extract the char you want from the string which is derived. use itoa. It's not portable so, if you just want a sigle int to char, you could do something like: What we do is:What we do is:Code:char int_to_char(int input) { if(input<0 || input>9){ std::cerr<<"Wrong int taken as input"<<std::endl; return '0'; }else{ return reinterpret_cast<char>(input + 48); } } First, we check that the number is appropiate for conversion. If the number is not right, then we output error and return a zero (char), because we need to return something. We add 48 to the int. This gives us the ascii code for the number (48 ascii is 0 int). Then we let the compiler reinterpret the number as a char, that way, the correct number will be returned. This will only work for numbers between 0 and 9. No other numbers will be made into chars. >I know how to convert a char to an int using atoi but how do I convert a single digit int to a char???? Provided the int is really a single digit you can simply add '0' to it: If at any point you want multiple digit numbers to a string you will need to use sprintf or stringstreams. Don't use itoa because there are standard alternatives.If at any point you want multiple digit numbers to a string you will need to use sprintf or stringstreams. Don't use itoa because there are standard alternatives.Code:#include <iostream> using namespace std; int main() { int num = 5; char digit = char ( num + '0' ); cout<< num <<'\t'<< digit <<endl; } >return '0'; Not a good error code. Perhaps returning an obviously invalid char, or throwing an exception would be better suited. >return reinterpret_cast<char>(input + 48); reinterpret_cast is used for type conversions between two types that are unrelated. There's no need for it here. A static_cast would be better for removing the warning. And why use 48? That just makes the function less portable. Use '0' instead to avoid assuming ASCII. My best code is written with the delete key. I was going to point out what Prelude just did... instead of using char(input+48), which requires you to memorize that the character '0' is dec 48 in the ASCII chart, you could just do char(input+'0') that also is more readable... unless you put a comment saying that 48 is the ASCII dec equivalent for '0', people might not know what you're talking
https://cboard.cprogramming.com/cplusplus-programming/51228-help-convert-char-int.html
CC-MAIN-2017-13
refinedweb
501
71.34
Robots are supporting human behaviors, activities, and jobs in several notable ways. From robotic arms and hands to robots that provide surveillance and maintenance tasks, this technology is rapidly advancing to assist in ways that make certain tasks safer, quicker, or more accessible. Such robots are either automatic — meaning they can operate automatically without any human intervention — or controlled remotely, so that they operate manually from a remote location. Remote-controlled robots are typically wireless. Examples of remote-controlled robots include: - Snake robots – can enter into extremely narrow or small tunnels or pipelines, these robots are mainly used to find problems (such as a leak in a pipeline) but can also be used for search and rescue operations. - Fire-fighting robots – are used as a fire extinguisher that sprays water or CO2 on a fire. An operator controls this robot from a safe place and tries to extinguish the fire so that a firefighter does not have to put his or her life in danger. - Mine diffusing robots – are used to diffuse live bombs or mines in a battlefield. An operator can control its movements from a remote and safe distance, diffusing bomb (or mine) without risking a life. The Bluetooth-controlled robot As robots advance, so is the ability to communicate with them. Today’s remote-controlled robot are often wireless, which is typically done by using a smartphone via Bluetooth communication. This means anyone can control the robot’s movements with the touch of the fingers. One industry that has benefitted from robotic support is the agricultural sector. These robots can be wirelessly controlled or programmed to automate slow or repetitive tasks for farmers, allowing them instead to focus more on improving overall production yields. Some of the most common robots in agriculture are used for seeding, spraying, harvesting, picking, and weed control. The project For our purposes, let’s build a robot that can help with basic plant growth. Four conditions must be ideal for a plant to grow healthy. 1. Temperature 2. Humidity 3. Soil moisture 4. Ambient light (light intensity). Three sensors will be used in our robot to monitor those conditions. Additionally, for this project, the operator will be: - Capable of maneuvering the robot in a radius of 10-30 meters while taking measurements and selecting the ideal best place for plantation. - Able to attain the measurements of temperate, humidity, soil moisture, and ambient light after plantation. - Able to take corrective actions to alter either of those four conditions. Although this robot will be built for plant care, it’s important to note that the concept could easily be used for different applications by altering the sensors. For example, if we equipped the robot with MQ2, MQ3, or a similar gas sensor, the robot could be used to detect gas leakage of any GAS like CO2, CO, LPG, etc. But, for now, let’s concentrate on developing a robot for plant care. The system block diagram The major building blocks of the system are three sensors for: 1. Soil moisture: used to measure moisture content in soil and provides the analog voltage output as per the measured moisture level. Its output voltage decreases as the moisture content increases. 2. DHT11-temperature and humidity sensor: measures atmospheric temperature and humidity. It gives direct digital values for temperature in Celcius and for humidity in % RH. It is a smart sensor. 3. LDR, Arduino UNO development board, Bluetooth module, two DC motors, and motor-driver chip. The Bluetooth module is used for command purposes (to move the robot forward, reverse, left, and right) from a smartphone. It provides this data, serially, to the Arduino UNO microcontroller. The Arduino UNO board performs the following tasks: - Reads analog output voltage from the soil moisture sensor and converts it to digital. Then, it calibrates it between a 0 – 100% moisture level. - Reads the temperature and humidity values from the DHT11 sensor. - Reads the analog voltage output from the LDR and calibrates the light intensity between 0-100%. - Gets different commands from the Bluetooth module and rotates the two DC motors to move the robot forward, reverse, left, or right. - Sends (transmits) these readings of all three of the sensors to a smartphone via the Bluetooth module. The motor driver provides sufficient voltage and current to both motors to rotate them. It also amplifies the output of the Arduino board and drives the motors. The DC motors drive the left and right wheels of the robot and move the robot forward, reverse, left and right. Now, let us see how the circuit is built from this block diagram. The circuit diagram The circuit description - The HC-05 module operates on the 5V that’s provided by the Arduino board. It communicates with the Arduino board with the USART pins Tx-Rx. This means its Tx pin is connected with the Rx pin of the Arduino board and vice versa. - The DHT11 sensor also provides the 5V supply from the Arduino board. Its digital output is connected to digital pin D7 of the Arduino board. - The analog output of the soil-moisture sensor is connected to the analog input pin A1 from the Arduino board. It requires 5V biasing voltage from the Arduino board. - The LDR is configured in the pull-down mode with the 10 KΩ pull-down resistor. Its analog output is given to the analog input pin A0. - the digital pins D8, D9, D10, and D11 drive the two DC motors using the L293D chip. These pins are connected to the inputs of L293D and the two motors are connected with the output of the chip. - The motor supply pin Vss of L293D (pin num 8) receives 12V from the battery. - The Arduino board also receives 12V input to its Vin pin from the battery. It gets 12V input and provides 5V of output to all of the other components. Circuit working and operation The circuit operation starts when the 12V battery is connected with the Arduino UNO board and the L293D chip. - Initially, both motors are stopped and the robot is also at rest. - To move the robot in any direction, we have to give commands from the smartphone through the Bluetooth-based Android application. - To do this, we have to open (start) the Bluetooth Android application in the smartphone and then search for the HC05 Bluetooth module. Once the smartphone detects the HC05 module, it’s required to pair the module with the application so that it can send commands from the smartphone via the Bluetooth to the HC05 module (note: it’s required to enter the passkey “1234” the first time to pair with the HC05 module). - Now, we can send commands to the robot to move forward, reverse, left, or right via the smartphone through the application. These commands are used to move the robot (all these commands are set in the Android application): - When any of the above commands are sent (by sending direct character or pressing the button in the application), it’s received by the HC05 module. The module further gives this command to Arduino, serially, through the Tx-Rx pins. - Arduino receives this command and compares it with the set commands. If it finds a match, it will rotate the left and right motors accordingly to move the robot in any of the appropriate four directions. - Once the robot starts its motion, it will continuously move until we send the command ‘S’ to stop. - When the robot stops, it will start reading the sensor values from the DHT11, soil moisture, and LDR. It will read the analog voltage output from the soil moisture sensor and the LDR, and convert it in a range of 0 – 100%. It will also read the digital values of the temperature and humidity from the DHT11 sensor. - Then it transmits all four values of soil moisture, light intensity, temperature, and humidity to the smartphone via the Bluetooth module, it will continuously transmit these four values after every two seconds until it is stopped. - When given the command to start moving again, it will stop transmitting the values. - The operator can take this robot to the required place and measure these four conditions. He or she will get the readings on his or her smartphone while moving the robot with the touch of a finger. - It should be easy to program the ambient conditions of temperature, humidity, soil moisture, and light intensity in the nearby area The software program The program is written in C / C++ language using the Arduino IDE software tool. It’s also compiled and downloaded in the internal memory (FLASH) of the ATMega328 microcontroller using this same software. Here’s the program code: #include <Servo.h> #include <LiquidCrystal.h> #include “DHT.h” #define LDR_pin A0 #define soil_moisture_sensor_pin A1 #define DHTPIN 2 #define DHTTYPE DHT11 Servo soil_moisture_servo; int light_intensity, soil_moisture; LiquidCrystal lcd(12, 13, 8, 9, 10, 11); DHT dht(DHTPIN, DHTTYPE); int motor1Pin1 = 4; // pin 2 on L293D IC int motor1Pin2 = 5; // pin 7 on L293D IC int motor2Pin1 = 6; // pin 10 on L293D IC int motor2Pin2 = 7; // pin 15 on L293D IC int moisture_sensor_servo_pin = 3; int state; int stopflag = 0; void setup() { // sets the pins as outputs: pinMode(motor1Pin1, OUTPUT); pinMode(motor1Pin2, OUTPUT); pinMode(motor2Pin1, OUTPUT); pinMode(motor2Pin2, OUTPUT); soil_moisture_servo.attach(moisture_sensor_servo_pin); soil_moisture_servo.write(0); Serial.begin(9600); lcd.begin(16, 2); lcd.clear(); lcd.setCursor(2, 0); dht.begin(); lcd.print(“Data logger”); lcd.setCursor(6, 1); lcd.print(“Robot”); delay(5000); lcd.clear(); lcd.print(” t*C H % L % M %”); } void loop() { int h = dht.readHumidity(); int t = dht.readTemperature(); light_intensity = analogRead(LDR_pin); light_intensity = map(light_intensity, 750, 50, 5, 100); lcd.setCursor(9, 1); lcd.print(light_intensity); lcd.setCursor(1, 1); lcd.print(t); lcd.setCursor(5, 1); lcd.print(h); if (stopflag == 1) { soil_moisture = analogRead(soil_moisture_sensor_pin); soil_moisture = map(soil_moisture, 1015, 100, 0, 100); lcd.setCursor(13, 1); lcd.print(soil_moisture); Serial.print(“Humidity:”); Serial.println(h); Serial.print(“temp:”); Serial.println(t); Serial.print(“light:”); Serial.println(light_intensity); Serial.print(“moisture:”); Serial.println(soil_moisture); delay(1000); } if (Serial.available() > 0) { state = Serial.read(); // if the state is ‘1’ the DC motor will go forward if (state == ‘1’) { digitalWrite(motor1Pin1, HIGH); digitalWrite(motor1Pin2, LOW); digitalWrite(motor2Pin1, HIGH); digitalWrite(motor2Pin2, LOW); stopflag = 0; soil_moisture_servo.write(0); } else if (state == ‘2’) { digitalWrite(motor1Pin1, HIGH); digitalWrite(motor1Pin2, LOW); digitalWrite(motor2Pin1, LOW); digitalWrite(motor2Pin2, LOW); stopflag = 0; soil_moisture_servo.write(0); } // if the state is ‘3’ the motor will Stop else if (state == ‘3’) { digitalWrite(motor1Pin1, LOW); digitalWrite(motor1Pin2, LOW); digitalWrite(motor2Pin1, LOW); digitalWrite(motor2Pin2, LOW); stopflag = 1; delay(500); soil_moisture_servo.write(90); delay(500); } // if the state is ‘4’ the motor will turn right else if (state == ‘4’) { digitalWrite(motor1Pin1, LOW); digitalWrite(motor1Pin2, LOW); digitalWrite(motor2Pin1, HIGH); digitalWrite(motor2Pin2, LOW); stopflag = 0; soil_moisture_servo.write(0); } } } Filed Under: Electronic Projects, Featured Contributions, Microcontroller Projects
https://www.engineersgarage.com/a-bluetooth-controlled-datalogger-robot/
CC-MAIN-2021-49
refinedweb
1,809
54.93
Visual Studio 11: C++ IntelliSense Code Snippets Introduction One of the new features of Visual Studio 11 is inserting C++ code snippets, i.e. easily adding already made C++ code at a specified insertion point or around a selected block. This is useful and may save programming time when it's necessary to add commonly-used code, for example preprocessor directives (if, else, for, while, switch, try/catch blocks, and so on). Visual Studio 11 comes with a set of C++ code snippets but we can create and use our own, as well. An Example of Code Often Manually Added Let's say we have created a Win32 DLL project, which has to export some classes and functions. One common way to include the same header file, containing exported class definitions or exported function declarations, in the DLL and in client modules is using preprocessor definitions as in the following example: #ifdef MYLIB_EXPORTS #define MYLIB_EXP __declspec(dllexport) #else #define MYLIB_EXP __declspec(dllimport) #endif class MYLIB_EXP CFoo {/*...*/}; MYLIB_EXP void foo(); If we have to implement many DLL projects, each containing many exported symbols, there is a lot of code that has to be manually added. Of course, copy-pasting then modifying by hand is a solution but the Visual C++ code snippets feature offers a better and faster solution. Using Code Snippets Before going further, we have to notice there are two methods to add code snippets: - at an insertion point (expansion snippets) - around a selected block of code (surround-with snippets) Next, we demonstrate step-by-step how to use each of the two methods to automate the inserting of the above sample of code. Using an Expansion Snippet Click in the source editor at the point where you want to insert the code snippet. Figure 1: Set Insertion Point Keeping Ctrl key pressed, push 'K' then 'X'. Alternatively, we can select the "Edit/IntelliSense/Insert Snippet..." menu item. A list of available code snippets appears. Select the desired one ("#ifdef") then hit the Enter key. Figure 2: Insert Snippet The inserted code snippet contains a highlighted default name ("DEBUG"), which can be replaced with our desired one. Figure 3: Default Code Snippet Replace "DEBUG" with "MYLIB_EXPORTS" and hit the Enter key. Figure 4: Complete Code Complete by hand the code betwen "#ifdef" and "#endif" Note: Code snippets may have "shortcuts". In the case of our example, the shortcut is "#ifdef". That means, to insert this snippet we can simply type "#ifdef" in the editor, and then hit the Tab key. Using a Surround-with Snippet In the source editor, select code to surround. Figure 5: Select Code to Surround Keeping the Ctrl key pressed, push 'K' then 'S'. Alternatively, we can select the "Edit/IntelliSense/Surround With..." menu item. A list of available code snippets appears. Select the desired one ("#ifdef") then hit the Enter key. Figure 6: Surround With The inserted code snippet contains a highlighted default name ("DEBUG"), which can be replaced with our desired one. Figure 7: Default Surround Code Replace "DEBUG" with "MYLIB_EXPORTS" and hit the Enter key. Figure 8: Complete Code Quite nice, until now! However, we still have some manually added code. Next page, we create our own code snippet to make all in one shot. There are no comments yet. Be the first to comment!
https://www.codeguru.com/cpp/v-s/tips/article.php/c20151/Visual-Studio-11-C-IntelliSense-Code-Snippets.htm
CC-MAIN-2018-43
refinedweb
553
62.38
Spark¶ Why use mrjob with Spark?¶ mrjob augments Spark‘s native Python support with the following features familiar to users of mrjob: - automatically upload input and other support files to HDFS, GCS, or S3 (see upload_files, upload_archives, and py_files) - run make and other command before running Spark tasks (see setup). - passthrough and file arguments (see Defining command line options) - automatically parse logs to explain errors and other Spark job failures - easily pass through environment variables (see cmdenv) - support for libjars - automatic matching of Python version (see python_bin) - automatically set up Spark on EMR (see bootstrap_spark) - automatically making the mrjob library available to your job (see bootstrap_mrjob) mrjob spark-submit¶ If you already have a Spark script written, the easiest way to access mrjob’s features is to run your job with mrjob spark-submit, just like you would normally run it with spark-submit. This can, for instance, make running a Spark job on EMR as easy as running it locally, or allow you to access features (e.g. setup) not natively supported by Spark. For more details, see mrjob spark-submit. Writing your first Spark MRJob¶ Another way to integrate mrjob with Spark is to add a spark() method to your MRJob class, and put your Spark code inside it. This will allow you to access features only availble to MRJobs (e.g. FILES). Here’s how you’d implement a word frequency count job in Spark: import re from operator import add from mrjob.job import MRJob WORD_RE = re.compile(r"[\w']+") class MRSparkWordcount(MRJob): def spark(self, input_path, output_path): # Spark may not be available where script is launched from pyspark import SparkContext sc = SparkContext(appName='mrjob Spark wordcount script') lines = sc.textFile(input_path) counts = ( lines.flatMap(self.get_words) .map(lambda word: (word, 1)) .reduceByKey(add)) counts.saveAsTextFile(output_path) sc.stop() def get_words(self, line): return WORD_RE.findall(line) if __name__ == '__main__': MRSparkWordcount.run() Since Spark already supports Python, mrjob takes care of setting up your cluster, passes in input and output paths, and otherwise gets out of the way. If you pass in multiple input paths, input_path will be these paths joined by a comma ( SparkContext.textFile() will accept this). Note that pyspark is imported inside the spark() method. This allows your job to run whether pyspark is installed locally or not. The spark() method can be used to execute arbitrary code, so there’s nothing stopping you from using SparkSession instead of SparkContext in Spark 2, or writing a streaming-mode job rather than a batch one. Warning Prior to v0.6.8, to pass job methods into Spark (e.g. rdd.flatMap(self.get_words)), you first had to call self.sandbox(); otherwise Spark would error because self was not serializable. Running on your Spark cluster¶ By default, mrjob runs your job on the inline runner (see below). If you want to run your job on your own Spark cluster, run it with -r spark: Use --spark-master (see spark_master) to control where your job runs. You can pass in spark options with -D (see jobconf) and set deploy mode (client or cluster) with --spark-deploy-mode. If you need to pass other arguments to spark-submit, use spark_args. The Spark runner can also run “classic” MRJobs (i.e. those made by defining mapper() etc. or with MRSteps) directly on Spark, allowing you to move off Hadoop without rewriting your jobs. See below for details. Warning If you don’t set spark_master, your job will run on Spark’s default local[*] master, which can’t handle setup scripts or --files because it doesn’t give tasks their own working directory. Using remote filesystems other than HDFS¶ By default, if you use a remote Spark master (i.e. not local or local-cluster), Spark will assume you want to use HDFS for your job’s temp space, and that you will want to access it through hadoop fs. Some Spark installations don’t use HDFS at all. Fortunately, the Spark runner also supports S3 and GCS. Use spark_tmp_dir to specify a remote temp directory not on HDFS (e.g. --spark-tmp-dir s3a://bucket/path). For more information on accessing S3 or GCS, see Configuring AWS credentials (S3) or Configuring Google Cloud credentials (GCS). Other ways to run on Spark¶ Inline runner¶ Running your Spark job with -r inline (the default) will launch it directly through the pyspark library, effectively running it on the local[*] master. This is convenient for debugging because exceptions will bubble up directly to your Python process. The inline runner also builds a simulated working directory for your job, making it possible to test scripts that rely on certain files being in the working directory (it doesn’t run setup scripts). Note If you don’t have a local Spark installation, the pyspark library on PyPI is a pretty quick way to get one ( pip install pyspark). Local runner¶ Running your Spark job with -r local will launch it through spark-submit on a local-cluster master. local-cluster is designed to simulate a real Spark cluster, so setup will work as expected. By default, the local runner launches Spark jobs with as many executors as your system has CPUs. Use --num-cores (see num_cores to change this). By default, the local runner gives each executor 1 GB of memory. If you need more, you can specify it through jobconf, e.g. -D spark.core.memory=4g. EMR runner¶ Running your Spark job with -r emr will launch it in Amazon Elastic MapReduce (EMR), with the same seamless integration and features mrjob provides for Hadoop jobs on EMR. The EMR runner will always run your job on the yarn Spark master in cluster deploy mode. Hadoop runner¶ Running your Spark job with -r hadoop will launch it on your own Hadoop cluster. This is not significantly different than the Spark runner. The main advantage of the Hadoop runner is that is has more knowledge about how to find logs and can be better at finding the relevant error if your job fails. Unlike the Spark runner, the Hadoop runner’s default spark master is yarn. Note mrjob does not yet support Spark on Google Cloud Dataproc. Passing in libraries¶ Use --py-files to pass in .zip or .egg files full of Python code: python your_mr_spark_job -r hadoop --py-files lib1.zip,lib2.egg Or set py_files in mrjob.conf. Command-line options¶ Command-line options (passthrough options, etc) work exactly like they do with regular streaming jobs (even add_file_arg() on the local[*] Spark master. See Defining command line options. Uploading files to the working directory¶ upload_files, FILES, and files uploaded via setup scripts all should work as expected (except on local masters because there is no working directory). Note that you can give files a different name in the working directory (e.g. --files foo#bar) on all Spark masters, even though Spark treats that as a YARN-specific feature. Archives and directories¶ Spark treats --archives as a YARN-specific feature. This means that upload_archives, ARCHIVES, DIRS, etc. will be ignored on non- yarn Spark masters. Future versions of mrjob may simulate archives on non- yarn masters using a setup script. Multi-step jobs¶ There generally isn’t a need to define multiple Spark steps (Spark lets you map/reduce as many times as you want). However, it may sometimes be useful to pre- or post-process Spark data using a streaming or jar step. This is accomplished by overriding your job’s steps() method and using the SparkStep class: def steps(): return [ MRStep(mapper=self.preprocessing_mapper), SparkStep(spark=self.spark), ] External Spark scripts¶ mrjob can also be used to launch external (non-mrjob) Spark scripts using the SparkScriptStep class, which specifies the path (or URI) of the script and its arguments. As with JarSteps, you can interpolate input and output paths using INPUT and OUTPUT constants. For example, you could set your job’s steps() method up like this: def steps(): return [ SparkScriptStep( script=os.path.join( os.path.dirname(__file__), 'my_spark_script.py'), args=[INPUT, '-o', OUTPUT, '--other-switch'], ), ] Custom input and output formats¶ mrjob allows you to use input and output formats from custom JARs with Spark, just like you can with streaming jobs. First download your JAR to the same directory as your job, and add it to your job class with the LIBJARS attribute: LIBJARS = ['nicknack-1.0.0.jar'] Then use Spark’s own capabilities to reference your input or output format, keeping in mind the data types they expect. For example, nicknack’s MultipleValueOutputFormat expects <Text,Text>, so if we wanted to integrate it with our wordcount example, we’d have to convert the count to a string: def spark(self, input_path, output_path): from pyspark import SparkContext sc = SparkContext(appName='mrjob Spark wordcount script') lines = sc.textFile(input_path) counts = ( lines.flatMap(self.get_words .map(lambda word: (word, 1)) .reduceByKey(add)) # MultipleValueOutputFormat expects Text, Text # w_c is (word, count) counts = counts.map(lambda w_c: (w_c[0], str(w_c[1]))) counts.saveAsHadoopFile(output_path, 'nicknack.MultipleValueOutputFormat') sc.stop() Running “classic” MRJobs on Spark¶ The Spark runner provides near-total support for running “classic” MRJobs (the sort described in Writing your first job and Writing your second job) directly on any Spark installation, even though these jobs were originally designed to run on Hadoop Streaming. Support includes: Jobs will often run more quickly on Spark than Hadoop Streaming, so it’s worth trying even if you don’t plan to move off Hadoop in the forseeable future. Multiple steps are run as a single job¶ If you have a job with multiple consecutive MRSteps, the Spark runner will run them all as a single Spark job. This is usually what you want (more efficient), but it can make debugging slightly more challenging (step failure exceptions give a range of steps, no way to access intermediate data). To force the Spark runner to run steps separately, you can initialize each MRStep with a different jobconf dictionary. No support for subprocesses¶ Pre-filters (e.g. mapper_pre_filter()) and command steps (e.g. reducer_cmd()) are not supported because they require launching subprocesses. It wouldn’t be impossible to emulate this inside Spark, but then we’d essentially be turning Spark into Hadoop Streaming. (If you have a use case for this seemingly implausible feature, let us know through GitHub.) Spark loves combiners¶ Hadoop’s “reduce” paradigm is a lot more heavyweight than Spark’s; whereas a Spark reducer just wants to know how to combine two values into one, a Hadoop reducer expects to be able to see all the values for a given key, and to emit zero or more key-value pairs. In fact, Spark reducers are a lot more like Hadoop combiners. The Spark runner knows how to translate something like: def combiner(self, key, values): yield key, sum(values) into Spark’s reduce paradigm–basically it’ll pass your combiner two values at a time, and hope it emits one. If your combiner does not behave like a Spark reducer function (emitting multiple or zero values), the Spark runner handles that gracefully as well. Counter emulation is almost perfect¶ Counters (see increment_counter()) are a feature specific to Hadoop. mrjob emulates them on Spark anyway. If you have a multi-step job, mrjob will dutifully print out counters for each step and make them available through counters(). The only drawback is that while Hadoop has the ability to “take back” counters produced by a failed task, there isn’t a clean way to do this with Spark accumulators. Therefore, the counters produced by the Spark runner’s Hadoop emulation may be overestimates. Spark does not stream data¶ While Hadoop streaming (as its name implies) passes a stream of data to your job, Spark instead operates on partitions, which are loaded into memory. A reducer like this can’t run out of memory on Hadoop streaming, no matter how many values there are for key: def reducer(self, key, values): yield key, sum(values) However, on Spark, simply storing the partition that contains these values can cause Spark to run out of memory. If this happens, you can let Spark use more memory ( -D spark.executor.memory=10g) or add a combiner to your job. Compression emulation¶ It’s fairly common for people to request compressed output from Hadoop via configuration properties, for example: python mr_your_job.py -D mapreduce.output.fileoutputformat.compress=true -D\ mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec ... This works with -r spark too; the Spark runner knows how to recognize these properties and pass the codec specified to Spark when it writes output. Spark won’t split .gz files either¶ A common trick on Hadoop to ensure that segments of your data don’t get split between mappers is to gzip each segment (since .gz is not a seekable compression format). This works on Spark as well. Controlling number of output files¶ By default, Spark will write one output file per partition. This may give more output files than you expect, since Hadoop and Spark are tuned differently. The Spark runner knows how to emulate the Hadoop configuration property that sets number of reducers on Hadoop (e.g. -D mapreduce.job.reduces=100), which will control the number of output files (assuming your last step has a reducer). However, this is a somewhat heavyweight solution; once Spark runs a step’s reducer, mrjob has to forbid Spark from re-partitioning until the end of the step. A lighter weight solution is --max-output-files, allows you to limit the number of output files by running coalesce() just before writing output. Running your job with --max-output-files=100 would ensure it produces no more than 100 output files (but it could output less). Running classic MRJobs on Spark on EMR¶ It’s often faster to run classic MRJobs on Spark than Hadoop Streaming. It’s also convenient to be able to run on EMR rather than setting up your own Spark cluster (or SSH’ing in). Can you do both? Yes! Run the job with the Spark runner, but tell it to use mrjob spark-submit to launch Spark jobs on EMR. It looks something like this: python mr_your_job.py -r spark \ --spark-submit-bin 'mrjob spark-submit -r emr' \ --spark-master yarn --spark-tmp-dir s3://your-bucket/tmp/ input1 input2 Note that because the Spark runner itself doesn’t know the job is going to run on EMR, you have to give it a couple of hints so that it knows it’s running on YARN (--spark-master) and that it needs to use S3 as its temp space (--spark-tmp-dir).
https://mrjob.readthedocs.io/en/stable/guides/spark.html
CC-MAIN-2022-05
refinedweb
2,449
61.97
Hi All, I was writing some code to animate an object using the dLoc values. But they do not seem to have any effect. The goal is to use the dLoc set of parameters for algorithmic animation that is generated on the fly in a frame change event. Consider this code: It should create a Text3d object in memory, set some parameters, create a new object out of it and then assign dLocZ value. import Blender from Blender import * scn = Scene.GetCurrent() # Create a new text object in memory. txt = Blender.Text3d.New("tx_TEXT3D") txt.setText("A") txt.setExtrudeDepth(0.1) txt.setExtrudeBevelDepth(0.1) txt.setAlignment(Text3d.MIDDLE) # Fetch that new text object into another variable. tempTXT = Blender.Text3d.Get("tx_TEXT3D") tempTXT.setExtrudeDepth(1.0) # Create a new object. tobj = scn.objects.new(tempTXT,"myLetter") tobj.link(tempTXT) # Set The dLocZ value. tobj.dLocZ = 20.0 tobj.makeDisplayList() # Refresh the scene. Window.RedrawAll() scn.update(0) # Display value to see if I am going crazy? myDloc = tobj.dloc print myDloc print tobj.dLocZ Shouldn’t the simple assignment of a value to dLocZ, followed by an update make the object appear in the offset location? What is the secret to making the object refresh after setting the dLoc value? When I run the code, my object is always centered at world origin. Thanks for an help!
https://blenderartists.org/t/dloc-via-code-has-no-effect-in-2-49/483877
CC-MAIN-2021-17
refinedweb
227
50.94
Proposed features/sport namespace Introduction A single location is often used for more than one sport. For example, a sports centre may have swimming, weight lifting, tennis, gymnastics in one location. Occasionally, it would be useful to mark a location has not having a particular sport available. This proposal is intended to coexist with the sport= tagging but allowing additional detail. Proposed by: --TimSC 18:18, 26 February 2008 (UTC) Proposal A node or a way may be tagged as follows: sport:athletics=yes and this will be interpreted as equivalent to sport=athletics In addition, multiple sports may be specified: sport:athletics=yes sport:cricket=yes sport:golf=yes This would indicate that the location had some connection to all these sports. Typically all the sports are available at the location. Alternatively, this could indicate a shop sold goods for all these sports. It is also useful to know if a sport is unavailable at a particular location. For example, some scuba dive sites are unstable for snorkeling. For another example: sport:cricket_nets=yes sport:cricket=no Indicates cricket nets are present but there is no cricket pitch. Node/Way/Area Proposed Rendering Display of sport:X=yes would be equivalent to the rendering of sport=X. If multiple sports are defined, rendering may: - display only one of the sports. The order of precedences is discretionary but alphabetic is easiest. (e.g. athletics would be displayed in preference to zourkhaneh.) - display a multi sport icon - specialist maps would have their own order of selection. e.g. maps for cyclists would show sport:cycling=yes icons before any other. Discussion I like the proposal. It's much better than using separated values. Althought I'd preffer if multiple tags with the same key were allowed. Proposed way is more specific (for example you can't say sport:cricket=no with multiple sport=* tags) but it's not as straightforward as multiple tags with the same key. I'm not sure whether this should be extended to all tags. I think it doesn't make much sense for highways tags, but it could be useful for amenity and shop. --Jttt 07:52, 27 May 2008 (UTC)
https://wiki.openstreetmap.org/wiki/Proposed_features/sport_namespace
CC-MAIN-2018-13
refinedweb
365
56.86
Odoo Help This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers. TypeError: a float is required I have 2 identical ERP servers running, one is production and the other one is for development purpose. I have an issue about customer invoice (mako file), when i try to print it out i got this error message : <% hargajual = math.floor(amount_base)%>TypeError: a float is required But at the production server, there is no error at all when printing. The mako coding file are 100% identical because i copied it from production to development server. I think this is not about mako file (both again identical), but must be something else. Any idea what could cause this error? import random, math def fun(): hargajual = math.floor(amount_base) return hargajual please specify header file named as import math then create a fun as sample example def fun(self): x=float(amount_base) y=math.floor(x) return y or return math.floor(float(amount_base)) this code is working check it!
https://www.odoo.com/forum/help-1/question/typeerror-a-float-is-required-103247
CC-MAIN-2016-50
refinedweb
187
65.42
@PaulStoffgren the master...or maybe @mortonkopf Help please, I am trying to fix an issue with my LED output to Glediator where every other strip is running in the opposite direction. I currently have it configured as HS-TR in Glediator and Single Pixels output, as I understand this is Horizontal Snake - Top Right, which is where my first LED strip comes in on it's own input wire from OctoWS2811. The first strip is connected through the Octo Ethernet port to Pin 2 on Teensy as I understand followed by pins 14, 7, 8, 6, 20, 21, and 5 for the remaining 7 strips. This first "channel"(as I am referring to it here), according to the documentation is the Orange/and Orange-White pair on the CAT6 cable. Each additional strip has it's own Ethernet cable (pair) back to the Octo board. The *.ino file specifies each strip has 150 LEDs and a total count of 1200 (150 x 8 strips), see code below. Here is where a lot more assumption begins, Octo seems to output in order on all 8 channels, for example "channel" two (pin 14 on Teensy) seems to output the correct signal for the matrix after channel 1 (pin 2), but it is in the opposite direction as if I had a zig zag LED pattern configured, which I do not. Each strip of 150 LEDs is it's own run into the right side of my matrix with no zig zag. As I understand, each line of resolution in the array is a "channel" connected to an output pin on Teeensy via Octo. All of these assumptions are based on trial and error observation over the past week. Is there some code I can include in the Glediator ino file that tells Octo how to send data (parallel, instead of in series...I think), and from there I will need to know how to continue to send more data to additional Octo/Teensy boards in teh matrix, but one step at a time. Code:// Glediator example with OctoWS2811, by mortonkopf// // // You can also use Jinx to record Glediator format data to a SD card. // To play the data from your SD card, use this modified program: // #include <OctoWS2811.h> /* Running 2 OctoWS2811 boards with sync wired together * Can't figure out how to get the sync to work at all * each Octo/Teensy has 1200 LEDs 8 stips x 150 each * Both combined makes 2400 LEDs total and 16 strips * Each strip is it's own data run with no zig zag or snake/ladder * Currently the strips are opposing one another on the next row rather than * all strips moving in teh same orientation left to right * Scrolling text is split half left/right and half right/left */ const int ledsPerStrip = 150; const int NUM_LEDS =1200;(); } void loop() { byte r,g,b; int i;) { return (((unsigned int)b << 16) | ((unsigned int)r << 8) | (unsigned int)g); }
https://forum.pjrc.com/threads/58295-Maximum-delay-for-Teensy-4-0?s=4c74fb5284621ee52bc428620d0c3d86&goto=nextoldest
CC-MAIN-2020-10
refinedweb
495
58.35
Hey in Windows PowerShell 2.0, but the process looks complicated. Can you explain it to me? Is there an easier way? -- CS Hello CS, Microsoft Scripting Guy Ed Wilson here. Right now I have my Zune HD cranked up all the way while listening to an old Rolling Stones recording of a song they sang with Buddy Guy. The cool thing about the Rolling Stones is they have recorded so much music. I have been listening to the Rolling Stones now for the last eight hours while I have been answering [email protected] e-mail and talking to people on my personal Facebook page and on the Scripting Guys’ Twitter page. On Facebook, I have been talking to Brandon, a Microsoft MVP, about doing a couple of guest articles related to his bsonposh module. On Twitter, I have been talking to many people about this week’s series of Hey, Scripting Guy! articles. Portions of today’s Hey, Scripting Guy! post appeared previously in the 2008 Microsoft Press book, Windows PowerShell Scripting Guide. CS, the script from the post, “How Can I Change a Registry Value Named (Default)?”, uses the WMI class that is named StdRegProv. Things are confusing from the beginning. The name of the WMI class is StdRegProv? That looks like the name of a WMI provider, not the name of a WMI class. Most of the WMI classes we use begin with Win32, and the names are more easily understood. The second thing you will notice about the SetDefaultRegistryStringValue.vbs script is that the WMI class is located in the root\default WMI namespace. Interestingly enough, root\default is not the default WMI namespace. Root\cimv2 is the default WMI namespace on every Windows operating system made since Windows NT 4.0. Beginning with Windows Vista, the WMI team also placed a copy of the StdRegProv WMI class in root\cimv2. If all of your computers are Windows Vista and later, you can ignore the version of the StdRegProv in root\default and only use the one from the root\cimv2 WMI namespace. The most confusing thing about the StdRegProv WMI class is that each data type in the registry requires a different method. These methods are documented on MSDN. In this script the SetStringValue method is used to assign a new value to a key. For information about using the StdRegProv WMI class from within Windows PowerShell, refer to How Can I Change My Internet Explorer Home Page. The complete SetDefaultRegistryStringValue.vbs script is shown here. SetDefaultRegistryStringValue.vbs Const HKEY_CURRENT_USER = &H80000001strComputer = "."Set objRegistry = GetObject("winmgmts:\\" & strComputer & "\root\default:StdRegProv")strKeyPath = "ScriptCenter"strValueName = ""strValue = ""objRegistry.SetStringValue HKEY_CURRENT_USER, strKeyPath, strValueName, strValue CS, an easier way to work with the registry by using Windows PowerShell is to use the registry provider. The registry provider provides a consistent and easy way to work with the registry from within Windows PowerShell. Using the registry provider, you can search the registry, create new registry keys, delete existing registry keys, and modify values and access control lists (ACLs) from within Windows PowerShell. Windows PowerShell creates two PSDrives by default. To identify the PSDrives supplied by the registry provider, you can use the Get-PSDrive cmdlet, and pipeline the resulting objects into the Where-Object cmdlet and filter on the provider property while supplying a value that is like the word “registry.” This command is seen here: get-psDrive | where {$_.Provider -like "*registry*"} The resulting list of PSDrives is seen here: Name Provider Root CurrentLocation---- -------- ---- ---------------HKCU Registry HKEY_CURRENT_USERHKLM Registry HKEY_LOCAL_MACHINE Now that you know which registry drives exist (you can add additional registry drives by using the Add-PSDrive cmdlet), you can use the Set-Item cmdlet to assign a value to the default registry key. Suppose you have the registry key, HKEY_CURRENT_USER\Software\ScriptingGuys, that has an empty default value as seen here: To assign a value, you can write a script similar to SetDefaultRegistryStringValue.vbs in VBScript, or you can emulate that methodology in Windows PowerShell. But the easier way is to use the registry provider for Windows PowerShell. Using the HKCU drive in Windows PowerShell, you can use the same cmdlets you use to work with the filesystem. The Set-Item cmdlet writes information. To provide a value for the ScriptingGuys registry key, specify the path, the value, and the type of value you are assigning. To write the string “MyNewValue” to the registry key, use the following command: Set-Item -Path HKCU:\Software\ScriptingGuys -Value "MyNewValue" -Type string The revised registry key now has a value for (Default): CS you said you need to make the change on multiple computers. Using Windows PowerShell 1.0 I normally fell back to the StdRegProv WMI class to do that. But in Windows PowerShell 2.0 you can use the Invoke-Command cmdlet to execute a standard Windows PowerShell command on a remote computer. Because we need to assign a registry value to a key that is in the HKEY_CURRENT_USER\Software\ScriptingGuys hive things could get a bit confusing. This is because I am logged onto my workstation as Nwtraders\Ed. I want to target a computer that has Nwtraders\Administrator logged on. My current user hive on the remote machine is different than the current user that is logged on. To work around this, all we need to do is supply alternate credentials when we invoke the command. The command to do this is seen here (this command is one line) PS C:\> Invoke-Command -computer Win7-Pc -script { Set-Item -Path HKCU:\Software\ScriptingGuys -Value "MyNewValue" –Type string } -Credential nwtraders\administrator The credential dialog box appears and prompts you for the password: After the command has run on the remote computer, we use Remote Desktop to browse the registry as the administrator. As seen in the following image, the registry key has been changed: CS, that is all there is to adding a value to a registry key with the name of (Default). Would love to see how to read remote registry with alternate credentials. You can also set the default key with @=
http://blogs.technet.com/b/heyscriptingguy/archive/2009/11/30/hey-scripting-guy-november-30-2009.aspx
CC-MAIN-2015-32
refinedweb
1,015
63.59
Question has been asked that is similar but all post on here refer to replacing single characters. I'm trying to replace a whole word in a string. I've replaced it but I cant print it with spaces in between. Here is the function replace def replace(a, b, c): new = b.split() result = '' for x in new: if x == a: x = c result +=x print(' '.join(result)) replace('dogs', 'I like dogs', 'kelvin') i l i k e k e l v i n I like kelvin The issue here is that result is a string and when join is called it will take each character in result and join it on a space. Instead, use a list , append to it (it's also faster than using += on strings) and print it out by unpacking it. That is: def replace(a, b, c): new = b.split(' ') result = [] for x in new: if x == a: x = c result.append(x) print(*result) print(*result) will supply the elements of the result list as positional arguments to print which prints them out with a default white space separation. "I like dogs".replace("dogs", "kelvin") can of course be used here but I'm pretty sure that defeats the point.
https://codedump.io/share/dXH9wpwjOOVu/1/replacing-strings-not-characters-without-the-use-of-replace-and-joining-the-strings-the-characters
CC-MAIN-2017-39
refinedweb
209
80.51
can anyone explain to me what return does? I know that return(0) stops the program at any given point, but what does returning others do? can anyone explain to me what return does? I know that return(0) stops the program at any given point, but what does returning others do? Well, return is used to give the return value of a function. When you use return 0, that means that the program was successful. If you don't add a return statement, the compiler will automatically insert a return 0; in the program. OK, I'll try to explain this as simply and accurately as I can. First though, a note on functions: All functions must return something...Anything, even if that something is actually nothing (if a function returns nothing then you use the 'void' return type). Return doesn't end the program unless the call to return is made in the main function. A more accurate description of what the return keyword does is this: The return keyword exits the 'current function' (the function we're returning from), returning back up the call-stack to the 'calling function' (i.e. the code that called the 'current function' we're returning from). I'll try to explain this in more detail, but first take a look at this very quick code snippet as this will form the basis of my explanation: #include<iostream> // here's the declaration of a function which returns a boolean (true or false) value bool somefunction(); // Here's the main function, the entry point of the program // returns an int value int main() { if(somefunction()) return (0); // end the program normally else return(1); // end the program but flag an error condition } // Here's the definition of someFunction bool someFunction() { // we'll just return true here return true; } Now imagine this: From your OS, you compile and run the above program. The main() function is the entry point of the program, so when you run your program, your OS loads the program into memory and execution begins in main. The block of code beginning with the if statement in the first line of main says this: "IF we call somefunction() and its return value is true, THEN we end the program by returning the value 0. ELSE we end returning the value 1". So someFunction is called from the if statement in main. In other words, the code execution jumps from the if statement in main into the body of someFunction. Now we are inside someFunction, so in terms of my definition of the return keyword the 'current function' is someFunction and the 'calling function' is main(). SomeFunction returns 'true'. Now according to my definition of the return keyword, the code execution will return from the 'current function' (somefunction) to the 'calling function' (main) passing the value 'true'. So the return keyword in somefunction ultimately causes the code execution to jump back to the if statement in main. (as this is where somefunction was called from). So now we're back inside the if statement inside main, after the call to someFunction. Because the call to someFunction returned true, the condition of the if statement is met. So the program ends by returning the value 0 to the OS/shell. In terms of my definition of return, from the return in main, the 'current function' is main and the 'calling function' is the OS. So in other words, the execution of the program jumps out of the main function of the program, thus ending the program and handing control back to the OS, passing the OS the value 0. NOTE: In most, if not all OSes; if a program's main function returns an int, 0 indicates that program execution ended normally. Any other value indicates that the program execution ended abnormally. In other words an error occurred. Likewise if the main function of a program returns bool, then conventionally, false indicates that the program ended normally, whereas true indicates that the program ended because of an error condition. And I think that's about it. I've gone fairly in depth on the topic. Some of the terminology I've used might not be 100% accurate. But I 've explained the basic principle as best I can! To sum up: The return keyword tells your program to return from the 'current function' (or the function we're returning from) back up the callstack to the 'calling function' (the code that called the 'current function'). Also, any value specified after the return keyword is returned to the calling function. But only if it's type is the same as the functions return type. So if your function is supposed to return an int, then the value returned by the return keyword must be an int, otherwise your compiler will throw an error message when you compile! Hope that made sense and wasn't too complex! Cheers for now, Jas. I kind of understand, so for this example, what does return T do if its not in a main function? int Add(Time Total[],int MaxSize) { double Hrs, Mins; int T=0,SumMins=0; double SumHrs=0; cout << "Enter amount of hours:" << endl; for (T=1;T<MaxSize;T++){ cout << "Day " << T << " :" << endl; cout.width(10); cout << "Hours: "; cin >> Hrs; cout.width(10); cout << "Minutes: "; cin >>Mins; cin.ignore ( std::numeric_limits<std::streamsize>::max(), '\n' ); Total[T].SetHours(Hrs); Total[T].SetMinutes(Mins); SumHrs=SumHrs+Total[T].GetHours(); SumMins=SumMins+Total[T].GetMinutes(); } //Find Minutes remainder int TotalMins=SumMins%60; //Find Hours From Minutes int SumHM=SumMins/60; cout << endl; cout << "Total Hours: " << SumHrs+SumHM << "." << TotalMins << endl; cout << endl; cout << endl; return T; } I kind of understand, so for this example, what does return T do if its not in a main function? Well, lets look at that in terms of my definition of what return does: return exits the 'current function' and returns to the 'calling function' passing an appropriate value. In this case, the return is in the Add function, so the 'current function' is the Add function, the 'calling function' is whatever bit of code that called the Add() function and the value being returned is T. So at the return statement in the the Add function, code execution jumps back to whatever bit of code called the Add function and passes the value of T. Assuming that your function works (I haven't actually read through the rest of the code, I'm just gonna explain the return thing for now): If you had a void function called process that called the add function like this: void process() { // not sure how you're using the add function so apologies if this is wrong. // the main thing here is the actual function call.. const int maxSize=200; int result; Time myTime[200]; // This is the main thing we're interested in: result = Add(myTime, maxSize); return; } At the line where the Add function is called, the code is basically saying: "Call the Add function, passing myTime and maxSize and assign the returned value to the variable result" So execution jumps from the process() function, into your Add function. The two parameters are passed in and your add function is executed. When execution reaches the return statement, execution jumps back into the process() function passing the value of T. The returned value of T is then stored in the result variable declared in the process function(). Likewise, if you called Add from main: execution will jump from main, into the Add function. The function will be executed until the return keyword is reached. When execution hits the return keyword, the execution jumps back into main passing the value of T. Does that clear things up for you? Cheers for now, Jas. . >Likewise if the main function of a program returns boo The main function doesn't return bool, and suggesting that it does, or that the integer value returned represents a boolean value, is terribly confusing. . Fair point! I was probably oversimplifying my explanation there. But as ever, you've summed things up far more succinctly and accurately than my own fumbling attempt! Thank you Narue! :) >Likewise if the main function of a program returns boo The main function doesn't return bool, and suggesting that it does, or that the integer value returned represents a boolean value, is terribly confusing. I know that, but surpisingly I've seen a few odd programs (emphasis on the odd) that actually use the bool return type for main...Thankfully I don't think that this unusual practice is particularly widespread or particularly well advised for that matter. Certainly not an option I'd choose, but something I have seen used on my travels. So I thought I'd share just in case! I've only seen it a handful of times. The one time I came across it professionally, you can be sure that I changed the return type to int! Bool main?? Weird! I probably should have made a note of that in my post! Or better still, not posted the bit about bool main at all! heh heh! Cheers for now, Jas. >I've seen a few odd programs (emphasis on the odd) >that actually use the bool return type for main... Once again, it's a requirement that main return the int type. bool main() is just as wrong as void main() . Some people might use it, but that doesn't mean you should take it into consideration in your explanations unless you're correcting someone else. ;). ...
https://www.daniweb.com/programming/software-development/threads/247376/return-question
CC-MAIN-2017-26
refinedweb
1,596
61.26
MainMenu.xibfound? In Xcode, go to File > New > Project, select “Cocoa application”. Run it. You get an empty window, and a menu bar for it along the top. This menu bar contains a whole lot of functionality. Where did all this functionality come from?! In short: the default project gives you a MainMenu.xib and an AppDelegate.swift. The AppDelegate class has the @NSApplicationMain annotation, which causes your program to call the NSApplicationMain function, which in turn has some routines which load the MainMenu.xib file, which contains several hundred lines which describe the window, menu, and other things. But how does NSApplicationMain find your MainMenu.xib file, and what does it do with it then? The docs for NSApplicationMain are a bit vague: Creates the application, loads the main nib file from the application’s main bundle, and runs the application. First, how does NSApplicationMain find your MainMenu.xib file? The search procedure begins with key NSMainNibFile in Info.plist. If this key exists, it’s used to load the file. We can emulate this behavior in our main.swift: import Cocoa let myApp: NSApplication = NSApplication.shared() let mainBundle: Bundle = Bundle.main let mainNibFileBaseName: String = mainBundle.infoDictionary!["NSMainNibFile"] as! String mainBundle.loadNibNamed(mainNibFileBaseName, owner: myApp, topLevelObjects: nil) _ = NSApplicationMain(CommandLine.argc, CommandLine.unsafeArgv) However, if the NSMainNibFile key is not present, NSApplicationMain falls back the first nib/xib file which it happens to find in the main bundle. We can’t emulate this behavior easily. If you have nib/xib files in your bundle which you don’t want to use as the main nib, you should hide them away so that NSApplicationMain doesn’t find them. I wrote this because I'm making Vidrio this month. I need to understand Cocoa and Swift. Nib/xib files seem fairly fundamental to the Cocoa loading process. This post is my own, and not associated with my employer.Jim. Public speaking. Friends. Vidrio.
https://jameshfisher.github.io/2017/03/20/how-is-mainmenu-xib-loaded.html
CC-MAIN-2019-18
refinedweb
322
60.01
In this hack, we use the JMegaHal package to learn sentences and generate replies. We will then combine it with the PircBot package to create an IRC bot that can learn from other IRC users and hold wacky conversations with them. The JMegaHal package is supplied in a JAR file, so it is easy to integrate into any Java program, including an IRC bot. Make sure you import org.jibble.jmegahal.*; in any classes that will use it. JMegaHal hal = new JMegaHal( ); // Teach it some sentences. hal.add("Hello, my name is Paul."); hal.add("This is a sentence."); // Note that the more sentences you add, the more sense it will make... // Make JMegaHal generate a sentence. String sentence1 = hal.getSentence( ); // Get another sentence, with the word "jibble" in it (if possible). String sentence2 = hal.getSentence("jibble"); Notice the getSentence method can take a String as an optional argument. If you want to generate random sentences all about pie, then you might want to call hal.getSentence("pie"). This ensures that all generated sentences include the word "pie" somewhere. If JMegaHal has not yet learned anything about pie, it will choose some other random token. One thing to consider with a chat bot is telling it when to respond. If you make it respond to every message, the other users in the channel will very quickly get fed up with it and kick it out. One safe option is to make it respond only to messages that contain its nickname but to learn from all other messages. Combining JMegaHal and PircBot is very easy. If anybody utters the bot's nickname, it should respond with a randomly generated sentence from the JMegaHal object. All other messages that do not contain the bot's nickname will be used to teach it more sentences. In the following simple implementation, the bot is seeded with a starting sentence in the constructor, so it will always be able to respond with this if it has not learned anything else. Create a file called JMegaHalBot.java: import org.jibble.pircbot.*; import org.jibble.jmegahal.*; public class JMegaHalBot extends PircBot { private String name; private JMegaHal hal; public JMegaHalBot(String name) { setName(name); hal = new JMegaHal( ); // Add at least one sentence so the bot can always form a reply. hal.add("What is the weather like today?"); } public void onMessage(String channel, String sender, String login, String hostname, String message) { if (message.toLowerCase( ).indexOf(getNick( ).toLowerCase( )) >= 0) { // If the bot's nickname was mentioned, generate a random reply. String sentence = hal.getSentence( ); sendMessage(channel, sentence); } else { // Otherwise, make the bot learn the message. hal.add(message); } } } Now you just need to create a main method to construct the bot and tell it what nickname to use. The main method will also tell the bot which server to use and which channel to join. This bot can run in multiple channels, but bear in mind that sentences learned in one channel could end up appearing in other channels. Create the fileJMegaHalBotMain.java: public class JMegaHalBotMain { public static void main(String[] args) throws Exception { JMegaHalBot bot = new JMegaHalBot("Chatty");.
http://oreilly.com/pub/h/1994#code
crawl-003
refinedweb
520
66.54
I'm currently consulting for a small engineering firm that has been using a single server for all their domain needs and their management has got sick of not being able to access networked shares from the server when the ageing hardware fails I've procured two new HP ProLiant servers for file serving purposes running Server 2008 R2 Standard, and reconditioned their old hardware to 2 SAN boxes using FreeNAS. Originally while trialling a solution, I was running a DFS namespace with both servers as nodes. The primary server is connected to the SAN via iSCSI and then the replication service copies the files onto the secondary server (to local drives). Obviously it wouldn't be possible (or not recommended) to connect the second server to the same iSCSI volume as the primary server because of split brain etc. It seems pointless to use the SAN in the first place if the primary server replicates data onto the secondary servers local disks, and I can't share the iSCSI target to share the data between servers. I figured using a NAS protocol to connect the two servers to the FreeNAS box at the file level, and configure DFS on the servers to only use the secondary server if the primary server is unavailable. Instead of the servers seeing the drives as local disks (as they would using iSCSI), I was planning to map them as networked drives. Doing this at the file level seems to alleviate the issues with sharing an iSCSI target between two servers. I'm no stranger to SAN/NAS devices and FreeNAS in particular, but using it in a deployment like this is something new to me so I'm not entirely sure if this would work, or how it would perform etc. Is this the correct way to go about this? All the guides for this I can find on the internet are for Virtual Machines, and have some sort of failover manager, so I'm not too confident about following those ideas without having failover management. Thanks!
https://serverfault.com/questions/531079/sharing-storage-between-dfs-nodes-providing-failover-without-failover-manager
CC-MAIN-2021-21
refinedweb
344
57.44
This is seen on svn 137045. [regehr@babel tmp27]$ current-gcc -O -fwrapv small.c -o small [regehr@babel tmp27]$ ./small 0 [regehr@babel tmp27]$ current-gcc -Os -fwrapv small.c -o small [regehr@babel tmp27]$ ./small 2 [regehr@babel gcc-current]$ current-gcc -v Using built-in specs. Target: i686-pc-linux-gnu Configured with: ../configure --program-prefix=current- --enable-languages=c,c++ --prefix=/home/regehr Thread model: posix gcc version 4.4.0 20080623 (experimental) (GCC) #include <limits.h> #include <stdio.h> static inline int lshift_s_s(int left, int right) { if ((left < 0) || (right < 0) || (right >= sizeof(int)*CHAR_BIT) || (left > (INT_MAX >> right))) { /* Avoid undefined behavior. */ return left; } return left << right; } static inline unsigned int lshift_u_u(unsigned int left, unsigned int right) { if ((right >= sizeof(unsigned int)*CHAR_BIT) || (left > (UINT_MAX >> right))) { /* Avoid undefined behavior. */ return left; } return left << right; } static inline int rshift_s_u(int left, unsigned int right) { if ((left < 0) || (right >= sizeof(int)*CHAR_BIT)) { /* Avoid implementation-defined and undefined behavior. */ return left; } return left >> right; } int func_47(unsigned int p_48 ) ; int func_47(unsigned int p_48 ) { int tmp = lshift_u_u(p_48, p_48); int tmp___0 = lshift_s_s(tmp, 1); int tmp___1 = rshift_s_u(1 + p_48, tmp___0); return (tmp___1); } int main(void) { int x = func_47(1); printf("%d\n", x); return 0; } This worked with: Using built-in specs. Target: i386-apple-darwin8.11.1 Configured with: /Users/apinski/src/local/gcc/configure --prefix=/Users/apinski/local-gcc --disable-multilib Thread model: posix gcc version 4.4.0 20080622 (experimental) [trunk revision 137020] (GCC) Fails since 4.1.0, still broken on the trunk. We expand 25 return left << right; to sall %cl, %ecx but we initialized %ecx from movb %dl, %cl so later the comparison against zero fails due to the upper part of ecx being uninitialized. Actually this may be a reload/df problem -- in reload we have ;; bb 4 artificial_defs: { } ;; bb 4 artificial_uses: { u-1(6){ }u-1(7){ }} ;; lr in 1 [dx] 6 [bp] 7 [sp] 20 [frame] ;; lr use 1 [dx] 6 [bp] 7 [sp] ;; lr def 2 [cx] 17 [flags] ;; live in 1 [dx] 6 [bp] 7 [sp] 20 [frame] ;; live gen 2 [cx] ;; live kill 17 [flags] ;; Pred edge 3 [50.0%] (fallthru) (note:HI 14 13 15 4 [bb 4] NOTE_INSN_BASIC_BLOCK) (insn:HI 15 14 65 4 t.c:25 (parallel [ (set (reg/v:SI 2 cx [orig:58 p_48.12 ] [58]) (ashift:SI (reg/v:SI 2 cx [orig:58 p_48.12 ] [58]) (reg:QI 2 cx))) (clobber (reg:CC 17 flags)) ]) 459 {*ashlsi3_1} (nil)) but cx should be live in as it is used by the ashift. Wrong code on primary arch. Simplified testcase (fails at -Os -m32): /* PR target/36613 */ extern void abort (void); static inline int lshifts (int val, int cnt) { if (val < 0) return val; return val << cnt; } static inline unsigned int lshiftu (unsigned int val, unsigned int cnt) { if (cnt >= sizeof (unsigned int) * __CHAR_BIT__ || val > ((__INT_MAX__ * 2U) >> cnt)) return val; return val << cnt; } static inline int rshifts (int val, unsigned int cnt) { if (val < 0 || cnt >= sizeof (int) * __CHAR_BIT__) return val; return val >> cnt; } int foo (unsigned int val) { return rshifts (1 + val, lshifts (lshiftu (val, val), 1)); } int main (void) { if (foo (1) != 0) abort (); return 0; } Blaeh. The bug is in code that is there since the dawn of revision control, under a comment that starts with "... This is tricky ..." and ends with "I am not sure whether the algorithm here is always right ...". One thing after another. The problem is in insn 15 (with Jakubs testcase), that effectively is: p58 <- p63 << p63:QI It so happens that p63 is allocated to $edx and p58 to $ecx. This is all fine and would result in such insn: %ecx = %edx << %dl Then find_reloads comes and determines that the insn as is is invalid: 1) The output and first input have to match and 2) the second input needs to be in So, it generates the first reload to make op0 and op1 match. in = inreg = (reg:SI dx) out = outreg = (reg:SI cx) inmode = outmode = SImode class = GENERAL_REGS type = RELOAD_OTHER perfectly fine. find_reloads will also already set reg_rtx (i.e. the register that is used to carry out the reload) to (reg:SI cx), also quite fine and correct. Then it tries to generate another reload to fit the "c" constraint for operand 2: in = (subreg:QI (reg:SI dx)) out = 0 inmode = QImode class = CREG type = RELOAD_FOR_INPUT Fairly early push_reload will substitute in (still a SUBREG) with the real hardreg: in = (reg:QI dx). Then it tries to find a mergable reload, and indeed finds the first one. That is also correct, the classes overlap, the already assigned reg_rtx is member of that class (CREG) the types of reloads are okay and so on. Okay, so we don't generate a second reload for this operand, but simply reuse the first one, so we have to merge the info of this to-be reload with the one we have already. That for instance would widen the existing mode if our new reload would be wider. See push_reload around line 1369 for what exactly that merging does. Among the things it does, it also overwrites .in: if (in != 0) { ... rld[i].in = in; rld[i].in_reg = in_reg; } This might look strange, but in itself is okay. It looks strange, because our reload now looks like: in = inreg = (reg:QI dx) out = outreg = (reg:SI cx) inmode = outmode = SImode class = CREG type = RELOAD_OTHER reg_rtx = (reg:SI cx) Note in particular that the .in reg is a QImode register, while the inmode (and outmode) are still SImode. This in itself is still not incorrect, if this reload would be emitted as a SImode register move (from dx to cx), as requested by the modes of this reload all would work. I.e. the .in (and hence .out) registers are not to be used as real register references, but merely as container for the hardreg _numbers_ we want to use. Now, if we're going to emit this reload we go over do_input_reload, and right at the beginning we have this: [ here old = rl->in; ] if (old && reg_rtx) { enum machine_mode mode; /* Determine the mode to reload in. This is very tricky because we have three to choose from. ... I am not sure whether the algorithm here is always right, but it does the right things in those cases. */ mode = GET_MODE (old); if (mode == VOIDmode) mode = rl->inmode; So, it determines the mode _from the .in member_ by default, and only uses the mode from the reload when that one is a constant. Argh! This means we are now emitting a QImode move only loading %cl from %dl, whereas we would need to load all of %ecx from %edx. We emit this wrong reload insn, which then later get's removed because we already have another reload %cl <- %dl in the basic block before this one, which we are inheriting then. This just confuses the analysis somewhat, but the problem really is this too narrow reload. This code originally comes from do_input_reload, which in turn has it from emit_reload_insns, and had this comment already in r120 of reload1.c . I see basically two ways to fix this: 1) make push_reload "merge" also the .in member, instead of just overwriting it. By merging I mean that if rld.in and in are both registers (in which case they have the same numbers already, as that is checked by find_reusable_reload) that leave in the larger of the two. Same for .out. 2) Change do_input_reload (and also do_output_reload) to listen to inmode and outmode first, before looking at the mode of .in. The comment above this looks scary and lists some problematic cases, but given it's age I'm not at all sure if these indeed are still a problem. I actually think that inmode and outmode are currently the most precise descriptions of what this reload is about, unlike .in and .out whose significance lies more or less in the reg number only. (2) might be the harder change, as it would require also reinterpreting anything in .in or .out in the mode, in case they don't match. I checked that (1) fixes the testcase, maybe I'm regstrapping that, it's the simpler change. Closing 4.1 branch. Michael, any news? I submitted a patch here: but got no feedback or review. I'll have a look tomorrow ... Subject: Bug 36613 Author: matz Date: Wed Aug 6 15:34:45 2008 New Revision: 138807 URL: Log: PR target/36613 * reload.c (push_reload): Merge in,out,in_reg,out_reg members for reused reload, instead of overwriting them. * gcc.target/i386/pr36613.c: New testcase. Added: trunk/gcc/testsuite/gcc.target/i386/pr36613.c Modified: trunk/gcc/ChangeLog trunk/gcc/reload.c trunk/gcc/testsuite/ChangeLog Do you plan to commit this to 4.3 as well? Subject: Re: [4.2/4.3 Regression] likely codegen bug On Mon, 11 Aug 2008, jakub at gcc dot gnu dot org wrote: > ------- Comment #13 from jakub at gcc dot gnu dot org 2008-08-11 08:12 ------- > Do you plan to commit this to 4.3 as well? Ulrich asked for some time on the trunk (we have built all of our packages against a patched 4.3 tree now with no appearant problems as well). Richard. (In reply to comment #14) > Ulrich asked for some time on the trunk (we have built all of our > packages against a patched 4.3 tree now with no appearant problems as > well). OK, in that case I have no further concern. I'll leave it up to you as release manager to decide when you want it to go into 4.3 ... committed as r138955 into 4_3 branch. Subject: Bug 36613 Author: matz Date: Mon Aug 11 16:22:00 2008 New Revision: 138955 URL: Log: PR target/36613 * reload.c (push_reload): Merge in,out,in_reg,out_reg members for reused reload, instead of overwriting them. * gcc.target/i386/pr36613.c: New testcase. Added: branches/gcc-4_3-branch/gcc/testsuite/gcc.target/i386/pr36613.c Modified: branches/gcc-4_3-branch/gcc/ChangeLog branches/gcc-4_3-branch/gcc/reload.c branches/gcc-4_3-branch/gcc/testsuite/ChangeLog
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=36613
CC-MAIN-2021-10
refinedweb
1,706
71.85
Before I get into the subject of today’s fabulous adventure, I want to congratulate the whole rest of Developer Division on the tremendously exciting product that we are formally launching today. (I’ve done very little actual coding on Visual Studio 2012 and C# 5.0, being busy with the long-lead Roslyn project.) Asynchronous programming support in C# and VB is of course my favourite feature; there are far, far too many new features to mention here. Check out the launch site, and please do send us constructive feedback on what you do and do not like. We cannot respond in detail to all of it, but it is all appreciated. Returning now to the subject we started discussing last time on FAIC: sometimes the compiler can know via “static” analysis (that is, analysis done knowing only the compile-time types of expressions, rather than knowing their possibly more specific run-time types) that an “is” operator, (*) and therefore we add a few additional caveats: - T may not be a pointer type - x may not be a lambda or anonymous method - if x is classified as a method group or is the null literal (**) then the result is false - if the runtime type of x is a reference type and its value is a null reference then the result is false - if the runtime type of x is a nullable value type and the HasValue property of its value is false then the result is false - if the runtime type of x is a nullable value type and the HasValue property of its value is true then the result is computed as though you were checking, another aside!. We produce. Where is that line? The heuristic we actually use to determine whether or not to report a warning when we detect that there is no compile-time conversion from x to T is as follows: - If neither the compile time type of x nor the type T is a value type and T is a class type then we know that the result will always be false (***). (This is case M10 above.) We give a warning. - Otherwise, give up on further analysis and do not produce a warning. This is far from perfect, but it is definitely in the “good enough” bucket. And “good enough” is, by definition, good enough. Of course, this heuristic is subject to change without notice, should we discover that some real user-impacting scenario motivates changing it. (*) And not to determine the answers to other interesting questions, like “is there any way to associate a value of type T with this value?” or “can the value x be legally assigned to a variable of type T?” (**) There are some small spec holes here and around these holes are minor inconsistencies between the C# 5 compiler and Roslyn. We do not say what to do if the expression x is a namespace or a type; those are of course valid expressions. does say that “x is T” is an error if T is a static type; the C# 5 compiler erroneously allows this and produces false whereas the Roslyn compiler produces an error. (***) This is a lie; there is one? In response to the question in your footnotes, one program in which a warning would be suppressed would be: static bool M<T,Z>( T x ) where T : struct where Z : class { return x is Z; // no warning here, even though (Z)x is illegal. } With the definition of M<,>() above, the following code will be true: bool isZ = M<int,object>(42); Assert.IsTrue( isZ ); // passes Awesome post as always. As a side note, could you provide any timeline for Roslyn suport in visual studio 2012 RTM? At last we have an answer to the Clintonian question of what the meaning of the word "is" is. (***) static bool M10E<X>(X x) where X : struct { return x is System.Enum; } The table in ECMA-335, 6th Edition, §II.10.1.7 "Generic Parameters (GenPars)" is somewhat related. The Enum case is the one I had in mind. Nice! — Eric Trying to grasp the deeper meaning of your article, I became a bit wondered about the decision on when the generate warnings. Your criterium nr 3 for generating warnings seems not to address a very common case where warnings (and errors) are actively used by a programmer, or to say the least, by me. The most common thing we do to software is not so much create it, but change it. A change in one part of a program might very well create an obvious bug somewhere else. Obvious, but unseen. These cases fail the nr 3 criterium, but if i could pick only one situation for producing warnings, this would be it.
https://blogs.msdn.microsoft.com/ericlippert/2012/09/12/static-analysis-of-is/
CC-MAIN-2017-04
refinedweb
799
66.47
Steve Best from IBM, for JFS Steve Best: Yes, We did our 1.0.0 release which was production level ready on 6/28/01. 2. What are its biggest differences (good or bad) when compared to ReiserFS and XFS? Steve Best: Feature wise Juan I. Santos Florido, did a very good article comparing the journaling file system being developed for Linux. The article is called "Journal File Systems" was published by the Linux Gazette on 7/2000. Juan used some of the information for his article about JFS by viewing a white paper that Dave Kleikamp and I have wrote describing the layout structures that JFS uses. 3. What are the differences between the Linux version of JFS and the one found on OS/2? Steve Best:. The design goals were to use the proven Journaling File System technology that we had developed for 10+ years in AIX and expand on that in the following areas: Performance, Robustness, and SMP support. Some of the team members for the original JFS designed/developed this File System. So this source base for JFS for Linux is now on the following other Operating Systems: OS/2 Warp Server for e-business 4/99 OS/2 Warp Client (fixpack 10/00) AIX 5L called JFS2 4/01 4. Has JFS made its way to be included as an option on the Linux kernel? Steve Best: Not yet, Our plan is to submit a patch for the 2.5 kernel tree when that opens up and then back port it to the 2.4.x series of the kernel. This plan might change if the 2.5 development tree for the kernel doesn't open up soon. JFS is in the process of being included in several different Linux distributions right now and you will see it being shipped shortly in the next releases that come out in the September 2001 time frame. The ones I know about right now are Mandrake, SuSE, TurboLinux. 5. What is the fragmentation policy of JFS and how is it dealing with it? Steve Best: JFS uses extents, so that reduces the fragmentation of the file system. We do have defragmentation utility that will do defragmentation on the file system. 6. Does JFS has support or plans to support arbiraty meta-data (multiple-stream attributes as some call them)? Steve Best: The source of the OS/2 has extend attributes which I believe are similar to multiple-stream attributes. We still need to move that support over. 7. One of the qualities found on BeOS's BFS is "live queries". Live Queries is a query that sends (automatically) deltas when the result changes. They stay "open" after the first returned result row. Is there a plan for such a support on JFS? Steve Best: I'm not totally familiar with this BeOS's "live queries", but I think this is similar to a change notification mechanism that a file system manager would use to say I want to be notified if a file has been deleted, so if the file system manager was displaying that sub dir it could remove the icon of that file? This type of mechanism would need to be supported by both the virtual file system layer and the file system. Currently I don't believe Linux supports this. JFS did support this type of mechanism in OS/2 so it won't be that hard to add it to JFS if and when Linux would support this. 8. Which part of JFS you would like to change, update, extend or even re-design? What are the future plans for JFS? Steve Best: The design for JFS is proven now on several different operating systems and I currently see no major change in this design. We still have performance tuning to do for JFS for Linux and this will be one of our major areas that we will work on in the coming months. One good part of working in the file system area of Linux right now is that there are 4 journaling file systems being actively developed and all are GPL, so it is possible that each file system can improve by sharing the best design points. The JFS team has worked together for several years and started the port of JFS to Linux in December of 1999. We took the approach of release early and often. Our first release was done on 2/2/2000 and we have done 41 releases so far. In general we do a new release about every 2 weeks and if possible provide patches to problems as they are fixed. We still have new features to add to JFS and will continually enhance this file system. Some of the goals of the JFS team are to have JFS run on all architectures that Linux supports. With the help of the Linux community JFS has run on (ia32, ia64, PowerPC 32 and 64, Alpha, s/390 31 and 64 bit, ARM). The community members are helping us debug some problems that JFS has on SPARC and PA-RISC, so we should have this architectures running shortly. JFS has no architecture specific code included in it. Another goal is have JFS included in the kernel.org source tree. When we started the port the team decided that we weren't going to change any of the kernel and this would allow JFS to be easily integrated into the kernel. Hans Reiser from NameSys, for ReiserFS Hans Reiser: We are the preferred filesystem for SuSE. I have been told by the author of LVM that 90% of his users on his mailing list use ReiserFS. This surprised me, but he was sure it was correct. LVM users are of course not representative of FS users as a whole, as LVM users tend to be sysadmins of large machines who tend to need journaling. If you need journaling, we are the most stable FS available. We have many more users in Germany, where SuSE is dominant, than anywhere else. All filesystems and all OSes have bugs. At this point your chances of hitting a software bug in ReiserFS are far less than your chances of bad disk, bad ram, bad cpu, or bad controller corrupting your data. We have patches available on our website that hopefully Linus will put in the main kernel when he gets back from vacation, but I have to admit that I haven't bothered to put them on my laptop yet :-). We have no outstanding unsolved bugs at this moment, and while I am sure that there are still a few in there somewhere, ReiserFS is getting really quite stable when we can go for weeks with no new bug reports with the number of users we have. I think that we are now finally even a little more stable in 2.4 than we were in 2.2, and whether a ReiserFS user should use 2.4 or 2.2 is not dependent on ReiserFS anymore. As you probably know, there are other layers (VFS, MM, etc.) that are less stable, but I think that things will settle down real soon now. VFS seems to have finally gotten stable in just the last few weeks, and I am sure that the memory management layer will get fixed real fast once Linus is back from vacation in Finland. We have minor feature improvements (relocatable journal, bad block handling, etc.) that are waiting for 2.5.1, and are in Alan Cox's tree while we wait. I suppose it is possible that they could end up in 2.4 before 2.5.1 if 2.5.1 is delayed long enough. To my surprise they are not generating bug reports from the -ac series users. I am not sure whether I should conclude from this that they are bugfree, there may be a lot less -ac users than Linux users generally. With regards to particular unnamed distros.... :-).... Stability is not the issue, ReiserFS is known to be stable by the people who use it. SuSE is known to worry more about stability much more than the unnamed untrusting distros you mention (think of how SuSE waited for 2.4.4 before shipping 2.4 as the default, think gcc...), and we are the SuSE default. I used to think that it was politics that was the reason why positions in discussions of ReiserFS on linux-kernel prior to our acceptance by Linus are predictable by what distro the poster works at, but more and more I am coming to see that the difference is one of style, and that what style the developer embraces is semi-predictable by distro. Different people adopt change at different rates. ReiserFS has at its heart some of the same lust for change that BeFS has. You probably don't realize how scary it is to most old time Unix filesytem developers to talk about adding new semantics to the filesystem namespace like we describe at here, or here, or like BeFS has already done. What many distros want in Linux is simply what Unix has, but free, and nothing much more. SuSE has an exceptional head of R&D, Markus Rex, who understands the deep things before they are something real yet. They then combine this with a quality assurance team lead by Hubert Mantel, that is also exceptional in the industry. The result is that with SuSE you tend to get cutting edge technology that works. I think it is in part because they are so fanatical about quality assurance, and good at it, that they have the confidence to adopt change a bit earlier than others who get burned just changing the compiler for an unchanging language. Ok, ok, you can tell which style I like, but we always have to be careful to not disrespect the other styles. They also work, and have different advantages. Linux could not have developed as fast as it did without the folks who just copied what worked in Unix, and did a damned good job of making it work in Linux. The beauty of Linux is that users can choose the distro that matches their particular style. 2. What are its biggest differences (good or bad) when compared to JFS and XFS? Hans Reiser: JFS provides an easy migration path for IBM's current JFS users who are seeking to migrate to Linux. I think this is the primary objective of the JFS project. It has decent performance, there is nothing bad about JFS, but you should look at the benchmarks before using it for non-migration purposes. XFS is an excellent file system, and there is an important area where XFS is higher performance than we are. ReiserFS uses the 2.4 generic read and generic write code. Using this made for a better integration into 2.4, but it was a performance mistake for large files. ReiserFS does a complete tree traversal for every 4k block it writes, and then it inserts one pointer at a time into the tree, which means that every 4k write incurs the overhead of a balancing of the tree (which means it moves data around. For this reason, XFS has better very large file performance. However, they are a bit slow with regards to medium sized and small files. It seems that Chris Mason implemented an exceptionally good journaling implementation for ReiserFS, with much less overhead than other journaling implementations. He likes to say that there is nothing innovative or interesting about his code, but.... he avoided all of the usual performance mistakes in implementing journaling, and I think that is a form of innovation:-).... XFS is slower than reiserfs for the typical file size distributions on "typical" file systems, and I encourage you to examine our benchmarks, where you will see that they are faster for very large file writes, and slower for typical file sizes. The benchmarks provide a lot more details on this. The upshot is that whether you should use XFS or ReiserFS depends on what you want to use it for. If you want the most widely tested journaling file system for use with "typical" file sizes, then use ReiserFS. If you want to stream multi-media data for Hollywood style applications, or use ACLs now rather than wait for Reiser4, you might want to use XFS. This is going to change. XFS is going to go into Linux 2.5/2.6 (they make changes to the kernel that are considered 2.5 material, and thus are not in 2.4), and I just bet you that by 2.6 they will have improved their "typical" file size performance by the time 2.6 ships. You can be 100% sure that Reiser4's large file performance will be far faster. We are writing the new code now, and, well, I like it....;-) 2.6 is far enough away that you are seeing the first lap of a race to good performance by XFS and ReiserFS, it is too early for the users to know how much our large file performance will increase, and how much their small file performance will increase. I would say that we were lucky to make the code cut-off point for 2.4, except that my guys worked every weekend for 6 months getting the 2.4 port done so that when Linus announced code freeze we could send him a patch immediately. Our performance is better than ext3 () because of the great job Chris did with journaling, but ext3 is written by excellent programmers who do good work. 3. What is the fragmentation policy of ReiserFS and how is it dealing with it? Hans Reiser: This is an area we are still experimenting with. We currently do what ext2 does, and preallocate blocks. What XFS does is much better, they allocate blocknrs to blocks at the time they are flushed to disk, and this allows a much more efficient and optimal allocation to occur. We knew we couldn't do it the XFS way and make code freeze for 2.4, but reiser4 is being built around delayed allocation, and I'd like to thank the XFS developers for taking the time to personally explain to me why delayed allocation is the way to go. 4. Does ReiserFS has support or plans to support arbiraty meta-data (multiple-stream attributes as some call them)? Hans Reiser: Yes, these are what we are doing for DARPA in Reiser4. 5. One of the qualities found on BeOS's BFS is "live queries". Live Queries is a query that sends (automatically) deltas when the result changes. They stay "open" after the first returned result row. Is there a plan for such a support on ReiserFS? Hans Reiser: Someday we should implement it, but it will be post version 4. 6. Which part of ReiserFS you would like to change, update, extend or even re-design? What are the future plans for ReiserFS? Hans Reiser: For version 4 we are gutting the core balancing code, and implementing plugins. We think plugins can do for filesystems what they did for photoshop. We are making it easy to add new types of security attributes to files. We are implementing ACLs, auditing, and encryption on commit, as example security plugins. We are moving from "Balanced Trees" to "Dancing Trees". We support inheritance of file bodies and stat data. This page describes this in detail. These are features we will deliver by Sep. 30 of next year. In the long term, we very much share the BeFS vision of enhancing the file system namespace semantics. We think we have some theoretical advantages in our semantics, you can see our semantics in detail here, but I think a lot of what the BeFS authors have done. I would be curious to hear your personal experiences as to what works and does not work. I understand that they had some performance problems that required them to make some design sacrifices. (You probably know a lot more about this than I, please cut these words if I am wrong.) I think we will have performance that will make it unnecessary to sacrifice such semantic design elegance, but we will do this at the cost of getting our semantic innovations to users later than BeFS did. Right now we only offer a high-performance traditional Unix file system. This is not our goal, adding search engine and database semantic features into the FS namespace is our goal, but we wanted to get good performance for traditional file system usage first before adding the database functionality, and now we find that performance is interesting also, and so it distracts us, and.... version 4 will have some semantic innovations, but most of what we discuss in the whitepaper will wait for a version after that. I can say though that we have laid a nice foundation for that future work. Nathan Scott from SGI, for XFS Nathan Scott: The current stable version of XFS is 1.0.1. There are many people we know of using XFS in production environments on Linux today, so yes, it is production ready. There were some good examples on the XFS list just yesterday - see the "production server" thread - here's one very positive quote, for example: "I certainly think so; XFS runs on my Compaq/Red Hat 6.1 server with a 30 GB, 7200 RPM IDE drive. The server is a file/web/ streaming media server; it runs just fine. Fast as hell with Samba, and the ACL support is great. I use a CVS version from a while back, and that is stable as hell. Never crashed, as a matter of fact, no problems whatsoever."It's very rewarding as a developer to read this sort of stuff! 2. What are its biggest differences (good or bad) when compared to ReiserFS and JFS? Nathan Scott: Each filesystem will offer a different set of features and different performance characteristics, so naturally one should always choose the filesystem most appropriate for their specific workload. I'm not deeply familiar with the implementations of these other filesystems so can't really provide a good contrast. Some of the features of XFS which other filesystems often do not have would be: - Direct I/OXFS also has a fairly extensive set of userspace tools - for dumping, restoring, repairing, growing, snapshotting, tools for using ACLs and disk quotas, and a number of other utilities. We were fortunate to have had XFS around for a number of years on IRIX when the Linux port began, so these tools (and indeed the core XFS kernel code) have been extensively used in the field on IRIX and are very mature. - Fast recovery after unplanned shutdown - Extent-based space management with either bulk preallocation or delayed allocation; this maximizes performance and minimizes fragmentation - Journalled quota subsystem - Extended attributes - Access Control Lists (integrated with the latest Samba too) - XDSM (DMAPI) support for Hierarchical Storage Management - Scalability (64 bit filesystem, internal data structures and algorithms chosen to scale well) 3. What are the differences between the Linux version of XFS and the one found on Irix? Nathan Scott: In the original IRIX implementation, the buffer cache was extensively modified to optimally support various features of XFS, in particular for its ability to do journalling and to perform delayed allocation. This has become a fairly complex chunk of code on IRIX and is very IRIX-centric, so in porting to Linux this interface was redesigned and rewritten from scratch. The Linux "pagebuf" module is the result of this - it provides the interface between XFS and the virtual memory subsystem and also between XFS and the Linux block device layer. On IRIX XFS supports per-user and per-project disk quota ("projects" are an accounting feature of IRIX). On Linux we had to make some code changes to support per-group instead of per-project quota, as this is the way quota are implemented in Linux filesystems like ext2 (and Linux has no equivalent concept to "projects"). There are some more esoteric features of XFS on IRIX that provide file system services customized for specific demanding applications (e.g. real-time video serving), and these have not been ported to the Linux version so far. 4. Has XFS made its way to be included as an option on the Linux kernel? Nathan Scott: No, not at this stage. Although this has been our long-stated goal and we are continually working towards inclusion of XFS in both Linus' tree and the standard Linux distributions. 5. What is the fragmentation policy of XFS and how is it dealing with it? Nathan Scott: XFS was designed to allocate space as contiguously as possible.). The ability for XFS to _not_ fragment the filesystem is such that XFS on IRIX survived for many years without any filesystem reorganisation tool, but eventually the need surfaced in a specific application environment, and a defragmenter was developed. This tool attempts to redo the allocation for a file and if a more optimal block map can be created, switches the file over to using that block map instead. 6. Does XFS have support or plans to support arbitrary meta-data (multiple-stream attributes as some call them)? Nathan Scott: XFS supports "extended attributes" - arbitrary, small key:value pairs which can be associated with individual inodes and which are treated as filesystem metadata. These are used to store ACLs, MAC labels (on IRIX), DMAPI data, etc, and user-defined attributes. In practice they are intended for use to augment the metadata associated with an inode, rather than the more exotic uses that the "non-data fork" is designed for in some filesystems. Multiple data streams within a single file is something else entirely, and XFS does not support that concept. 7. One of the qualities found on BeOS's BFS is "live queries". Live Queries is a query that sends (automatically) deltas when the result changes. They stay "open" after the first returned result row. Is there a plan for such a support on XFS? Nathan Scott: No, XFS does not support anything like this, and I'm not aware of any plans to implement such a feature. 8. Which part of XFS you would like to change, update, extend or even re-design? What are the future plans for XFS? Nathan Scott: There are a number of areas of development in XFS, on both IRIX and Linux. I'll talk about Linux only here. On Linux we are actively working on the IA64 port - the intention is for the 1.0.2 release to support that architecture. There are folk in the open source community working on ports to a variety of other architectures as well (in particular, the PowerPC and Alpha porters and seem to have had a great deal of success). Work is ongoing in the "pagebuf" code which I talked about before - it currently imposes the requirement that the filesystem block size be equal to the pagesize, and that restriction is going to be eased somewhat in a future release. In the longer term, there is plenty of other work planned too - as one example, there is some work earmarked for the write code path in order to do multi-threaded direct IO writes. Obviously, being included in Linus' kernel is an important goal. I know of several distributions that are currently working on supporting XFS natively - Suse employees post to the XFS mailing list quite frequently; Andi Kleen from Suse in particular has been very active in stress testing XFS and fixing problems, and Jan Kara (also from Suse) was instrumental in helping us get the journalled XFS quota support into the base Linux quota tools. Also, the next stable Debian release will contain an XFS kernel patch and the XFS userspace tools. So, there is still plenty of work to be done, but at this stage I'd say we now have a very stable foundation and a good base for moving forward.
http://www.osnews.com/story/69/Interview-With-the-People-Behind-JFS-ReiserFS-and-XFS/
crawl-003
refinedweb
4,021
70.02
Get boolean values from environment variables. Project Description Release History Download Files Get boolean values from environment variables in Python. from env_flag import env_flag # When unset, default to `False`. debug = not env_flag('PRODUCTION') # When unset, use explicit default. is_local = get_bool('IS_LOCAL', default=True) Values are coerced as follows: - When the variable is unset, or set to the empty string, return default. - When the variable is set to a truthy value, return True. These are the truthy values: - 1 - true, yes, on - When the variable is set to the anything else, return False. Example falsy values: - 0 - no - Ignore case and leading/trailing whitespace. Development pip install -r requirements_dev.txt rake test rake lint Contribute - Issue Tracker: github.com/bodylabs/env-flag/issues - Source Code: github.com/bodylabs/env-flag Pull requests welcome! Support If you are having issues, please let us know. License The project is licensed under the two-clause BSD license. Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/env-flag/
CC-MAIN-2017-34
refinedweb
174
61.43
System that I updated from OpenCV-3.1-android-sdk to OpenCV-4.5.0-android-sdk I cannot find a function called cvDiv and a class called cvMat. Can you tell me the equivalent ? I am using these includes: #include <opencv2/core/core.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/highgui/highgui.hpp> In my cpp file was using it like this: CvMat src = srcImg; CvMat mask = CannyImg; cvDiv(&src, &mask, &mask, 256); My partial solution: #include <opencv2/core/core_c.h> -> for cvDiv But what about cvMat. If I just changed it to Mat cvDiv will complain that it cannot handle that class
https://answers.opencv.org/questions/237974/revisions/
CC-MAIN-2022-40
refinedweb
105
63.05
Next Chapter: Binning Data Dealing with NaN Introduction NaN was introduced, at least officially, by the IEEE Standard for Floating-Point Arithmetic (IEEE 754). It is a technical standard for floating-point computation established in 1985 - many years before Python was invented, and even a longer time befor Pandas was created - by the Institute of Electrical and Electronics Engineers (IEEE). It was introduced to solve problems found in many floating point implementations that made them difficult to use reliably and portably. This standard added NaN to the arithmetic formats: "arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special 'not a number' values (NaNs)" 'nan' in Python Python knows NaN values as well. We can create it with "float": n1 = float("nan") n2 = float("Nan") n3 = float("NaN") n4 = float("NAN") print(n1, n2, n3, n4) nan nan nan nan "nan" is also part of the math module since Python 3.5: import math n1 = math.nan print(n1) print(math.isnan(n1)) nan True Warning: Do not perform comparison between "NaN" values or "Nan" values and regular numbers. A simple or simplified reasoning is this: Two things are "not a number", so they can be anything but most probably not the same. Above all there is no way of ordering NaNs: print(n1 == n2) print(n1 == 0) print(n1 == 100) print(n2 < 0) False False False False NaN in Pandas Example without NaNs Before we will work with NaN data, we will process a file without any NaN values. The data file temperatures.csv contains the temperature data of six sensors taken every 15 minuts between 6:00 to 19.15 o'clock. Reading in the data file can be done with the read_csv function: import pandas as pd df = pd.read_csv("data1/temperatures.csv", sep=";", decimal=",") df.loc[:3] We want to calculate the avarage temperatures per measuring point over all the sensors. We can use the DataFrame method 'mean'. If we use 'mean' without parameters it will sum up the sensor columns, which isn't what we want, but it may be interesting as well: df.mean() sensor1 19.775926 sensor2 19.757407 sensor3 19.840741 sensor4 20.187037 sensor5 19.181481 sensor6 19.437037 dtype: float64 average_temp_series = df.mean(axis=1) print(average_temp_series[:8]) 0 13.933333 1 14.533333 2 14.666667 3 14.900000 4 15.083333 5 15.116667 6 15.283333 7 15.116667 dtype: float64 sensors = df.columns.values[1:] # all columns except the time column will be removed: df = df.drop(sensors, axis=1) print(df[:5]) time 0 06:00:00 1 06:15:00 2 06:30:00 3 06:45:00 4 07:00:00 We will assign now the average temperature values as a new column 'temperature': # best practice: df = df.assign(temperature=average_temp_series) # inplace option not available # alternatively: #df.loc[:,"temperature"] = average_temp_series df[:3] Example with NaNs We will use now a data file similar to the previous temperature csv, but this time we will have to cope with NaN data, when the sensors malfunctioned. We will create a temperature DataFrame, in which some data is not defined, i.e. NaN. We will use and change the data from the the temperatures.csv file: temp_df = pd.read_csv("data1/temperatures.csv", sep=";", index_col=0, decimal=",") We will randomly assign some NaN values into the data frame. For this purpose, we will use the where method from DataFrame. If we apply where to a DataFrame object df, i.e. df.where(cond, other_df), it will return an object of same shape as df and whose corresponding entries are from df where the corresponding element of cond is True and otherwise are taken from other_df. Before we continue with our task, we will demonstrate the way of working of where with some simple examples: s = pd.Series(range(5)) s.where(s > 0) import numpy as np A = np.random.randint(1, 30, (4, 2)) df = pd.DataFrame(A, columns=['Foo', 'Bar']) m = df % 2 == 0 df.where(m, -df, inplace=True) df For our task, we need to create a DataFrame 'nan_df', which consists purely of NaN values and has the same shape as our temperature DataFrame 'temp_df'. We will use this DataFrame in 'where'. We also need a DataFrame with the conditions "df_bool" as True values. For this purpose we will create a DataFrame with random values between 0 and 1 and by applying 'random_df < 0.8' we get the df_bool DataFrame, in which about 20 % of the values will be True: random_df = pd.DataFrame(np.random.random(size=temp_df.shape), columns=temp_df.columns.values, index=temp_df.index) nan_df = pd.DataFrame(np.nan, columns=temp_df.columns.values, index=temp_df.index) df_bool = random_df<0.8 df_bool[:5] Finally, we have everything toghether to create our DataFrame with distrubed measurements: disturbed_data = temp_df.where(df_bool, nan_df) disturbed_data.to_csv("data1/temperatures_with_NaN.csv") disturbed_data[:10] df = disturbed_data.dropna() df 'dropna' can also be used to drop all columns in which some values are NaN. This can be achieved by assigning 1 to the axis parameter. The default value is False, as we have seen in our previous example. As every column from our sensors contain NaN values, they will all disappear: df = disturbed_data.dropna(axis=1) df[:5] Let us change our task: We only want to get rid of all the rows, which contain more than one NaN value. The parameter 'thresh' is ideal for this task. It can be set to the minimum number. 'thresh' is set to an integer value, which defines the minimum number of non-NaN values. We have six temperature values in every row. Setting 'thresh' to 5 makes sure that we will have at least 5 valid floats in every remaining row: cleansed_df = disturbed_data.dropna(thresh=5, axis=0) cleansed_df[:7] Now we will calculate the mean values again, but this time on the DataFrame 'cleansed_df', i.e. where we have taken out all the rows, where more than one NaN value occurred. average_temp_series = cleansed_df.mean(axis=1) sensors = cleansed_df.columns.values df = cleansed_df.drop(sensors, axis=1) # best practice: df = df.assign(temperature=average_temp_series) # inplace option not available df[:6] Next Chapter: Binning Data
https://python-course.eu/dealing_with_NaN_in_python.php
CC-MAIN-2020-29
refinedweb
1,045
67.65
Hey, this bug happens on both- Android and iOS for me. When creating a clean project (shared codebase, xamarin.forms) the live player functionality is working great. As soon as I try to add (in my case) the references to Microsoft's App Center with "using Microsoft.AppCenter;" the build fails throwing: The type or namespace name 'AppCenter' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?) The code recognizes the reference and a local debug build is successful. Have you tried to stop and re-start the live player app after adding nugets? Yes, I also restarted the computer and reloaded the projects. I will try to install and uninstall the packages again. Ok so this still doesn't work after reinstalling the packages. Is this error reproducible? Ahh this has been reported with AppCenter NuGets. We have a fix in place currently and will be pushing out a new build soon. Hi @JamesMontemagno Is this fix published? I'm trying to add this nuget to a new project an there's no .AppCenter here. I don't know if I'm missing something. Thanks. After package update 1.12.0 I'm getting error about import package: using Microsoft.AppCenter; using Microsoft.AppCenter.Analytics; using Microsoft.AppCenter.Crashes; Better to use the lower version, it seems like Microsoft issue.
https://forums.xamarin.com/discussion/comment/324977
CC-MAIN-2019-13
refinedweb
225
61.43
I use fckeditor control to add fckeditor on the page. But I can not upload images to userfile folder.the upload page always have error message.xml parser error .why Thanks for help Mark But I can not upload images to userfile folder.the upload page always have error message.xml parser error .why Thanks for help Mark I kept getting a server error 500, but nothing else on the page was breaking, and everything in the installation/configuration seemed fine. (I ended up re-installing everything a couple of times...) So I finally ran the test page and got a yellow screen for a web.config error. It seems that I added System.String in the namespaces area, but the error reported that System.String is not a valid namespace (yeah - I knew that... but put it in for some unknown reason). So I took out System.String and tried the test again: all was fine. I went back to my original installation location and everything went fine there too. Oddly enough, .NET didn't bark at me for the namespace issue. One note - in case you get to this point and uploads still aren't working: instead of using the Upload tab on the image pop-up box,try staying on the first tab, click "Browse Server" and use the upload functionality on that larger pop-up. Let us know if something works or doesn't work. Best of luck to you and best regards - - Chazzz My problem is now with inserting an image into the editor. The source code is pointing to the correct file path but only placeholder shows not the image itself. I think I did all the necessary amendments mentioned in the documentation but to no avail. Here is the source code of an image I've tried to insert: ><img class="" alt="" src="~/UserFiles/Image/Stu_portals.jpg" /> Anyone can help? Thanks
https://ckeditor.com/old/forums/Support/fckeditor-asp.net-2.0
CC-MAIN-2018-47
refinedweb
318
76.32
Details Description? Activity - All - Work Log - History - Activity - Transitions I have isolated most of the scripts, from what I can see is there is no direct event handling messed up, but what could be is that the autoscroll feature does something in that case, do you have a bugreport on the issue? But there is one section from what I can gather which never worked, that was that one //this never worked in our code because version never was defined, I will drop it for now, since no one uses //it anyway //if (agentString.indexOf('msie') != -1) { // if (!(agentString.indexOf('ppc') != -1 && agentString.indexOf('windows ce') != -1 && version >= 4.0) && window.external && window.external.AutoCompleteSaveForm) //} (I commented that out) it sort of was a special context param which when enabled triggered autosave in older ie versions on input params, probably absolutely no one has used it, otherwise they would have stepped on following error: gentString.indexOf('windows ce') != -1 && version >= 4.0) the version var never was defined and probably would have caused an error. But the good thing is this code only was activated via a context param. Nevertheless externalizing all this should be prio #1 it reduces the page size by about 8kbyte on every request!!! and also makes the code more maintainable. Ok I just ran a test since I got the first basics working, the actual savings is 2-3 kbyte per page, which is not too shabby I guess. Never mind. I'll report my thingy in a separate issue. It's not related to your externalization. Sorry for the unclear comment. Ok the isolation so far works here, but I also want a compressed uncompressed version like we have for jsf.js, it is a 60% filesize difference between those two. The second issue is, I have the same problem as with jsf.js if inline rendered already and included in a ppr case we do not habe any suppression on the server side hence the script gets aloaded anew in auto eval cases (mozilla) Ok this issue also fixes the double include of jsf.js and externalizes the oamSubmit functions. Leonardo and Jakob, could you please have a look at the patch. I am not directly committing it yet because it changes too many things in the core, and I do not want to break it. committed a slightly modified version of Werner's patch. We have to review first what happen if tomahawk + mojarra is used. Note in this case oamSubmit.js is not on the classpath. I think if myfaces is used, we could include oamSubmit.js inside our jsf.js, but if mojarra is used, include it on a separate file as part of tomahawk and add it inline as shown. Note that on myfaces core there are two copies of oamSubmit.js: one on api and the other one on impl. The functions on oamSubmit.js: function oamSetHiddenInput(formname, name, value) function oamClearHiddenInput(formname, name, value) function oamSubmitForm(formName, linkId, target, params) Shouldn't we just put them on jsf.js, but change its names to something like <someothernamespace>.oamSetHiddenInput ? In theory, those methods are myfaces specific. If by some reason someone needs them defined, why don't just put a resource with the functions calling the ones with namespace like this: function oamSetHiddenInput(formname, name, value){ <someothernamespace>.oamSetHiddenInput(formname, name, value) } Or why don't create a variable that allows choose between render it inline or use the namespaced functions (default one)?! As with MYFACESTEST-24, we can remove the setUpFactories() workaround for the PartialViewContextFactory. I will also do a snapshot deploy of MyFaces test to verify that this is updated in the repo too and that no-one has build problems because of that. This patch moves oamSubmit.js from impl to shared, thus it will be included in myfaces-impl and in tomahawk20 at build time (a few pom-changes were necessary to make this work, actually). The patch works in all possible scenarios: - MyFaces standalone (oamSubmit.js taken from myfaces-impl) - MyFaces and Tomahawk (oamSubmit.js taken from myfaces-impl) - Mojarra and Tomahawk (oamSubmit.js taken from tomahawk) The only difference is that when Mojarra is used with Tomahawk, oamSubmit.js will always be served as a compressed javascript file (regardless of the project stage), because tomahawk has no control over the resource-loading here. However I wouldn't consider this a problem. If someone thinks we should address this issue, then we just have to move the uncompressed file from internal-resources to resources and make ResourceUtils aware of the project stage. If there are no objections, I will commit this patch soon. Patch applied, this should work now in every possible scenario. No opinion regarding externalizing the scripts. I think it's a good idea. What I did notice a while ago, was that those scripts are very annoying when doing AJAX. I didn't look at it in detail, but it looked like they messed up the event handling.
https://issues.apache.org/jira/browse/MYFACES-2858?focusedCommentId=12898843&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2015-48
refinedweb
837
64.61
- 23 Jun, 2010 1 commit [svn-r6817] Adding format specifier for Pints and removing instances of custom Aint format specifier defns - Codes that need Aint format spec should use MPI_AINT_FMT_* - 28 May, 2010 1 commit Pass --with-atomic-primitives=auto_allow_emulation to the OPA configure unless the user specifies a particular value. This is a regression introduced in r6223, between 1.2.1p1 and 1.3a1. Reviewed by balaji@. - 27 May, 2010 1 commit A partial merge of r5954 from the bg-threads-1.1 branch. No reviewer. - 24 May, 2010 1 commit Reviewed by buntinas@. - 17 May, 2010 2 commits. - 10 May, 2010 2 commits This was a long-standing TODO. No reviewer. was broken in r6273, prevents CPPFLAGS from being honored (but MPICH2LIB_CPPFLAGS is a workaround). No reviewer. - 06 May, 2010 2 commits is needed for suncc. Reviewed by gropp. appropriate variables are declared for it to work in emulation mode as well. This showed up as an error with the suncc compiler on linux which is not natively supported by OPA. - 04 May, 2010 1 commit error out at configure time, instead of just throwing a warning and failing at make time. - 01 May, 2010 1 commit - William Gropp authored - 28 Apr, 2010 1 commit These channels are old and have been deprecated for a while. Their presence is hampering PMI API development, so I have removed them. These were the only channels seriously using the "process locks" code, so that mess has also been deleted. The only remaining useful functionality (MPIDU_Yield) has been moved to the OS wrappers. - 23 Apr, 2010 1 commit - 17 Apr, 2010 1 commit - 03 Mar, 2010 1 commit or automatically picked by the compiler before looking for it in librt. Reported as a failure on FreeBSD by Aleksej Saushev. This commit also cleans up some of the checks and adds everything to LIBS, instead of LDFLAGS. Fixes ticket #1013. Reviewed by goodell. - 19 Feb, 2010 1 commit It's still possible for some platforms to not like this format (perhaps they don't have symlinks, etc). But this is at least an improvement. No reviewer. - 18 Feb, 2010 1 commit - 13 Feb, 2010 1 commit Reviewed by goodell. - 12 Feb, 2010 1 commit Previously flags to AR were fixed by simplemake to be "cr". With this change this can be controlled by configure/make via the new AR_FLAGS precious variable. On darwin using "Scr" shaves ~10% off the normal build time. This is a minor, nibbling around the edges fix. The real fix is to rework all of the linking in MPICH2 so that parallel builds are possible and only a single invocation of AR/RANLIB occurs for each library that we produce. Reviewed by balaji@. - 11 Feb, 2010 1 commit - Anthony Chan authored [svn-r6268] Added checks for multiple aliases support from C compiler. setbot.c.in is a real item in svn instead of generated from buildiface. Modified mpif.h.in and setbotf.f.in to allow (better) multiple aliases support. - 09 Feb, 2010 1 commit In the compiler wrappers we force the linker to use the "flat namespace" model instead of the default "two-level namespace" model. If we add dynamic library support for MPL and OPA then we can probably drop this option again. Reviewed by balaji@. - 08 Feb, 2010 1 commit No reviewer. - 07 Feb, 2010 1 commit broken, requiring lots of hacks to get it working correctly). We now allow each channel to pick what all common directories it wants configured instead of blindly configuring everything. But we still retain the hack where the setup of the common directories has to run before the device configuration while the actual configuration needs to run after, because the build seems to fail without this. - 05 Feb, 2010 2 commits configured. The problem is that users who are used to the mpich2-1.2.1 behavior continue to use: % mpdboot ... % mpiexec -n 4 ./foo % mpdallexit This is buggy since the mpiexec now belongs to Hydra which completely ignores where the mpd daemons are started and launches everything locally. Disabling mpd at least gets users doing this to notice that something has changed. overwritten. Thanks to Seung for reporting the bug. - 04 Feb, 2010 5 commits duplicated. are eventually dumped into CPPFLAGS. reimplementing the same code. autoconf variables to be passed in as well. environment variable propagation cleaner. - 01 Feb, 2010 1 commit If the <complex> header was unavailable it was still possible to end up with HAVE_CXX_COMPLEX defined. This causes problems for the various predefined MPI reduction operators, which the presence/absence of HAVE_CXX_COMPLEX to conditionally compile case statements. Incorrect values can result in "duplicate case value" errors from the compiler. No reviewer. - 30 Jan, 2010 1 commit - 27 Jan, 2010 2 commits src/openpa location, but rather from the standard lib directory. Reviewed by goodell. of the example programs is successful. Fixes ticket #993. - 06 Jan, 2010 1 commit - 05 Jan, 2010 3 commits This commit moves the existing tracing memory allocation and valgrind code utilities to MPL. This permits other code like gforker to use the trmem code while allowing MPICH2 to remain thread-safe when using tracing. gforker is not updated to the MPL routines by this commit. Reviewed by buntinas@. Reviewed by buntinas@. Necessary for moving other code to MPL. Reviewed by buntinas@. - 01 Jan, 2010 1 commit WRAPPER_LIBS. LIBS are all libraries detected and added by MPICH2. These are maintained internally, so appropriate autoconf checks can use them correctly. WRAPPER_LIBS are only used within mpicc and friends; they are a combination of LIBS and other libraries such as opa and mpl which are only generated at make time. These are maintained separately so autoconf doesn't try to use them for link or run tests.
https://xgitlab.cels.anl.gov/robl/MPICH-BlueGene/-/commits/1fe37d80e3b4e51dd2da6548154bd2971410b7b7/configure.in
CC-MAIN-2022-40
refinedweb
961
67.15
recv - receive a message from a connected socket #include <sys/socket.h> ssize_t recv(int socket, void *buffer, size_t length, int flags); The recv() function shall receive a message from a connection-mode or connectionless-mode socket. It is normally used with connected sockets because it does not permit the application to retrieve the source address of received data. The recv() function takes the following arguments:The recv() function shall return the length of the message written to the buffer pointed to by the buffer argument. For message-based sockets, such as. If no messages are available at the socket and O_NONBLOCK is not set on the sockets file descriptor, recv() shall block until a message arrives. If no messages are available at the socket and O_NONBLOCK is set on the sockets file descriptor, recv() shall fail and set errno to [EAGAIN] or [EWOULDBLOCK]. Upon successful completion, recv() shall return the length of the message in bytes. If no messages are available to be received and the peer has performed an orderly shutdown, recv() shall return 0. Otherwise, -1 shall be returned and errno set to indicate the error. The recv() function shall fail if:The following sections are informative. None. The recv() function is equivalent to recvfrom() with a zero address_len argument, and to read() if no flags are used. The select() and poll() functions can be used to determine when data is available to be received. None. None. poll() , read() , recvmsg() , recvfrom() , select() , send() , sendmsg() , sendto() , shutdown() , socket() , write() , the Base Definitions volume of IEEE Std 1003.1-2001, <sys/socket .
http://manpages.sgvulcan.com/recv.3p.php
CC-MAIN-2017-26
refinedweb
263
61.67
#include <linux/module.h> If table is NULL, get_kernel_syms() returns the number of symbols available for query. Otherwise, it fills in a table of structures: The symbols are interspersed with magic symbols of the form # module-name with the kernel having an empty name. The value associated with a symbol of this form is the address at which the module is loaded. The symbols exported from each module follow their magic module tag and the modules are returned in the reverse of the order in which they were loaded. On success, returns the number of symbols copied to table. On error, −1 is returned and errno is set appropriately. There is only one possible error return: get_kernel_syms() is not supported in this version of the kernel. This system call is present on Linux only up until kernel 2.4; it was removed in Linux 2.6.). There is no way to indicate the size of the buffer allocated for table. If symbols have been added to the kernel since the program queried for the symbol table size, memory will be corrupted. The length of exported symbol names is limited to 59 characters. Because of these limitations, this system call is deprecated in favor of query_module(2) (which is itself nowadays deprecated in favor of other interfaces described on its manual page).
http://manpages.courier-mta.org/htmlman2/get_kernel_syms.2.html
CC-MAIN-2021-17
refinedweb
221
54.73
I have problem to understand threads. Can someone help me to know how i can do rest of this program. Here is program text first: The program will read a file where the first line contains the number of words in the rest of the file, one word on each line. The words should be stored in a table (array), then the k threads find how many times a given word occurs in the table. If your program is called Finde.java shall example. started up this way: java Finde father myfile.txt 8 You will find how many times the word father in the file myfile.txt using 8 threads. Here is thus k = 8 Remember to take into account that k may be 1. When the file is read into shall main - thread share the table for about k equally long parts, and start up k threads to look at each part. When a thread has found the number of occurrences of the word in their part shall report this into a common object (a monitor). Main - thread should wait until all threads have finished searching, retrieving the total number of occurrences of the monitor and finally write this number out. Here is the code i came up for now: import java.io.File; import java.io.IOException; import java.util.Scanner; public class Main { public static void main(String[] args) { CommonObject co = new CommonObject(); Finde f = new Finde(); Scanner file = null; String[] read = new String[267752]; try{ file = new Scanner(new File("myfile.txt")); } catch(IOException e){ e.printStackTrace(); } while(file.hasNext()){ for ( int i = 0; i < read.length; i++){ read[i] = file.next(); } } } } public class CommonObject { private final int K = 8; synchronized void findeWord() { } } public class Finde implements Runnable{ public void run() { } }
https://www.daniweb.com/programming/software-development/threads/494090/thread-read-from-file-and-put-words-in-array-not-in-arraylist
CC-MAIN-2018-13
refinedweb
296
75.1
Throughout this post we are going to learn how to use printing in Ionic 2 apps using Ionic Native and Cordova . If you need to build a mobile app for Android or iOS that needs to print data to PDF or papers using the Ionic framework then lucky for you printing to either of these mobile platforms is a matter of using a Cordova plugin for printing Ionic 2 uses Ionic Native to wrap Cordova plugins so in this tutorial we are going to see an example project that shows you how to implement printing functionality to either print to PDF file or paper . You need to know that printing is only available for Android 4.4.4 (or KiKat) . Make sure you have configured the settings to use the printer on both Android and iOS devices . Next open up your terminal under Linux/MAC systems or the command prompt under Windows and generate a new Ionic 2 project . ionic start ionic2-printing-example blank --v2 cd ionic2-printing-example ionic platform add android If you are developing your app under a MAC OS you can target iOS too ionic platform add iOS Now we need to add the Cordova plugin for printing so go ahead and enter cordova plugin add Next use your favorite code editor to open src/pages/home.ts and then add the following code to test the printing . import { Component } from '@angular/core'; import { NavController } from 'ionic-angular'; import {Printer, PrintOptions} from 'ionic-native'; @Component({ selector: 'page-home', templateUrl: 'home.html' }) export class HomePage { constructor(public navCtrl: NavController) { } print(){ Printer.isAvailable().then(function(){ Printer.print("").then(function(){ alert("printing done successfully !"); },function(){ alert("Error while printing !"); }); }, function(){ alert('Error : printing is unavailable on your device '); }); } } Then add a button to print in your home.html template <button class="button" (click)="print()">Print</button> Next make sure you have plugged you real mobile device using an USB cable then build and run your app For Android use ionic run android So this is the end of this tutorial .For more information about this plugin visit its GitHub repo . You can also visit the Ionic Native documentation here . You can also find this project in GitHub .<<
https://www.techiediaries.com/mobiledev/how-to-print-with-ionic2-and-cordova/
CC-MAIN-2017-43
refinedweb
369
51.28
Django Gatekeeper 0.1 A rough prelimiary version of Gatekeeper (0.1) is available for download and hacking. It is far from finished. Opinions and bugs are appreciated. The primary change since my first post about this is that decorators are provided for the common case, which make it that much easier to get going with simple comment spam protection. Features - Standard app, with admin page for managing challenges - Easy to add to a view, one line of code (uses decorators for common cases) - No image library needed. Horrendously light weight - Aiming to have some simple stats measured Example Usage from ExampleProject.apps.gatekeeper import keeper ... @keeper.POST_keeper def index(request): ... Notes I am working with the SVN Django versions, so the model is not compatible with the older 0.9 release. Some of the features (notably, 3 failure limit and stats) are incomplete, and the GET_keeper is possibly totally broken and is untested. See Also: - Ian Holsman's Django Captcha App ââ¬" This is a more traditional approach to captchas. - Wordpress Gatekeeper ââ¬" Eric Meyer's wordpress implementation Update: Apparently some people were having trouble with the bz2 tarball so i linked to a .zip instead. Also, its pre-magic removal code, and i dont have time to update it sorry. Lad I can not open the packed file. Maybe CRC is bad. Can you please send the source to my email, please? Thanks. L.B Harish Mallipeddi Yes I cannot open the file too. Can you please upload the latest version?
http://brehaut.net/blog/2005/12/22/django-gatekeeper-01/
crawl-001
refinedweb
255
67.65
Up to [cvs.NetBSD.org] / pkgsrc / graphics / ruby-clutter Request diff between arbitrary revisions Default branch: MAIN Current tag: MAIN Revision 1.13 / (download) - annotate - [select for diffs], Mon Jan 17 15:17:15 2022 UTC (8 months, 1 week ago) by tsutsui Branch: MAIN CVS Tags: pkgsrc-2022Q2-base, pkgsrc-2022Q2, pkgsrc-2022Q1-base, pkgsrc-2022Q1, HEAD Changes since 1.12: +4 -4 lines Diff to previous 1.12 (colored) Revision 1.12 / (download) - annotate - [select for diffs], Tue Oct 26 10:46:51 2021 UTC (10 months, 4 weeks ago) by nia Branch: MAIN CVS Tags: pkgsrc-2021Q4-base, pkgsrc-2021Q4 Changes since 1.11: +2 -2 lines Diff to previous 1.11 (colored) graphics: Replace RMD160 checksums with BLAKE2s checksums All checksums have been double-checked against existing RMD160 and SHA512 hashes Revision 1.11 / (download) - annotate - [select for diffs], Thu Oct 7 14:12:52 2021 UTC (11 months, 2 weeks ago) by nia Branch: MAIN Changes since 1.10: +1 -2 lines Diff to previous 1.10 (colored) graphics: Remove SHA1 hashes for distfiles Revision 1.10 / (download) - annotate - [select for diffs], Sun Aug 29 16:49:12 2021 UTC (12 months, 3 weeks ago) by tsutsui Branch: MAIN CVS Tags: pkgsrc-2021Q3-base, pkgsrc-2021Q3 Changes since 1.9: +5 -5 lines Diff to previous 1.9 (colored) ruby-gnome: update to 3.4.9. Upstream changes (from NEWS): == Ruby-GNOME 3.4.9: 2021-08-10 This is a bug fix release of 3.4.8. === Changes ==== Ruby/GObjectIntrospection * Fixes * Fixed a bug that (({gpointer})) to Ruby conversion breaks a value when pointer value is (({2 ** 32})) or larger. == Ruby-GNOME 3.4.8: 2021-08-09 This is a bug fix release of 3.4.7. === Changes ==== Ruby/GObjectIntrospection * Fixes * Fixed a bug that (({gpointer})) to Ruby conversion breaks a value when pointer value is (({2 ** 32})) or larger. == Ruby-GNOME 3.4.7: 2021-07-30 This is a release that improves virtual function support. === Changes ==== Ruby/GLib2 * Improvements * (({GLib::Error})): Added support for setting (({code})) and (({domain})) automatically. ==== Ruby/GObjectIntrospection * Improvements * Added support for returning object from callback. * Fixes * Fixed a bug that (({GError})) detection doesn't work. == Ruby-GNOME 3.4.6: 2021-07-17 This is a bug fix release for macOS. === Changes ==== Ruby/Pango * Fixes * Fixed a bug that (({require "pango"})) is failed on environment that has multiple font types. [GitHub#1429][Reported by Cameron Gose] === Thanks * Cameron Gose == Ruby-GNOME 3.4.5: 2021-07-07 This is a release that supports implementing virtual functions in Ruby. === Changes ==== Ruby/GLib2 * Improvements * Added (({GError})) domain and code for Ruby. ==== Ruby/GObjectIntrospection * Improvements * (({GObjectIntrospection::BaseInfo#container})): Added. * (({GObjectIntrospection::ObjectInfo#class_struct})): Added. * (({GObjectIntrospection::StructInfo#find_field})): Added. * (({RVAL2GI_VFUNC_INFO()})): Added. * Added support for implementing virtual functions in Ruby. [GitHub#1386][Based on patch by Yuto Tokunaga] You need to define (({virtual_do_#{virtual_function_name}})) method in (({type_register}))-ed class. * Added support for implementing virtual functions of interface in Ruby. [GitHub#985][Reported by Matijs van Zuijlen] [GitHub#1938][Reported by Yuto Tokunaga] * Added support for "transfer everything" UTF-8 return/output value. * Changed to accepted one character for (({gunichar})). [GitHub#1426][Reported by rubyFeedback] * (({GObjectIntrospection::CallableInfo#can_throw_gerror?})): Added. * Added support for (({GError **})) in callback. * Added support for returning (({GList<GObject *>})) from callback. * Changed to return (({[]})) for (({NULL})) list. * Fixes * Fixed a bug that wrong type information is used for output arguments. ==== Ruby/Pango * Added support for (({PangoFT2})). * Added support for (({PangoFc})). * Added support for (({PangoOT})). * Added support for (({PangoCairoFontMaps})). * Updated gem metadata. [GitHub#1428][Patch by Gabriel Mazetto] === Thanks * Yuto Tokunaga * Matijs van Zuijlen * rubyFeedback * Gabriel Mazetto == Ruby-GNOME 3.4.4: 2021-04-22 This is a bug fix release for Windows. === Changes ==== All * Dropped support for CentOS 6. * Dropped support for Ubuntu 16.04. * Dropped support for Ruby 2.4. * Dropped support for Ruby 2.5. * Added support for Ruby 3.0. ==== Document * Improvements * Improved how to use on Heroku. [GitHub#1414][Patch by Juan D Lopez] * Improved README. [Patch by kojix2] ==== Ruby/GLib * Improvements * Added (({RVAL2POINTER()})). * Added (({POINTER2RVAL()})). * Changed to use (({rb_cObject})) instead of (({rb_cData})) as a parent class of typed data. * Changed to use typed data instead of data for all data types. * Added support for Ractor partially. * Required GLib 2.48 or later. * (({GLib::UniChar.compose})): Added. * (({GLib::UniChar.decompose})): Added. * (({GLib::UniChar.canonical_decomposition})): Deprecated. Use (({GLib::UniChar.decompose})) instead. * (({GLib.format_size_for_display})): Deprecated. Use (({GLib.format_size})) instead. * Fixes * Fixed wrong conversions between (({VALUE})) and (({GType})). [GitHub#1386][Patch by Yuto Tokunaga] ==== Ruby/GObjectIntrospection * Improvements * Removed needless transfer check for struct. [GitHub#1396][Reported by Konrad Narewski] * Added support freeing (({GArray})) of raw struct out parameter. [GitHub#1356][Reported by aycabta] ==== Ruby/Pango * Improvements * Added (({Pango::Render::PART_*})) to keep backward compatibility. [GitHub#1311][Reported by rubyFeedback] ==== Ruby/GStreamer * Improvements * Removed needless workaround for (({Gst::ElementFactory#static_pad_templates})). [GitHub#1400][Reported by Justin Weiss] ==== Ruby/Gnumeric * Improvements * Added support for the latest Gnumeric. ==== Ruby/GTK3 * Improvements * (({Gtk::Widget#set_size_request})): Added support for (({width:})) and (({height:})). [GitHub#1406][Reported by rubyFeedback] * (({Gtk::Dialog#set_default_response})): Added support for (({Symbol})). [GitHub#1418][Reported by rubyFeedback] ==== Ruby/GDK4 * Added. ==== Ruby/GTK4 * Added. ==== Ruby/VTE3 * Improvements * Improved description. [GitHub#1406][Reported by rubyFeedback] ==== Ruby/GTK2 * Removed. ==== Ruby/GtkSourceView2 * Removed. ==== Ruby/WebKitGtk2 * Removed. ==== Ruby/VTE * Removed. === Thanks * Konrad Narewski * aycabta * rubyFeedback * Justin Weiss * Yuto Tokunaga * Juan D Lopez * kojix2 Revision 1.9 / (download) - annotate - [select for diffs], Sun Aug 22 17:02:42 2021 UTC (13 months ago) by tsutsui Branch: MAIN Changes since 1.8: +5 -5 lines Diff to previous 1.8 (colored) ruby-gnome: update to 3.4.3. pkgsrc changes: - as a reparation of removal of gtk2 dependent gems in the next 3.4.4, make gtk2, webkit-gtk2, gtksourceview2, and vte gems independent packages and remove them from meta-pkgs/ruby-gnome - pkglint Upstream changes (from NEWS): == Ruby-GNOME 3.4.3: 2020-05-11 This is a follow-up release of 3.4.2. === Changes ==== Ruby/GLib2 * Fixes * Windows: Fixed a link errors. Revision 1.8 / (download) - annotate - [select for diffs], Sat May 2 18:05:04 2020 UTC (2 years, 4 months ago) by tsutsui Branch: MAIN CVS Tags:.7: +5 -5 lines Diff to previous 1.7 (colored) Revision 1.7 / (download) - annotate - [select for diffs], Sat Oct 19 08:25:18 2019 UTC (2 years, 11 months ago) by tsutsui Branch: MAIN CVS Tags: pkgsrc-2020Q1-base, pkgsrc-2020Q1, pkgsrc-2019Q4-base, pkgsrc-2019Q4 Changes since 1.6: +5 -5 lines Diff to previous 1.6 (colored) Revision 1.6 / (download) - annotate - [select for diffs], Fri Oct 11 16:20:23 2019 UTC (2 years, 11 months ago) by tsutsui Branch: MAIN Changes since 1.5: +5 -5 lines Diff to previous 1.5 (colored) Revision 1.5 / (download) - annotate - [select for diffs], Sat Sep 14 18:11:38 2019 UTC (3 years ago) by tsutsui Branch: MAIN CVS Tags: pkgsrc-2019Q3-base, pkgsrc-2019Q3 Changes since 1.4: +5 -8 lines Diff to previous 1.4 (colored) ruby-gnome: Update to 3.3.8, and rename package names to match gems. Also reorganize several dependencies in Makefile and buildlink3.mk. See the following post for details: Upstream changes (from NEWS): == Ruby-GNOME 3.3.8: 2019-09-10 This is a partially GLib 2.62.0 support release. === Changes ==== All * Improvements * Changed our project name to Ruby-GNOME from Ruby-GNOME2. [GitHub#1277][Suggested by kojix2] [GitHub#1291][Patch by kojix2] * Stopped to release (({.tar.gz})) because they are no longer used. ==== Ruby/GLib2 * Improvements * (({GLib.convert})): Changed to set correct encoding. * (({GLib::FILENAME_ENCODING})): Added. * Changed to use the same enum object for the same enum value. * (({GLib::Enum.find})): Added. * (({GLib::Bytes#initialize})): Changed to reuse (({String})) data even if the given (({String})) isn't frozen. * (({GLib::Bytes.try_convert})): Added. * (({GLib::Enum.try_convert})): Added. * (({GLib::Flags.try_convert})): Added. * (({GLib::Type.try_convert})): Added. * (({GLib::MkEnums.create})): Added support for flags to enum definition. [GitHub#1295][Patch by Mamoru TASAKA] ==== Ruby/GIO2 * Fixes * Renamed to (({Gio::Icon#hash})) from (({Gio::Icon.hash})). [GitHub#1293][Reported by Erik Czumadewski] ==== Ruby/GObjectIntrospection * Improvements * Introduced (({try_convert})) protocol. ==== Ruby/CairoGObject * Improvements * (({Cairo::Context.try_convert})): Added. * (({Cairo::Device.try_convert})): Added. * (({Cairo::Pattern.try_convert})): Added. * (({Cairo::Surface.try_convert})): Added. * (({Cairo::ScaledFont.try_convert})): Added. * (({Cairo::FontFace.try_convert})): Added. * (({Cairo::FontOptions.try_convert})): Added. * (({Cairo::Region.try_convert})): Added. === Thanks * kojix2 * Erik Czumadewski * Mamoru TASAKA Revision 1.4, Thu Jun 13 13:47:25 2013 UTC (9 years, 3 months ago) by obache Branch: MAIN CVS Tags: pkgsrc-2013Q2-base, pkgsrc-2013Q2 Changes since 1.3: +1 -1 lines FILE REMOVED Drop ruby-clutter packages. Only support old ruby-1.8 and clutter-0.8, and dead upstream.. Revision 1.2 / (download) - annotate - [select for diffs], Thu Dec 17 11:17:13 2009 UTC (12 years, 9 months ago) by obache.1: +2 -1 lines Diff to previous 1.1 (colored) Use Ruby/GStreamer in Ruby/Gnome2 instead of deprecated ruby gstreamer0.10. Bump PKGREVISION of ruby-clutter-gst. Revision 1.1 / (download) - annotate - [select for diffs], Tue Dec 16 12:22:38 2008 UTC (13 years,.
http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/graphics/ruby-clutter/distinfo?sortby=date&only_with_tag=MAIN
CC-MAIN-2022-40
refinedweb
1,520
60.92
Asko Kauppi a écrit : "Guido van Rossum eventually chose a surprising syntax: x = true_value if condition else false_value"I am _NOT_ suggesting Lua to do the same. But bringing this up, since it does seem relevant. And weird. ;) Yes, it reminds me a bit of the dreaded Perl... open LOG, "<$log" or die "Cannot open $log: $!"; print "$msg\n" unless $quiet; The explanation:"In many cases where a conditional expression is used, one value seems to be the 'common case' and one value is an 'exceptional case', used only on rarer occasions when the condition isn't met." Mmm... The examples show: contents = ((doc + '\n') if doc else '') level = (1 if logging else 0)I feel that this syntax isn't very useful when standing on a single line, writing: if doc then contents = (doc + '\n') else content = '' end is still usable/readable.It is more useful in table, indeed, but perhaps we could ask to be able to write: content = (if doc then return (doc + '\n') else return '' end) or something similar, unless there is some semantical blocking. -- Philippe Lhoste -- (near) Paris -- France -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
http://lua-users.org/lists/lua-l/2006-09/msg00628.html
CC-MAIN-2013-20
refinedweb
184
66.07
The first argument to defaultdict's.) from datetime import datetime ts = datetime.strptime('10:13:15 2006-03-07', '%H:%M:%S %Y-%m-%d') SKIPoption.) "r"was added to the input() function to allow opening files in binary or universal-newline.) keykeyword.) Nonefor.) operator.attrgetter('a', 'b')will return a function that retrieves the a and b attributes. Combining this new feature with the sort() method's keyparameter lets you easily sort lists using multiple fields. (Contributed by Raymond Hettinger.) member is also available, if the platform supports it. (Contributed by Antti Louko and Diego Pettenò.) Nonefrom the __reduce__() method; the method must return a tuple of arguments instead. The ability to return Nonewas deprecated in Python 2.4, so this completes the removal of the feature. sys.path, so unless your programs explicitly added the directory to sys.path, this removal shouldn't affect your code. '/'and '/RPC2'. Setting rpc_paths to Noneor an empty tuple disables this path checking. (pid, group_mask). Two new methods on socket objects, recv_buf(buffer) and recvfrom_buf.) sys.subversionvariable,.) The compression used for a tarfile opened in stream mode can now be autodetected using the mode 'r|*'. (Contributed by Lars Gustäbel.) >>>.) use_datetime=Trueto the loads() function or the Unmarshaller class to enable this feature. (Contributed by Skip Montanaro.). See Also: The pysqlite module (), a wrapper for the SQLite embedded database, has been added to the standard library under the package name sqlite3. SQLite is a C library that provides a SQL-language database that stores data in disk files without requiring a separate server process. pysqlite was written by Gerhard Häring and timestamp, trans varchar, symbol varchar, qty decimal, price decimal)''') #: See About this document... for information on suggesting changes.See About this document... for information on suggesting changes.
http://www.thaitux.info/py/doc/python/whatsnew/modules.html
crawl-003
refinedweb
295
61.02
Introduction to HashMap in Java In Java, you can use the array to store data, but whenever there is a requirement to store or retrieve data in key and value fashion, you have to use HashMap for that. Hashmap is a collection in Java that belongs under the hierarchy of the interface called Map. In this article, we will discuss the Hashmap from Java programming’s perspective. Syntax: To use HashMap in your code you must import (import java.util.HashMap package) or its parent class. import java.util.HashMap; import java.util.Map; HashMap<datatypeOfkey, dataytpeOfValue> <name_of_hashMap> =new HashMap<datatypeOfkey, dataytpeOfValue> (); Where datatypeOfkey and dataytpeOfValue can be Integer or String. Example: Map<String, String> newHashMap = new HashMap<>(); How HashMap Works in Java? Hashmap uses hashing techniques to store and retrieve elements. For storage, it uses a linked list which is referred to as buckets. It uses two methods on key: equals()and hashCode() for insert and retrieve operations. While insertion, hashCode determines the bucket for storing. After that, again hashCode checks whether there is already a key with equal hashCode, if yes, value is replaced with the new one. If not, then the new map is created into which value will be stored. While retrieval of data, hashCode determines the bucket for searching. After that, using hashCode() and equals() it gets the value and returns that. It returns null in case no value is present. HashMap Constructor in Java It has four constructors as mentioned below. - HashMap(): It is default one with load factor 0.75 and capacity 16. - HashMap(int <capacity>): Creates HashMap with the capacity defined in its arguments. The load factor is default here. - HashMap(int <capacity>, float <load_Factor>): Creates HashMap with the capacity and load factor defined in its arguments. - HashMap(Map<K, V> m): Creates HashMap as defined in the argument map. Top 13 Methods of HashMap in Java All of the below methods discussed here can be used irrespective of any version of Java. - public value get(Object key): Used to get the value of the corresponding key. - public value put(K key, V value): Inserts the value which is mentioned in the argument for the corresponding key. - public boolean containsKey(Object key): Decision of whether key is present or not, note that, the return type is Boolean. - public boolean containsValue(Object value): Decision of whether the value is present or not, note that the return type is Boolean. - public V remove(Object key): Clears particular key and its value-form HashMap as specified in code. - public void clear(): Clears all key and values from the HashMap as mentioned. - public boolean isEmpty(): Verifies whether HashMap is empty or not. - Object clone(): Mappings of a HashMap is returned by this method which we can use for cloning purpose to another HashMap. - public int size(): Returns the size, means, how many key-value pair is present in a HashMap. - public Set<Map.Entry<K, V>> entrySet(): The set of mapping in HashMap is returned by this method. - public Set<K> keySet(): The set of key which is present in HashMap is returned by this method. - public void putAll(Map <map_name>): Copies whole map content to the other. - Collection values(): You can get a collection of all of the values for a HashMap. Examples of HashMap in Java HashMap is a Map-based collection class that is used for storing Key & value pairs. Let us look at a few examples. Example #1 We will discuss some code examples of HashMap here. You should practice codes by writing yourself and run on java compiler to check the output. You can match the output with the given one for verification. Creation of HashMap and insertion of data in it. Code: import java.util.HashMap; import java.util.Map; public class CreateHashMapExample { public static void main(String[] args) { // Creating a HashMap Map<String, String> newHashMap = new HashMap<>(); // Addition of key and value newHashMap.put("Key1", "Java"); newHashMap.put("Key2", "C++"); newHashMap.put("Key3", "Python"); // Addition of new key and value newHashMap.putIfAbsent("Key4", "Ruby"); System.out.println(newHashMap); } } Output: Example #2 Let us take another example where we take string as key and integer as value. Here we will measure key and its corresponding values in inch as value. Code: import java.util.HashMap; public class CreateHashMapExample2 { public static void main(String[] args) { // Create a HashMap object called measurement HashMap<String, Integer> ms = new HashMap<String, Integer>(); // Add keys and values (Name and phone number of the person) ms.put("S", 35); ms.put("M", 38); ms.put("L", 40); ms.put("XL", 42); for (String key : ms.keySet()) { System.out.println("measurement of " + key + " in inch is: " + ms.get(key)); } } } Output: Example #3 Here we will do multiple things. We will first create a Hashmap, we will then get its values one by one. After that, we will copy all data of the HashMap to a brand new HashMap. After that, we will remove one item and gets their sizes. If the size is lower by one, the decrease of size by removal is confirmed. Code: import java.util.Collection; import java.util.HashMap; import java.util.Map; public class HashMapInJava { public static void main(String[] args) { Map<String, String> newHashMap = new HashMap<>(); // Addition of key and value newHashMap.put("Key1", "Lenovo"); newHashMap.put("Key2", "Motorola"); newHashMap.put("Key3", "Nokia"); newHashMap.put("Key4", null); newHashMap.put(null, "Sony"); System.out.println("Original map contains:" + newHashMap); //getting size of Hashmap System.out.println("Size of original Map is:" + newHashMap.size()); //copy contains of one Hashmap to another Map<String, String> copyHashMap = new HashMap<>(); copyHashMap.putAll(newHashMap); System.out.println("copyHashMap mappings= " + copyHashMap); //Removal of null key String nullKeyValue = copyHashMap.remove(null); System.out.println("copyHashMap null key value = " + nullKeyValue); System.out.println("copyHashMap after removing null key = " + copyHashMap); System.out.println("Size of copyHashMap is:" + copyHashMap.size()); } } Output: Did you notice one thing in the output of HashMap in all of our examples, while we print the key and values? The print is not in sorted order. Hashmap is not like an array so that scan and print need to be sorted, it can pick random based on the hash value it gets. Conclusion You should use HashMap when your code or use case requires the handling of data in key-value pairs. In this article, we have learned about hashmaps in Java with code examples. You should practice writing codes on our own to master this topic. Recommended Article This is a guide to the HashMap in Java. Here we discuss Introduction to HashMap in Java and its Methods along with Code implementation and Output. You can also go through our suggested articles to learn more –
https://www.educba.com/hashmap-in-java/?source=leftnav
CC-MAIN-2020-34
refinedweb
1,118
68.57
Asked by: Draw coordinates in the pictureBox Question All replies So what is your offer? I don't offer anything, your problem is not solvable under the given requirements. btw, we don't paint into a picture box. Cause it is a specialized control for displaying images. We draw simply on the area, we want to. E.g. on a panel: namespace WindowsFormsCS { using System; using System.Drawing; using System.Windows.Forms; public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void panel1_Paint(object sender, PaintEventArgs e) { Panel panel = sender as Panel; Graphics graphics = e.Graphics; for (int x = 0; x < panel.Width; x++) { int y = (int)(panel.Height / 2.0 + Math.Sin((double)x/panel.Width*Math.PI *2.0) * panel.Height / 2.0); graphics.FillRectangle(Brushes.Black, x, y, 3, 3); } } } } You can draw points on the screen as they come in from the serial port but be aware this isn't real time. If your code isn't fast enough to keep up you can lose some data. It might be a good idea to use a little threading if the speed is really fast but try without it first. I wouldn't use a PictureBox though. That is designed for images. Instead create a custom control (derived from Control). To make it easier to work with I'd recommend that you have some sort of Add method that takes the coordinates. When called it adds the value to a list being managed by the control. When the control's OnPaint method is called it simply enumerates the points and draws each one. There are lots of things to consider though. - The serial port data receive event may not occur on the UI thread unless you configured it as such so you might have to Invoke over to the UI thread. - The Add method needs to add the point and then draw the point. It could refresh the control but this seems wasteful. - You may want to limit the # of points displayed otherwise you'll just eat up memory over time. - When the control is covered up and then uncovered it needs to repaint itself. Winforms will handle this and by having the values in a list you can easily enumerate back through them to render the image again. - The more points you have the slower it will render. Michael Taylor Perhaps you should try using a Windows.Forms ChartControl which is what it is for. Also see; Tutorial: Creating a Basic Chart Getting Started with Chart Controls Windows Forms Samples Environment for Microsoft Chart Controls La vida loca Hello, I don't see problem draw that data but I see some situations that can make your work harder. Not clear info from you if you're Master or Slave on bus. This is first important thing because if you're Master you can much better handle that. You need poll Slave for data and better use your time for comm exchange with draw. Question is how quickly external data change so sugestion about not realtime draw is correct, simply your draw output will be delayed due characteristic of your communication. In case you're Slave, no good probably, all is depended to Master's send frequency and missed data due freq with your recv and draw. What is that freq if this is your case? In other side pls try think you're that application and think about logical steps as program need to do. Then you can have better look to real requirements.
https://social.msdn.microsoft.com/Forums/windows/en-US/a570256e-7ff3-45e7-926e-e440240ab2fc/draw-coordinates-in-the-picturebox?forum=winforms
CC-MAIN-2019-51
refinedweb
588
73.68
Whenever I want to upload images with my articles, I make sure they are of the right size first and then I have to check the file sizes and if they are too big, I will have to compress them. For this compression, I use Tinypng. They compress your images to a small size all the while keeping the image looking the same. I've tried some other services as well, but TinyPNG is definitely the best as their compression ratio is quite impressive. In this article I'll show you how I'm planning to automate the image compression process using TinyPNG's developer API. And of-course we are going to using python. Setting up First of all, you need to have a developer key to connect to TinyPNG and use their services. So, go to Developer's API and enter your name and email. Once you've registered, you'll get a mail from TinyPNG with a link and once you click on that, you'll go to your developers page which also has your API key and your usage information. Do keep it mind that for the free account, you can only compress 500 images per month. For someone like me, that's a number I won't really be reaching in a month anytime soon. But if do, you should probably check out their paid plans. PS: That's not my real key :D Once you've the developer key, you can start compressing images using their service. The full documentation for Python is here. You start by installing Tinify, which is TinyPNG's library for compression. pip install --upgrade tinify Then we can start using tinify in code by importing it and setting the API key from your developer's page. import tinify tinify.key = 'API_Key' If you've to send your requests over a proxy, you can set that as well. tinify.proxy = "" Then, you can start compressing your image files. You can upload either PNG or JPEG files and tinify will compress it for you. For the purpose of this article, I'm going to use the following delorean.jpeg image. And I'll compress this to delorean-compressed.jpeg. For that we'll use the following code: source = "delorean.jpeg" destination = "delorean-compressed.jpeg" original = tinify.from_file(source) original.to_file(destination) And that gives me this file: If they both look the same, then that is the magic of TinyPNG's compression algorithm. It looks pretty much identical but it did compress it. To verify that, let's print the file sizes. import os.path as path original_size = path.getsize(source) compressed_size = path.getsize(destination) print(original_size/1024, compressed_size/1024) And this prints, 29.0029296875 25.3466796875 1.144249662878058 The file was original 29 KB and now after compression it is 25.3 KB which is a fairly good compression for such a small file. If the original file was bigger, you will be able to see an even tighter compression. And since this is the free version, there's a limit on the number of requests we can make. We can keep track of that with a built-in variable compression_count. You can print that after every requests to make sure you don't go over that. compressions_this_month = tinify.compression_count print(compressions_this_month) You can also compress images from their URL's and store it locally. You will just do: original = tinify.from_url("") And then you can store the compressed file locally just like before. Apart from just compressing the images, you can also resize them with TinyPNG's API. We'll cover that in the tomorrow's article here. So, That is all for this article. For more programming articles, checkout Freblogg, Freblogg/Python Some articles on automation: Web Scraping For Beginners with Python My semi automated workflow for blogging Publish articles to Blogger automatically Publish articles to Medium automatically This is the 13th article as part of my twitter challenge #30DaysOfBlogging. Seventeen.
https://www.freblogg.com/resize-compress-images-in-python
CC-MAIN-2021-31
refinedweb
663
65.93
Using jQuery With Custom XHTML Attributes And Namespaces To Store Data I was talking to Brian Swartzfager yesterday about his new jQuery plugin for editable tables when I started to think about various ways to store meta-data in an XHTML document. I only recently learned about the jQuery data() method, which totally blew my mind! The data() method allows you to store arbitrary data with an XHTML DOM element. But then, I got to thinking about custom attributes. In XHTML, you can use namespaces to add custom attributes to your XHTML DOM elements. I thought maybe custom attributes would be another way to store custom data. I know that namespaces do cause some problems in jQuery when you are searching for DOM elements; but, as far as setting and getting them, I figured that would be worth an experiment. In the following demo, I am creating IMDB link elements based on existing custom attributes: - <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> - <html - xmlns="" - xmlns: - <head> - <title>Using jQuery With Custom XHTML Attributes</title> - <script type="text/javascript" src="jquery-1.2.6.min.js"></script> - <script type="text/javascript"> - $( - function(){ - var jLI = $( "li" ); - // Loop over each list istem to set up link. - jLI.each( - function( intI ){ - var jThis = $( this ); - var jLink = $( "<a></a>" ); - // Set the link text (trim text first). - jLink.text( - jThis.text().replace( - new RegExp( "^\\s+|\\s+$", "g" ), - "" - ) - ); - // Set the link href based on the IMDB - // attribute of the list item. - jLink.attr( - { - "href": jThis.attr( "bn:imdb" ), - "bn:rel": "Pretty Cool Ladies" - } - ); - // Replace the LI content with the link. - jThis - .empty() - .append( jLink ) - ; - } - ); - } - ); - </script> - </head> - <body> - <h1> - Pretty Cool Ladies - </h1> - <ul> - <li bn: - Maria Bellow - </li> - <li bn: - Christina Cox - </li> - <li bn: - Ani DiFranco - </li> - </ul> - </body> - </html> As you can see, the HREF attribute of the new link is based on the bn:imdb custom attribute of each list item. I am then setting a custom REL attribute (bn:rel) based on the type of node we are looking at. When we run the above code, we can see in FireBug that the XHTML was properly updated: I tested this in FireFox, IE7, and Safari and all work fine. I doubt I would ever go this way instead of just using the data() method; but, it's interesting as an alternate approach for simple values. What's nice about it too is that the custom attributes can be built directly into the XHTML by ColdFusion before jQuery even has a chance to execute. This does give it an advantage over the data() method in specific use cases. found what works best for me is to store any custom variables in the name attribute in JSON format. I don't know if this is a good way to do it, but it's pretty darn customizable, scalable, and consistent. for example, I use the dialog UI fairly liberally in most apps I make. So i'll bind the dialog to anchors with the class "dialog", and in the name attribute write something like: name="{title='My Dialog', width:500, height:300}", etc. then use jquery to parse the JSON and create the dialog dynamically. just another way to store metadata inside an xhtml element while keeping it W3C valid. @Eric, The JSON strategy is nice. This could be used quite elegantly in conjunction with the custom namespace attributes: <div app:..</div> The downsite to storing anything in attributes is that it is only easy when dealing with simple data. If we wanted to start storing node references of what not, then I think the data() method becomes the most viable answer. Encoding metadata in an XHTML attribute is definitely a powerful technique. But like anything in development, you should always weighing the benefits against the potential drawbacks: in this case, defining behavior in your markup. Separating a page's content, style, and behavior leads to more maintainable and reusable code. It usually results in a smaller code footprint (one CSS rule can affect a zillion elements) which decreases page load times. It's a common pattern to use markup to flag elements (eg: form inputs) as having properties (eg: they are required fields) triggering behavior in the page's JavaScript. You can take this to all sorts of crazy places. Given a sufficiently expressive API, you can basically start doing metaprogramming right in your markup. I'm not saying any of this is good or bad per se. (I think metaprogramming is just about the awesomest thing ever.) But it could be argued that putting application logic in your markup might reduce the maintainability of your code. Something to think about. . . @Dave, Another issue with putting business logic inline is that someone even mildly savvy in their browser could start changing your attribute values. Hopefully the processing code has validation in place, but if there are any loopholes, this could open you up to easier malicious behaviors. Of course, with things like GreaseMonkey, there's really nothing client-side that can or even should be considered "Secure". So, really, I guess it's not a security issue. Yeah, that's an issue with my JSON approach. I'm not very familiar with security issues around JavaScript or best practices, but to convert JSON to an object I use eval(), and I've always been taught evaluating in any language is a huge no-no and presents a security risk. @Eric, Don't worry about security from a script-execution standpoint; anyone can execute any script on their own page. This cannot be turned into an XSS (cross-site-scripting) attack. The only security issue I would be concerned about would be to have someone change ID values that are then submitted back to the server and processed without validation. For example, imagine someone changing this: <div product:Cool Product</div> ... to be: <div product:Cool Product</div> ... and really only being charged 5.00 on checkout. A silly example, but you'd be shocked how little security some apps have :D I know this is just an example and all, but why would you have the value of the hrefs on the LI instead of actually creating the a hrefs? If you turn off javascript (or surf around with FireFox + Ad Block + No Script), then your text (which are expecting to be links) are just that... plain text? Plus, search engines have no links to follow, so I guess if you were attempting to be SEO unfriendly, this is one way to do it. @Todd, I just needed something to play with :) That's not an actual valid use case. Well, I still wish you would take a peek at the jquery metadata plugin ( ). I have an internal project (timetracker) that I'm working on and I like it. You mentioned that you didn't like having JSON data all over the place. I still insist that if you use it wisely, it can be done right. I have an example as well ( ) - So, the part that is highlighted with yellow is my metadata JSON for the div span. Because the JSON is on the span, I'm not repeating myself on all the A HREFs within the SPAN. Then, on each href, I have a class that dictates what javascript function I want to kick off, how do I get the data? I get it from the a href's parent... the span. $(this).parent('.actions') is looking for the parent above it with the class of .actions. Neat & Tidy IMHO and more importantly, I'm not repeating myself and I'm making use of data containers. @Todd, Don't get me wrong - I love JSON in general. I am not against using it in the way you have, and as you are saying, when used correctly, it can be awesome. The meta-data plugin seems to go along quite nicely with this. Just a short note - you might be interested in jQuery.trim() ... built-in function to trim a string @Hansjoerg, Thanks - I keep forgetting about that one! I found this article when trying to find how to select elements with namespaced attributes, so in case anyone else does too, I'll note what I've found. It seems that if you want to select all items with the "bn:rel" attribute, you CANNOT do this... jQuery('[bn:rel]') ...because jQuery doesn't accept the colon in the selector. There's a workaround using filter, like so: $('*').filter(function(){return $(this).attr('bn:rel')!==undefined}) Eugh! Horribly ugly, but it's only way I've found that'll work. Since I'm doing a fair bit of work with namespaced attributes at the moment, I think I'll have to turn it into a plugin so I don't have to look at it. :S @Peter, I've never actually tried this, but I believe you can escape special characters in jQuery selectors. Try using something like: jQuery('[bn\\:rel]') ... I *think* it might actually support that. Yeah I tried that and still got an error - in Firefox anyway. (I saw something suggesting it was a browser-specific thing.) @Peter, Hmmm, that's not good. I wish I had a better suggestion. At the very least, if you include a tag selector or something, it would probably speed up your query (* is gonna be the slowest). Yeah, that'd definitely make sense in general. For my specific case the attributes can actually be on almost any tag, so it's simpler to just use * than try to list them all - only needs to be fast running on my machine, since it's a backend UI - so far it's good enough. @Peter, I figured as much... I was just trying to come up with *something* valuable to tell you :)
http://www.bennadel.com/blog/1453-using-jquery-with-custom-xhtml-attributes-and-namespaces-to-store-data.htm
CC-MAIN-2015-06
refinedweb
1,640
71.44
Red Hat Bugzilla – Bug 865744 [RFE]shutdown event options "on_crash" does not work for Windows guest Last modified: 2013-01-08 23:14:54 EST Description of problem: shutdown event options "on_crash" in xm domain config does not work for all Windows guests. Version: xen-3.0.3-142.el5 Steps of Reproduce: 1. change the xm domain config options: on_crash, with values: destroy, restart, preserve, rename-restart respectively. 2. create a windows guest. 3. trigger the guest crash by send the SysRq key to guest: #xm sysrq $domain B Actual results: 1. When mark "Automatically restart" checkbox in guest ("System Properites" > "Advanced" -> "Startup and Recovery Settings"> "System failure"), the guest will reboot no matter which values in xmdomain config file setup. when it is not mark, the guest will hang after crash. Expected results: 1. The guest should response to on_crash option. Additional info: 1. Also try to make windows guest crash in another way, we crash the windows XP guest inside the windows using the method in bug 861273, and get same result. 2. as xen source code show: /$xen_source/tools/xenstat/libxenstat/src/xenstat.c unsigned int xenstat_domain_crashed(xenstat_domain * domain) { return ((domain->state & DOMFLAGS_SHUTDOWN) DOMFLAGS_SHUTDOWN) && (((domain->state >> DOMFLAGS_SHUTDOWNSHIFT) & DOMFLAGS_SHUTDOWNMASK) SHUTDOWN_crash); } This may be a problem caused by different domain info standard between linux guest and windows guest. 3. During the late phase of Xen, this is a low priority task, may be not repaired, but we still think it is good to be record the problem. 4.The issue is already discussed in upstream, see link:
https://bugzilla.redhat.com/show_bug.cgi?id=865744
CC-MAIN-2018-05
refinedweb
259
55.34
0 Hello, I'm new to C and I'm not sure what is wrong with my code here. After I execute, I get this: Debug assertion failed! [program address] File: fgets.c Line: 60 Expression: str != NULL It's a really simple program. I'm trying to assign family data from the in file to the out file with formatting (things commented out), but this is just the start of that program but I can't finish it out since I ran into this problem. I searched the forum first but this same error is either unsolved in C, or in C++. Or the situation is different because the code is more complicated. Anyway, here's my full code: #include <stdio.h> #include <string.h> #include <stdlib.h> void main (void) { FILE *in, *out; char people[81]; int number; printf("This program reads in from familyin.txt, formats its data,\n and writes it out to familyout.txt."); in = fopen ("c: \\familyin.txt", "r"); out = fopen ("c:\\familyout.txt", "w"); fgets (people, 80, in); number = atoi(people); printf("The text from familyin.txt is \n %s \n", people); printf("The numberic value is %i. \n\n", number); fprintf(out, "The text from familyin.txt is \n %s \n", people); /*familyin = fopen ( // struct family { // char name [50]; /* person's name */ // char street [50]; /* street address */ // char csz [50]; /* city, state, zip */ // char relation [30]; /* relation to you */ //// char birthday [11]; /* mm-dd-yyyy */ // }; // struct family people[7]; fclose (in); fclose (out); } Edited by CanofCornInfini: n/a
https://www.daniweb.com/programming/software-development/threads/329966/debug-assertion-failure
CC-MAIN-2018-22
refinedweb
256
67.86
In support of the above functions, it also contains these functions: These functions depend upon the Linux Endian functions __be32_to_cpu(), __cpu_to_be32() to convert a 4-byte big-endian array to an integer and vice-versa. For those without Linux, it should be pretty obvious what they do. The shw_hash_update() and shw_hash_final() functions are generic enough to support SHA-1/SHA-224/SHA-256, as needed. Some extra tweaking would be necessary to get them to support SHA-384/SHA-512. Definition in file shw_hash.c. #include "shw_driver.h" #include "shw_hash.h" #include <asm/types.h> #include <linux/byteorder/little_endian.h> Go to the source code of this file.
http://linux-fsl-imx51.sourcearchive.com/documentation/2.6.31-605.7/shw__hash_8c.html
CC-MAIN-2018-13
refinedweb
108
61.43
Active Directory Role-Based Access Control (preview) Microsoft Azure provides integrated access control management for resources and applications based on Azure Active Directory (Azure AD). With Azure AD, you can either manage user accounts and applications specifically for your Azure based applications, or you can federate your existing Active Directory infrastructure with Azure AD for company-wide single-sign-on that also spans Azure resources and Azure hosted applications. You can then assign those Azure AD user and application identities to global and service-specific roles in order to grant access to Azure resources. For Azure Service Bus, the management of namespaces and all related resources through the Azure portal and the Azure resource management API is already protected using the role-based access control (RBAC) model. RBAC for runtime operations is a feature now in public preview. An application that uses Azure AD RBAC does not need to handle SAS rules and keys or any other access tokens specific to Service Bus. The client app interacts with Azure AD to establish an authentication context, and acquires an access token for Service Bus. With domain user accounts that require interactive login, the application never handles any credentials directly. Service Bus roles and permissions For the initial public preview, you can only add Azure AD accounts and service principals to the "Owner" or "Contributor" roles of a Service Bus Messaging namespace. This operation grants the identity full control over all entities in the namespace. Management operations that change the namespace topology are initially only supported though Azure resource management and not through the native Service Bus REST management interface. This support also means that the .NET Framework client NamespaceManager object cannot be used with an Azure AD account. Use Service Bus with an Azure AD domain user account The following section describes the steps required to create and run a sample application that prompts for an interactive Azure AD user to log on, how to grant Service Bus access to that user account, and how to use that identity to access Event Hubs. This introduction describes a simple console application, the code for which is on Github. Create an Active Directory user account This first step is optional. Every Azure subscription is automatically paired with an Azure Active Directory tenant and if you have access to an Azure subscription, your user account is already registered. That means you can just use your account. If you still want to create a specific account for this scenario, follow these steps. You must have permission to create accounts in the Azure Active Directory tenant, which may not be the case for larger enterprise scenarios. Create a Service Bus namespace Next, create a Service Bus Messaging namespace in one of the Azure regions that have preview support for RBAC: US East, US East 2, or West Europe. Once the namespace is created, navigate to its Access Control (IAM) page on the portal, and then click Add role assignment to add the Azure AD user account to the Owner role. If you use your own user account and you created the namespace, you are already in the Owner role. To add a different account to the role, search for the name of the web application in the Add permissions panel Select field, and then click the entry. Then click Save. The user account now has access to the Service Bus namespace, and to the queue you previously created. Register the application Before you can run the sample application, register it in Azure AD and approve the consent prompt that permits the application to access Azure Service Bus on its behalf. Because the sample application is a console application, you must register a native application and add API permissions for Microsoft.ServiceBus to the "required permissions" set. Native applications also need a redirect-URI in Azure AD which serves as an identifier; the URI does not need to be a network destination. Use for this example, because the sample code already uses that URI. The detailed registration steps are explained in this tutorial. Follow the steps to register a Native app, and then follow the update instructions to add the Microsoft.ServiceBus API to the required permissions. As you follow the steps, make note of the TenantId and the ApplicationId, as you will need these values to run the application. Run the app Before you can run the sample, edit the App.config file and, depending on your scenario, set the following values: tenantId: Set to TenantId value. clientId: Set to ApplicationId value. clientSecret: If you want to log on using the client secret, create it in Azure AD. Also, use a web app or API instead of a native app. Also, add the app under Access Control (IAM) in the namespace you previously created. serviceBusNamespaceFQDN: Set to the full DNS name of your newly created Service Bus namespace; for example, example.servicebus.windows.net. queueName: Set to the name of the queue you created. - The redirect URI you specified in your app in the previous steps. When you run the console application, you are prompted to select a scenario; click Interactive User Login by typing its number and pressing ENTER. The application displays a login window, asks for your consent to access Service Bus, and then uses the service to run through the send/receive scenario using the login identity. Next steps To learn more about Service Bus messaging, see the following topics.
https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-role-based-access-control
CC-MAIN-2018-51
refinedweb
907
52.19
I have been doing quite a bit of analysis of the verify error that I am seeing with Janino compiled code. This is on a Win32 system with JDK 1.4 from Sun. It seems the problem is actually in the Java verifier, but Sun's javac does not trigger it because it generates code that differs from the code generated by Janino. Here is a very small example that triggers the bug. public class Test {"); } } } This class compiles and runs correctly with javac. When compiled with Janino, the following error is generated: $ janinoc Test.java $ java Test Exception in thread "main" java.lang.VerifyError: (class: Test, method: foo sign ature: ()V) Accessing value from uninitialized register 2 The problem appears to be that a use of the local var slot 2 generating the error. The while loop reads slot 2 when it tests the boolean "b". It turn out that the generated code also uses slot 2 is jsr/ret instructions. One can test this slot 2 thing by commenting out the "dummy1" and "dummy2" variables. The following example compiles and does not fail the validate step, the only diff is that the two local vars are commented out. public class Test2 {"); } } } $ janinoc Test2.java $ java Test2 OK Taking a peek at the generated code emitted by javac and janino helps to understand the problem. I have only included the byte code for the foo() method here. // Javac output for Test.java public static void foo(); Code: 0: iconst_5 1: istore_0 2: iconst_4 3: istore_1 4: iconst_1 5: istore_2 (while non-constant expr test) 6: iload_2 7: ifeq 15 (end of method foo) 10: return 11: astore_3 12: goto 6 (catch block, invoke finally block) 15: goto 23 (unhandled exception) 18: astore 4 20: aload 4 22: athrow (finally block) 23: return // Janino output (Test.java, this fails validation) public static void foo(); throws Code: 0: iconst_5 1: istore_0 2: iconst_4 3: istore_1 (b = false) 4: iconst_1 5: istore_2 (while) 6: goto 14 (return) 9: jsr 27 12: return 13: astore_3 (while non-constant expr test) 14: iload_2 15: ifne 9 18: goto 30 (catch block, invoke finally block) 21: astore_0 22: jsr 27 (unhandled exception) 25: aload_0 26: athrow (finally block) 27: astore_2 28: ret 2 30: jsr 27 33: return At first glance, it does not look like anything is wrong with the code emitted by Janino. In fact, one can compare the janino emitted code for Test.java to the janino emitted code for Test2.java: // Janino code for Test2.java (no dummy vars) public static void foo(); throws Code: (b = false) 0: iconst_1 1: istore_0 (while) 2: goto 10 (return) 5: jsr 23 8: return 9: astore_1 (while non-constant expr test) 10: iload_0 11: ifne 5 14: goto 26 (catch block, invoke finally block) 17: astore_0 18: jsr 23 (unhandled exception) 21: aload_0 22: athrow (finally block) 23: astore_2 24: ret 2 26: jsr 23 29: return The janino emitted code for Test.java fails, but the same logic does not fail for Test2.java. The diff is that Test2.java does not use local slot 2 for the boolean in the while expr condition. The boolean "b" in Test2.java uses slot 0 so the use of slot 2 in the jsr/ret instructions does not trigger the validation error. I see a couple of possible approaches to work around this issue. Janino could be modified to emit code that worked more like javac, so that jsr/ret instructions would not be used in these situations. Another possible approach would be to detect when a loop uses a register that will also be used in try/catch/finally blocks. The try/catch/finally blocks could then use some other register to implement the jsr/ret logic. Any advice as to which approach would be better? cheers Mo DeJong // Allocate a LV for the JSR of the FINALLY clause. // // Notice: // For unclear reasons, this variable must not overlap with any of the body's // variables (although the body's variables are out of scope when it comes to the // FINALLY clause!?), otherwise you get // java.lang.VerifyError: ... Accessing value from uninitialized localvariable 4 // See bug #56. // Save the exception object [ of the FINALLY clause] in an anonymous local variable. ... // The exception object local variable allocated above MUST NOT BE RELEASED // until after the FINALLY block is compiled, for otherwise you get // java.lang.VerifyError: ... Accessing value from uninitialized register 7 Fix will go into 2.4.5.
http://jira.codehaus.org/browse/JANINO-56
crawl-002
refinedweb
753
61.67
[WIP] Support exploring archive files attached to reviews. Review Request #11402 — Created Jan. 24, 2021 and updated ReviewBoard currently supports attaching individual files for review, but while some filetypes can be opened in-browser and examined, archive files can only be downloaded. This commit adds a new review UI that allows archives to be browsed in ReviewBoard without downloading the file. Users can navigate the archived files, see metadata, and download individual files. Unit tests for get_js_model_datausing empty archives, simple archives with files, and archives with nested directories. This model will have all of the data that the view will display. The templates will json-encode the return value from your ArchiveReviewUI.get_js_model_data()method and pass that into the constructor for your model here (which is how data gets from the Python on the server into JS on the client). In this case you'd probably have some kind of tree data representing the archive contents. The view then takes the data stored in the model and displays it. Change Summary: Update the Zip archive parsing to use intermediary class hierarchy instead of directly parsing to a Python dictionary Checks run (1 failed, 1 succeeded) flake8 Let's make sure that everything after basehere stays in alphabetical order. I'm sure it's already on your radar, but this is kind of backwards. We should check explicitly for application/x-tarwhen instantiating the TarArchiveFormat, and the elseclause should raise. Inn fact, we might want to have a dict mapping mimetype to the format class so it can just be something like: try: archiveFormat = self.formats[self.obj.mimetype] archive = archiveFormat(self.obj.filename, file_bytes) data['archive_hierarchy'] = archive.to_dict() except KeyError: raise NotImplementedError We can probably leave this out for now. Unless you're planning on using backbone routing to add a bunch of separate URLs that are all served by the same view, we won't need this. Have you given some thought to how wide the indent would be for each directory level contained within the zip? You may have to make the default width of the Name column match the string length of the deepest file with the added indents (i.e. if there are 3 levels of directories then the minimum width of the name column should be the width of the longest filename is this directory level with three indents). If you want the Name column width to be fixed for all zips, tarballs or other archive files then you may need to speak with the mentors to figure out the average number of directory levels in zip files to calculate the average number of indents that will be required). If you are interested in considering an alternate way of displaying the different directory nesting levels you might want to consider subsituting the indent with the relative path for the file as shown below. test.zip file.text directory_name directory_name/other_file.text It seems like there is a lot of room for the table to grow within the File tab. Some questions you may want to think about: -Will the name column have a max width? If yes how will that max width be calculated relative to the window size and file tab size. -How will you display file names (with or without indents) that exceed the name column width? Will you use a scroll bar? -If you use a scroll bar where will that be place to avoid text being covered (either column name or file names)? Do the scroll bars used in the rest of Review Board work with your design? I think this is a good idea. I think a seperate download button will make things a bit clearer for users that haven't used this feature before. You'll just need to take into consideration where this download button should be placed and how easily it can be found at different screen widths. It should match any other tables that exist in review board. An example would be the dashboard that displays all review requests but before you start implementing this it might be worth asking one of the mentors if there is another table that would be a better reference for you to use. Maybe a drop down/popup that appears when you hover over the file name? Change Summary: Add support for downloading individual files with in an archive Checks run (1 failed, 1 succeeded) flake8 Your work looks really interesting. I have some questions that are just for my own knowledge and also to get some clarifications on a few things. I also found a few white spaces that need to be removed. I'm not sure if you're there yet but you may want to update your mockup just so that when you want to start implementing you have a clear design that you can work from. It might save you time instead of having to design and implement at the same time. Keep up the great work! What were you thinking you might use for this extraction? Are you considering a third party service or are you going to code it yourself? For my own understanding, why are mime types guessed here? I've never heard of that before. Why is "wah" added to the end of this? There is some white space on lines 7,8 and 12 that needs to be removed. I think this whitespace needs to be removed Michael, your project seems so interesting for sure! The minor and general slips that I've seen are for the import statements, where you need to sort it alphabetically, e.g ```from reviewboard.reviews.ui.archive import ArchiveReviewUI should come before the ```from reviewboard.reviews.ui.base import register_ui or the from xxx import C,A,B should be from xxx import A,B,C.
https://reviews.reviewboard.org/r/11402/
CC-MAIN-2021-21
refinedweb
971
71.34
This post has been edited by pharylon: 28 April 2013 - 08:26 AM Add new WPF Controls from NuGet to Toolbox?Page 1 of 1 2 Replies - 4904 Views - Last Post: 28 April 2013 - 01:37 PM #1 Add new WPF Controls from NuGet to Toolbox? Posted 28 April 2013 - 08:25 AM So, I've been trying to work more with WPF instead of Winforms lately, and I wanted an autocomplete textbox today. I got WPF Toolkit from NuGet, but I can't figure out how to actually add the new controls to my Toolbox. My understanding is once I add an XMLNS refernce, such as xmlns:local="clr-namespace:MyWpfProject" I should be able to find it by right-clicking the toolbox and choosing "Choose Items..." but I'm not seeing the controls I now supposedly have access to. I know this is pretty basic, but my Google-Fu is failing me here. Replies To: Add new WPF Controls from NuGet to Toolbox? #2 Re: Add new WPF Controls from NuGet to Toolbox? Posted 28 April 2013 - 10:52 AM I usually right-click on project in Solution Explorer, and use "Manage NuGet Packages..." to install what I need. Then right-click to Toolbox (potentially Add Tab first) and click Choose Items... Then navigate to Packages folder inside your solution folder (by using Browse button), and choose package's dll (usually inside lib folder). #3 Re: Add new WPF Controls from NuGet to Toolbox? Posted 28 April 2013 - 01:37 PM I knew it was something easy, but I'd have never figured that out! Thanks for the help! If I may ask you another question, is there an easy way to set the style of this item to look like the system standard? For whatever reason, the WPF Toolkit box use a thicker and more rounded border for the AutoComplete textbox than the standard Windows ones and it really clashes with the rest of my form. Page 1 of 1
http://www.dreamincode.net/forums/topic/319756-add-new-wpf-controls-from-nuget-to-toolbox/
CC-MAIN-2016-22
refinedweb
333
80.11
13 Oct 06:59 2013 Re: Install DESeq2 on windows R 3.0.1/2 刘鹏飞 <liupfskygre@...> 2013-10-13 04:59:20 GMT 2013-10-13 04:59:20 GMT Hi£¬ Thanks Steve. I am not familiar with R and did not keep the sessionInfo() as the error appeared. And I think now the problem is solved by using Rtools to install package locfit from sources. >Can you explain what you mean by "automatically install"? Are you installing using the biocLite function, or? Yes, by using the following on R 3.0.1 source("") biocLite("DESeq2") > I search the CRAN (), but the lattest > version was built on R 2.15. >I'm curious where you are able to find that information from the website? I did not find that information from the website. The error occur in Chinese so I translated into English: locfit was bulit before R 3.0.0. So I serarch the CRAN for the lattest version of windows binary and get locfit_1.5-9.1.zip, after remove the install locfit package, I reinstall the the locfit_1.5-9.1.zip from R by the install from zip file way. After I check the descrption of the locfit package in the R/library/locfit and find the information that it is built on R 2.15.3 (Built: R 2.15.3; i386-w64-mingw32; 2013-10-07 04:27:30 UTC; windows ). which means the lattest version is built before R 3.0.0, the same as the error suggested. >Again -- what do you mean by "automatically installation" of R 3.0.2? source("") biocLite("DESeq2") error : DESeq2 is not available now for R 3.0.2 Now I solve the problem by following steps 1, copy the DESeq2 file in the library of R 3.0.1 to the library R 3.0.2 (several packages need to be modified because the lattest version of them built on R 3.0.2, so in R 3.0.1 I need to change all of them, but in R 3.0.2 I just need to reinstall locfit) 2, Install Rtools R 3.0.x and using the following command to install locfit from source in cmd: Rcmd INSTALL ¡°C:/Users/microbe/Downloads/locfit_1.5-9.1.tar.gz¡± -l ¡°D:\R-3.0.2\library¡± and the description of licfit now is: Built: R 3.0.2; i386-w64-mingw32; 2013-10-10 10:50:00 UTC; windows now after load DESeq2 >sessionInfo() R version 3.0.2 (2013-09-25)] parallel stats graphics grDevices utils datasets methods [8] base other attached packages: [1] DESeq2_1.0.19 RcppArmadillo_0.3.920.1 Rcpp_0.10.5 [4] lattice_0.20-23 Biobase_2.21.7 GenomicRanges_1.12.5 [7] IRanges_1.19.38 BiocGenerics_0.7.5 loaded via a namespace (and not attached): [1] annotate_1.39.0 AnnotationDbi_1.23.27 DBI_0.2-7 [4] genefilter_1.43.0 grid_3.0.2 locfit_1.5-9.1 [7] RColorBrewer_1.0-5 RSQLite_0.11.4 splines_3.0.2 [10] stats4_3.0.2 survival_2.37-4 XML_3.98-1.1 [13] xtable_1.7-1 seems all OK, am I right? 2013/10/11 Steve Lianoglou <lianoglou.steve@...> > Hi, > > On Thu, Oct 10, 2013 at 6:30 PM, ÁõÅô·É <[email protected]> wrote: > > Hi > > I have a problem on installation of DESeq2 on windows. I use R 3.0.1. > > After automatically install, I loaded DESeq2, the error message: > > the package locfit is built before 3.0.0, you should reinstall it. > > Can you explain what you mean by "automatically install"? Are you > installing using the biocLite function, or? > > > I search the CRAN (), but the lattest > > version was built on R 2.15. > > I'm curious where you are able to find that information from the website? > > > How could I solve this problems? > > > > BTW, automatically installation on R 3.0.2 was not available, > > Again -- what do you mean by "automatically installation" of R 3.0.2? > > > I copy the > > DESeq2 file in the library of R 3.0.1, the same problem appers. > > Ca you paste the output of sessionInfo() after this warning is triggered? > > -steve > > -- > Steve Lianoglou > Computational Biologist > Bioinformatics and Computational Biology > Genentech > -- Pengfei Liu, PhD Candidate Lab of Microbial Ecology College of Resources and Environmental Sciences China Agricultural University No.2 Yuanmingyuanxilu, Beijing, 100193 P.R. China Tel: +86-10-62731358 Fax: +86-10-62731016 E-mail: liupfskygre@... If you are afraid of tomorrow, how can you enjoy today! Keep hungry, Keep foolish! Moving forward! [[alternative HTML version deleted]] _______________________________________________ Bioconductor mailing list Bioconductor@... Search the archives:
http://permalink.gmane.org/gmane.science.biology.informatics.conductor/50900
CC-MAIN-2015-18
refinedweb
767
70.19
Share This Document Copy and Paste URL Page Jesus did not know the “day and hour” of his return (Matt. 24:36). However, he knew it would take place before his generation had expired. This clearly precludes a delay spanning two millennia or even a single century. 3 Pages The Christian believers think that the Bible is required to be true. Matt.16:27-28 has a clear meaning that the event of 'end of the world' will happen before the death of some of his apostles. It has not happened. So the question of reliability of Bible, Jesus and God arises. It is necessary that the Church has to give some explanation. So they are trying to find out some other meaning to those words. No believer can accept the truth or fact, if it is against his religious faith.
http://www.scribd.com/doc/7854960/Jesus-predicted-a-firstcentury-return
crawl-002
refinedweb
141
74.49
Oscar Calvo's Blog Random thoughts focused on Modeling, Software Factories, Visual Studio Extensibility, Xbox 360, Windows Media and Windows MCE. Unit testing and the Scientific Method Has it ever happen to you when working on a bug fix that after investing time on a fix you realized... Author: OscarCalvo Date: 06/28/2011 How to: Display modeling diagrams outside Visual Studio In one of Cameron’s blog post we show how to load and save a diagram from within Visual... Author: OscarCalvo Date: 05/24/2010 Not really the last vsvars32 you will ever need...but close I took Peter avise and for a while I've been using Chris's vsvars32, however it had some drawbacks,... Author: OscarCalvo Date: 11/25/2009 How to: Run Expression Encoder 3 under PowerShell remoting James wanned me to explain the set of steps I use to enable Convert-Media under a PowerShell... Author: OscarCalvo Date: 11/06/2009 Announcing Dev10 Beta 2 Visdual Studio 2010 Beta is now available here. Please start pounding it on it and let us know what... Author: OscarCalvo Date: 10/22/2009 Notes on the Synthesis of Form I have been reading Christopher Alexander's "Notes on the Synthesis of Form", if you are into "Test... Author: OscarCalvo Date: 05/22/2009 Visual Studio 10 CTP There are tons of new cool features in Visual Studio Team Architect 10. - New UML Designers -... Author: OscarCalvo Date: 10/28/2008 Powershell As Peter I have became infected with Powershell, and to make it worse, there is console mode editor... Author: OscarCalvo Date: 07/25/2008 Going to TechEd I'll be in TechEd at the Visual Studio Team Architect demo station. See you there. Author: OscarCalvo Date: 05/31/2008 Intro Hello, after working with DevDiv and p&p as vendor for the past 7 years (wow, I can't believe... Author: OscarCalvo Date: 12/27/2007 Dynamically add commands to your VSIP Package The set of commands that you can expose in a Visual Studio Package is fixed and defined by the .ctc... Author: OscarCalvo Date: 12/27/2007 How to enable TV ratings under parental controls in MCE If you don’t have TV Ratings under your parental settings in Windows Media Center, most likely is... Author: OscarCalvo Date: 12/27/2007 A Guidance Package on stereoids With the upcoming release of GAT for Visual Studio Whidbey RTM we have been working on a few new... Author: OscarCalvo Date: 12/27/2007 A generic light weight workspace for CAB If you used to Workspaces in CAB, you will find that CAB Workspaces have two problems: 1. They... Author: OscarCalvo Date: 12/27/2007 A better way to implement the View/Presenter pattern in CAB The current implementation of the MVP(Model View Presenter) pattern in CAB have several issues that... Author: OscarCalvo Date: 12/27/2007 A new Kind of Recipe I have never really liked how you define a Recipe in GAX. I mean, XML is one of the worst languages... Author: OscarCalvo Date: 12/27/2007 The Evil EnvDTE namespace If you have ever written an AddIn for Visual Studio or a Package, you must likely recognize the... Author: OscarCalvo Date: 12/27/2007 The Evil EnvDTE namespace Link Author: OscarCalvo Date: 12/27/2007
https://docs.microsoft.com/en-us/archive/blogs/oscar_calvo/
CC-MAIN-2020-10
refinedweb
551
58.72
User groups and group permissions in newforms adminAug 06, 2008 Django Tweet If you follow strictly Brian Rosner's screencast on converting to newforms-admin, you'll find you suddenly end up with an admin site that takes away User and Group (permissions) editing options. That's no slight against Brian, believe me - it's a great screencast, and I wouldn't have been able to convert my own project so quickly without his help. But I do still need to be able to edit User accounts and Group permissions from the admin. So after a little trial and error, I was able to discover what to do differently: In your project-level urls.py, leave in both of these things: (If you remove 'admin.autodiscover()', you'll probably be greeted with the message "You don't have permission to edit anything" on your admin page.) Be sure to include this in your urlpatterns: from In your project-level admin.py, import a few extra models: (r'^admin/(.*)', site.root), User and Group should be the only models you need for managing user/group permissions, but it's probably worthwhile to poke around in the framework source (django/contrib/auth/models.py) for details. In the same project-level admin.py, where you're probably already registering your applications' model and admin classes, go ahead and register those two models like so: from django.contrib.auth.models import User, Group There's no need to pass an admin class, Django knows what to do with those models. And that's it - you should now have User and Group in your admin, along with whatever other models/admins you've defined for your project. site.register(User) site.register(Group)
http://www.mechanicalgirl.com/post/user-groups-and-group-permissions-newforms-admin/
CC-MAIN-2021-17
refinedweb
289
53.1
I have a folder with filenames bio1_36.tif, bio1_37.tif, bio2_36_tif, bio2_37.tif, and so on. There are 19 bio variables and 36 and 37 at the end refer to the tile numbers. My goal is to mosaic the two tiles for each variable, clip to the size of a polygon layer called “Katanga” and convert the final output to ascii format. I’ve been using the code below however this will only generate the correct output if the two tiles for the same bio variable are specified in the code or these two files are the only input in the workspace folder. How can I do the above procedure in a recursive manner for each bio variable, i.e. setting up a loop? This way I don’t have to create separate folders for each bio variable or specify them in the code. I am new to Python. Below is the code I’ve been using (note: "test" folder only contains one bio variable with the two tiles): import arcpy from arcpy import env from arcpy.sa import* import time start_time = time.clock() env.workspace = "G:/Eck_health_disease/spatial data/WorldClimate data/test" arcpy.env.overwriteOutput = True ImgList = arcpy.ListRasters() print "Processing Mosaic" print "Number of tiles to be Mosaicked:" + str(len(ImgList)) arcpy.MosaicToNewRaster_management(ImgList, env.workspace, "bio1_mos.tif", pixel_type="32_BIT_FLOAT", cellsize="", number_of_bands="1", mosaic_method="", mosaic_colormap_mode="MATCH") Katanga_poly = "G:/Eck_health_disease/spatial data/COD_adm_shp/DRC_Katanga.shp" arcpy.Clip_management("bio1_mos.tif","21.7447528839 -13.4556760785 30.7780862172 -4.99734274513", "bio1_mos_Kat.tif", Katanga_poly, "#", "ClippingGeometry", "MAINTAIN_EXTENT") arcpy.RasterToASCII_conversion("bio1_mos_Kat.tif", "bio1_mos_Kat.asc") print "Task Completed!" print time.clock() - start_time, "seconds" Any help would be appreciated. Thank you! Solved! Go to Solution. I've updated my code to search all the bioX rasters for each iteration. You can iterate your variable numbers: import arcpy from arcpy import env from arcpy.sa import* import time start_time = time.clock() env.workspace = "G:/Eck_health_disease/spatial data/WorldClimate data/test" arcpy.env.overwriteOutput = True print "Processing Mosaic" print "Number of tiles to be Mosaicked:" + str(len(ImgList)) Katanga_poly = r"G:\Eck_health_disease\spatial data\COD_adm_shp\DRC_Katanga.shp" for i in range(1,19): ImgList = arcpy.ListRasters("bio"+str(i)+"*") arcpy.MosaicToNewRaster_management(ImgList, env.workspace, "bio"+str(i)+"_mos.tif", pixel_type="32_BIT_FLOAT", cellsize="", number_of_bands="1", mosaic_method="", mosaic_colormap_mode="MATCH") arcpy.Clip_management("bio"+str(i)+"_mos.tif","21.7447528839 -13.4556760785 30.7780862172 -4.99734274513", "bio"+str(i)+"_mos_Kat.tif", Katanga_poly, "#", "ClippingGeometry", "MAINTAIN_EXTENT") arcpy.RasterToASCII_conversion("bio"+str(i)+"_mos_Kat.tif", "bio"+str(i)+"_mos_Kat.asc") print "Task Completed!" print time.clock() - start_time, "seconds" Hello, Thank you very much for the suggestion. However, this code generate only one mosaic file named *bio1_mos.tif *instead of two when I tried on a test folder which contained bio1_36.tif, bio1_37.tif, bio2_36_tif, bio2_37.tif files. And, all of these files were used in the mosaicking. What I want is to mosaic only bio1_36.tif, bio1_37.tif and create an output named *bio1_mos.tif. *Then, mosaic bio2_36_tif, bio2_37.tif and generate an output file named bio2_mos.tif and do this for all 19 variables. Thank you for your help. Regards, Kemal I've updated my code to search all the bioX rasters for each iteration. Thank you! That does the trick. Can you tell me what "*" does in the updated line "bio"+str(i)+"*" ? Also, range (1,19) goes from 1 to 18 so for 19 variables it needs to be (1,20). I figured that out when I was working with a test folder with four files. Thanks again! It saved a lot of time. The "*" is a wildcard character that basically means any character after the "bio" string, so for this example the ListRasters function will be filtered to all datasets starting with "bio" and the selected parameter number e.g. all "bio1" rasters. Thanks for the explanation. Now, I can use it in other situations. Best, Kemal Hello again, I've also been working on the code below. Maybe you can help me with this if it makes sense. I could get it to generate the bio1_mos and bio2_mos tif files but the processing of mosic to new raster is giving error and the files are empty. Please note that "test" folder contains bio1_36 and bio2_36.tif files. test2 file contains bio1_37 and bio2_37.tif files. Thank you! import arcpy, os, sys from arcpy import env from arcpy.sa import * import time start_time = time.clock() this will overwrite output. Important in testing codes arcpy.env.overwriteOutput = True activate the spatial analyst extention of ArcGIS arcpy.CheckOutExtension("Spatial") path = "G:/Eck_health_disease/spatial data/WorldClimate data/test" bio36 = f=[] for fname in bio36: with open(os.path.join(path, fname), 'r') as fr: f.append(fr.readlines()) path = "G:/Eck_health_disease/spatial data/WorldClimate data/test2" bio37 = f=[] for fname in bio37: with open(os.path.join(path, fname), 'r') as fr: f.append(fr.readlines()) newList=[] mosaic =[] basename=[] ##counter = 1 for i in range(len(bio36)): for j in range(len(bio37)): try: workspace = arcpy.env.workspace = "G:/Eck_health_disease/spatial data/WorldClimate data/bio_mos" basename = os.path.basename(bio36).split("_")[0] r1 = arcpy.sa.Raster(i) r2 = arcpy.sa.Raster(os.path.join(bio37, basename + "_37.tif")) newList = [bio36, bio37] list = ";".join(newList) rasterList = arcpy.ListRasters(NewList) Fn = raster[0:1] outname = basename + "_mos.tif" mosaic = arcpy.MosaicToNewRaster_management(list,workspace,outname,"","32_BIT_FLOAT", "", "1", "","MATCH") mosaic.save(os.path.join(workspace,outname)) except: print "Mosaic failed." print arcpy.GetMessages() print "Task Completed!" print time.clock() - start_time, "seconds"
https://community.esri.com/t5/geoprocessing-questions/mosaicking-only-certain-files-within-a-folder/td-p/618156
CC-MAIN-2022-33
refinedweb
912
53.58
Manually define the back button. var me = this, // navigationview nav = me.getNavigationBar(); // use custom defined back button instead of an abstract default ... Type: Posts; User: Greg Arnott Manually define the back button. var me = this, // navigationview nav = me.getNavigationBar(); // use custom defined back button instead of an abstract default ... Easily so. You can even have both the projects in the /app/ folder as /app/app-OURAPP and /app/app-THEIRAPP. The requires would then have another term in the reference. ie:... initialize: function() { this.callParent(); var me = this, fileup = Ext.create('Ext.ux.Fileup', { itemId : 'fileLoadBtn', ... Apple has some interesting protections set by default. The main one you'll notice is videos being unable to be autoplayed, and must be manually activated by the user. Files and file upload is... Your code requires to be commented in a particular format (jsduck's github wiki describes in detail). The way I prefer is using Sublime Text and the DocBlockr plugin which automatically generates... Settings in app.json. Event Binding: name After 7 months of development, starting 3 months after the last update to this post. A project with over 100 files, each with their own assortment of methods and cross-references.... Literally... * {@link Class#member link text} Thus you'd probably enter: /** * ... * The method to {@link Application#addEvents add events} throughout the application. * ... Use DocBlockr plugin for Sublime Text. DocBlokr adds a documentation comment block based on the following line when you type "/**". For auto-documentation, you'd probably have to switch to jsDoc.... Architect instructions: select textfield to get value from. Config Panel -> "Event Bindings" else drag a "Basic Event binding" from the Toolbox onto textfield enter "change" as the event ... The benefits of sticking with the Adobe Suite definitely diminished for me over time. Overall, I'd say they are a professional set of tools more beneficial for those in a low-to-mid bracket of... Workaround?: Ext.Viewport.on({ scope: this, widthchange: function() { if (Ext.Viewport.element.getWidth() > Ext.Viewport.element.getHeight()) { // fire... When destroy is leading to issues through animation/scrolling, set a halt to the movement, else add a defer to the destroy call to cater for the time the device needs to be ready. Listpaging was regularly offering buckets of grief until I added it as: plugins: [ { xclass: 'Ext.plugin.ListPaging', autoPaging: true, ... I use change as the trigger. "searchField": { change: 'onSearchfieldChange' } onSearchfieldChange: function(textfield, newValue, oldValue, eOpts) {...} This link shows a method of applying a youtube video, with an improved method on the following. On mitchellsimoens ... Create a NavigationView. To go along with your naming, call it Main, ie: Ext.define('kody2.view.Main', { extend: 'Ext.navigation.View', alias: 'widget.main', If you have an Adobe CS subscription, Adobe Edge Inspect and the Adobe Edge Inspect CC browser extension is a nice extension of the weinre toolset. ADB can likewise provide some handy debugging... Don't autoload data to list. Else, use a different store for list, and populate the list by loading this store with decoded values. Else, hide list until decoded values return. //... APPEND/PREPEND: To the container, or to the generated content within the container? // if the generated content of container needs stuff added var numitems =... Have you considered trying something like this: this.on{( focus: function() { field.wrap({ tag: 'form' }); }, blur : function() { Have a look at Ext.Loader. NB: Sencha Touch 2 Format being used here. As this plugin is using "Ex.util" namespace by being defined as "Ext.util.LocalStorageCookie", you need to set that path as a loader config (start of... My thanks for your time and assistance in this matter Gil.
https://www.sencha.com/forum/search.php?s=5c67a75444ec95d78a9766d2e347118d&searchid=20487244
CC-MAIN-2018-17
refinedweb
604
61.02
This post explores how to manage multiple view models across modules in a Prism-based Silverlight application. One powerful feature of Prism is the ability to dynamically load modules. This allows reduction of the XAP file size, as well as encourages a smaller memory footprint as certain modules are not brought into the application until they are needed. A common issue I find developers struggle with is the consistent use of view models. I've seen some elegant solutions that involve storing values in isolated storage to move the values between modules (elegant, but overkill) when in fact a common view model shared between modules would have been fine. I'm going to assume you are familiar with Prism and know how to build an application that dynamically loads modules as they are accessed. You may want to refer to my article Dynamic Module Loading with Silverlight Navigation if you need more background information. Assume you have a project that is dynamically loading modules and injecting them into the shell. For my example, I'm going to assume the application does not require deep linking, so it is not based on the navigation framework (it could be fit easily). In order to facilitate my own navigation, I create a navigation service. The navigation service is a singleton and has the module manager injected to it. It contains an enumeration of controls to display, and a "module map" that maps the control to the module the control lives in. It also exposes a "view changed" event that it fires with the enumeration whenever a new view is navigated to. The logic is quite simple (in this example, my enumeration is passed in as view, and the value of the enumeration maps nicely to the index of an array of modules that host views). MODULEINIT is just a shell based on the module naming convention, for example "MyApp.Module{0}.ModuleInitializer" or similar. The module manager is injected in the constructor. Each time we navigate to a view, we call this so the module can be loaded if it hasn't been already. Prism keeps track of loaded modules and won't try to pull the XAP across the wire more than once. _moduleManager.LoadModule(string.Format(MODULEINIT,_moduleMap[(int)view])); EventHandler<ViewChangedEventArgs> handler = ViewChanged; if (handler != null) { ViewChangedEventArgs args = new ViewChangedEventArgs(_currentView, view); handler(this, args); } As you can see, very straightforward. We remember the current view and send that with the new view to the event. The views themselves can register to the event. If the views are built with a VisualStateGroup to handle swapping them into and out of view, then the logic is simply: if I am the old view, go to my hidden state, else if I am the new view, go to my visible state. Then the views can live in an ItemsControl with only one view showing at a time. For my views, I create a ViewBase class that is, in turn, based on UserControl. This lets me manage some common housekeeping for the views that will be bound to view models. First, I exposed a protected method called _ViewBaseInitialize to call after the component is initialized. This takes in the view model and binds it, as well as wires into the navigation change event. I know some people won't like injecting the view model and there are certainly other ways to marry the view and its model, but for our example this will do. Our logic looks simply like: protected void _ViewBaseInitialize(object viewModel) { Loaded += (o, e) => { MasterModel model = DataContext as MasterModel; model.Navigation.ViewChanged += new EventHandler<ViewChangedEventArgs>(nm_ViewChanged); if (viewModel != null) { model.ModuleViewModel = viewModel; } }; if (currentView.Equals(e.OldView)) { VisualStateManager.GoToState(this, "HideState", true); } else if (page.Equals(e.NewView)) { VisualStateManager.GoToState(this, "ShowState", true); } } You'll note the introduction of the MasterModel. This view model is bound at the shell level, so it is available to all of the views hosted in that shell. Typically, a shell as a Grid or similar item as the root layout panel, so in my bootstrapper I handle wiring up the master view model: protected override DependencyObject CreateShell() { Container.RegisterType<NavigationManager>(new ContainerControlledLifetimeManager()); NavigationManager nm = Container.Resolve<NavigationManager>(); Container.RegisterType<MasterModel>(new ContainerControlledLifetimeManager()); Shell shell = Container.Resolve<Shell>(); Application.Current.RootVisual = shell; nm.NavigateToView(NavigationManager.NavigationPage.Login); return shell; } Because the constructor of the master model takes in a "navigation manager" instance, it receives the instance we just configured. Likewise, the shell will receive the instance of the view model and bind that to the data context of its main grid. For all practical purposes, the MasterModel definition looks like this: public class MasterModel : IViewModel { public NavigationManger Navigation { get; set; } public MasterModel(NavigationManager nm) { Navigation = nm; } public IViewModel SubModel { get; set; } } You can see that it takes the navigation manager as well as a sub model where needed. This is key: it doesn't have knowledge about the modules it will be interacting with, so it only knows about the IViewModel. We'll get more specific in a bit. The master module can hold things like settings, static lists, authentication, etc, to pass down to sub modules as needed. Now let's get down to an actual module that will use this. Let's say I have a module called ModuleUserManager and it has a view model called UserManagerModel. It needs a token for security that is stored in the master module. First, let's extend the ViewBase to make it easier to grab a model. We can't type the view base because we are basing our controls on UserControl, which isn't typed. We can, however, type methods. I added this method as a simple helper: protected T _GetViewModel<T>() where T: IViewModel { MasterModel model = DataContext as MasterModel; return model == null ? null : (typeof(T).Equals(typeof(MasterModel)) ? model as T : model.ModuleViewModel as T); } Remember, we bound the master to the shell, so the hierarchical nature of data-binding means the data context will continue to be that until I override it. We'll override it in our view, but at this level we still see the master model. The method simply casts the data context and then either returns it the master model is being requested, or access the ModuleViewModel property and returns that (also typed). This was set earlier in the call to the _ViewBaseInitialize. Now my view can be based on ViewBase. Simply add a user control, go into the XAML, reference the namespace for the view base and then switch from UserControl to ViewBase. The XAML will look like this: <vw:ViewBase x: <Grid DataContext="{Binding ModuleViewModel}"/> </vw:ViewBase> In this example, we've taken the user control and exposed it as a ViewBase because the partial class must have the same base in both XAML and code behind. I've also bound the grid to the "module view model" so it has access to the local view model. In the code behind, we pass the module-specific view model to the initializer: public partial class UserView : ViewBase { public UserView() { InitializeComponent(); _ViewBaseInitialize(new UserManagerModel()); } } Note that the master model is going to be shared. Even though I dynamically loaded the module and the view with Prism, the data context for the control itself remains bound to the "master" (shell) and therefore can reference all of the properties I've set in previous screens. My new, local view model gets bound as a property to this master view so the local properties can be accessed in the XAML. What if I have a property called Token in my master that I need to reference in my local view model? I can simply add a hook into the loaded event (so I'm sure all of the binds, etc have taken place) and then use my new helper method. I'll add this after the _ViewBaseInitialize call: Loaded += (o, e) => { MasterModel master = _GetViewModel<MasterModel>(); UserManagerModel userModel = _GetViewModel<UserManagerModel>(); userModel.Token = master.Token; }; That's it. Obviously there are more elegant ways to bind the view models together other than the code-behind, but I tend not to be the fanatic purist some others might be when it comes to simple actions and tasks that coordinate views and view models. If that task has "awareness" of an event triggered in the view and the view model, I don't see the issue with tapping into that action to glue things together. The final thought I'll leave you with is the possibility of the sub modules being a stack. In this scenario, each view would push the view model onto the stack. Then, if you clicked the back button, the previous view could pop its view from the stack and restore the state it was in. This way the master model would help coordinate undo/redo functionality without having knowledge the specific models it collects. Oh wait, even better... After posting this, I realized there was even a better way to class the base view. If I do this: protected void _ViewBaseInitialize<T>(Action<MasterModel,T> onLoaded) where T: IViewModel, new() { // must fire when loaded, as this is when it will be in the region // and ready to inherit the view model Loaded += (o, e) => { MasterModel model = DataContext as MasterModel; model.Navigation.ViewChanged += new EventHandler<ViewChangedEventArgs>(nm_ViewChanged); if (typeof(T) != typeof(MasterModel)) { model.ModuleViewModel = new T(); } if (onLoaded != null) { onLoaded(model, model.ModuleViewModel as T); } }; } Then I simply can simply change my derived view to this: public UserView() { InitializeComponent(); _ViewBaseInitialize<UserManagerModel>((masterModel, userModel) => { userModel.Token = masterModel.Token; }); } Even better - now I just call the base method with the type I want and let it new it up, then get a strongly typed delegate to move my data when needed. A sample project/solution would be nice! Including the source of your example would definitely go a long way in making your examples easier to follow. Thanks for the feedback! Many times the blog posts are related to solutions that are for customers who I cannot release the source code for ... and I don't always have enough time to build a specific project just for the post. I do appreciate your understanding for those times that I am unable to provide more explicit examples!
http://csharperimage.jeremylikness.com/2009/12/mvvm-composition-in-silverlight-3-with.html
CC-MAIN-2016-07
refinedweb
1,712
53.21
Introduction Using ClosureBuilder briefly covered Closure Library's debug loader. When you use the uncompiled source, requiring a namespace with goog.require() fetches that namespace by appending an additional <script> tag whose src attribute is a URL to the script that provides that namespace. But how does Closure Library know which files provide which namespaces? Included in Closure Library is a default dependency file named deps.js. This file contains one line for every JavaScript file in the library, each specifying the file location (relative to base.js), the namespaces it provides, and the namespace it requires. In debug mode, a goog.require() statement looks to see if the require namespace was specified and, if so, fetch the files that provide it and all of its dependencies. However, Closure Library only comes with a dependency file for the namespaces in the library. If you plan to write your own namespaces (and you likely will), you can use depswriter.py to make a dependency file for the namespaces you create. Dependency file syntax Dependency files simply register the location of namespaces and the namespaces they depend on, using a function defined in base.js named goog.addDependency(). Here's a simple example from deps.js: goog.addDependency('dom/forms.js', ['goog.dom.forms'], ['goog.structs.Map']); This registers the file dom/forms.js, which provides one namespace, goog.dom.forms, and requires one namespace, goog.structs.Map. Note that the path, dom/forms.js, is relative to the directory that base.js is in. Using DepsWriter DepsWriter automates the process of writing dependency files by scanning files to find namespaces provided and required with goog.require and goog.provide. In this example, there are two files, each with a namespace, one of which depends on the other, and both of which depend on namespaces in Closure Library. The goog.provide and goog.require statements at the top might look like this: goog.provide('myproject.foo'); goog.require('goog.array'); goog.require('goog.dom'); goog.require('myproject.bar'); goog.provide('myproject.bar'); goog.require('goog.string'); myproject.foo depends on myproject.bar. We need to create a single dependency file so that the debug mode loader knows where these files exist. The dependency file specifies paths relative to base.js. So if base.js and the files that provide myproject.foo and myproject.bar are served at the following URLs http://<server>/closure/goog/base.js http://<server>/myproject/foo.js http://<server>/myproject/bar.js then the relative paths to foo.js and bar.js are ../../myproject/foo.js and ../../myproject/bar.js. Each .. means "parent directory". The expression ../.. climbs up out of goog and then out of closure. Generating a dependency file with depswriter.py We need DepsWriter to write a file that registers both these paths with goog.addDependency(). DepsWriter allows us to specify a directory to scan for .js files, along with a custom prefix, with the --root_with_prefix flag. $ closure-library/closure/bin/build/depswriter.py \ --root_with_prefix="myproject ../../myproject" \ > myproject-deps.js This command scans the myproject directory for .js files and writes out a goog.addDependency line for each. The path will be the prefix (here, ../../myproject) combined with the relative path from the root. It assumes we're running from the directory above both the closure-library and myproject directories created in the ClosureBuilder tutorial. In this case we've written the result into myproject-deps.js with the > operator. You can instead specify an output file using the --output_file flag. DepsWriter produces the following output: // This file was autogenerated by closure/bin/build/depswriter.py. // Please do not edit. goog.addDependency('../../myproject/bar.js', ['myproject.bar'], ['goog.string']); goog.addDependency('../../myproject/foo.js', ['myproject.foo'], ['goog.array', 'goog.dom', 'myproject.bar']); This file expresses the same dependency information as the goog.provide() and goog.require() statements in the source code. Using a generated dependency file in your project To include a generated dependency file, just include a <script> tag for it, after base.js. After the tag, you can require your newly specified namespaces: <!doctype html> <html> <head> <script src="/closure/goog/base.js"></script> <script src="/myproject/myproject-deps.js"></script> <script> goog.require('myproject.foo'); </script> </head> <body> <!-- content --> <script> myproject.foo.start(); </script> </body> </html> This code loads myproject.foo (and myproject.bar, by dependency), then calls myproject.foo.start() just before the closing <body> tag. Note that you should only link to base.js and your dependency files in the debug version of your application. When using compiled JavaScript, just include the script tag to load the compiled file.
https://developers.google.com/closure/library/docs/depswriter
CC-MAIN-2018-22
refinedweb
768
53.68
This exercise will clarify the meaning of Leibniz notation by exploring the original insights of Leibniz using modern computing tools. As a beginning, let's summarize Leibniz's understanding of his operators d and ∫. This information is taken from A History of Mathematics: An Introduction by Katz, 3rd ed, from Section 16.2, titled "Gottfried Wilhelm Leibniz". The basic idea out of which the calculus of Leibniz developed was the observation that if A,B,C,D,E was an increasing sequence of numbers and L,M,N,P was the sequence of differences, then E−A=L+M+N+P. This is a crude form of the fundamental theorem of calculus. To clarify, let's do an example. Suppose we have an increasing sequence x = [1,3,5,7,9]. Then the corresponding sequence of differences is dx = [2,2,2,2]. This is because 3−1=2, 5−3=2, 7−5=2 and 9−7=2. The sequence of differences is formed by taking the difference between each two subsequent entries in the list x. Note that there is one less difference than there are numbers in the original list. Exercise: Given x=[1,4,7,10,19], compute the associated list of differences dx. Solution: dx = [3,3,3,9]. The difference operation (which Leibniz denoted eventually as d) occurs as a primitive in many scientific programming libraries. In this assignment the library we will be concerned with is called numpy which is short for Numerical Python. This is how you compute differences using numpy: import numpy as np x = [1,3,5,7,9] dx = np.diff(x) print(dx) x = [1,4,7,10,19] dx = np.diff(x) print(dx) As we proceed through the exercises please change the values in the cells and run them on your own. Be aware that you can immediately receive help on any command by executing it in a cell, with a '?' character at the end. For example, the following command will show you the documentation for np.diff: np.diff? #execute me! In Leibniz's conception of the calculus the distinction between a variable and a sequence of numbers is somewhat blurry. He eventually considers dx to be a small increment when he presents his ideas on the difference quotient. But his original ideas on differences are motivated by the Harmonic Triangle, from which it is clear that sequences of differences and partial sums were much on his mind. It is often fruitful to think of both dx and ∫ x as sequences rather than numbers. Without this understanding, some of Leibniz's formulations (e.g. d ∫ x = x) are difficult to comprehend. Leibniz's ideas evolved over time and our modern notation is slightly different even from his refined ideas. Here is an accessible summary and links to further resources. For us every variable will denote a sequence. For example the letter x will represent some sequence of numbers, such as x=[1,2,3,4]. Similarly dx = [1,1,1] will also be a sequence. If we write y=x2 then we mean y to be the sequence of squares of elements of x. Here is an illustration: x = np.array([1,2,3,4]) y = x**2 y In the above cell we wrote y=x**2, which is what we might write to express the dependence of a single number y on a single number x. But in reality this line of code stands for a computation on lists. What it says is that each element of the list y comes from squaring the corresponding element of the list x. We usually call this style of programming vectorized code or sometimes array programming. We now give a sequence of examples to show how vectorization works in numpy. #Vectorization example x = np.array([0,1,2,3]) y = np.array([3,4,5,6]) print("x = {}".format(x)) print("y = {}".format(y)) print("x + y = {}".format(x+y)) print("x*y = {}".format(x*y)) print("x/y = {}".format(x/y)) c = 7 print("if c = {} then c*y = {}".format(c,c*y)) Vectorized programming not only makes for cleaner expressions, it is also efficient. This is because backend libraries are highly optimized to use hardware parallelism and other numerical tricks to make vectorized operations execute quickly. Vectorized notation is also highly compatible with Leibniz notation. In a way you can think of Leibniz's original notation as being itself a kind of vectorized code. In the notes above we took the "differences" of a list y which depended on a list x in a functional way. Can we go one step further and compute the "differences of the differences"? Yes! We can also talk about the differences in y just as we did with respect to x. In the above example, we had y = [1,4,9,16]. It is easy to see that dy = [3,5,7]. We use the worksheet to automatically compute dy using numpy. y = [1,4,9,16] dy = np.diff(y) dy Now we "go to town" and compute differences of differences of differences in the following cell. #Iterated differences... x = np.array([1,2,3,4,5,6,7,8,9,10]) y = x**3 - 2*x**2 + 1 print("y = {}".format(y)) dy = np.diff(y) print("dy = {}".format(dy)) ddy = np.diff(dy) print("ddy = {}".format(ddy)) dddy = np.diff(ddy) print("dddy = {}".format(dddy)) We will later see that these "differences" are very similar to derivatives. The fact that dddy (or d3y) is constantly 6 is essentially saying that the 3rd derivative of the cubic polynomial y=x3−2x2+1 is a line of slope 6. Leibniz referred to the sequence of inputs x as "abscissae" and the sequence y which functionally depends on x as the "ordinates". Nowdays these terms are rarely used. We might instead say that x is the independent variable, and y is the dependent variable. But for fun I will continue to say abscissae and ordinates here, especially since it means less writing. Sometimes we want to have a lot of numbers in our ordinate. There are two functions built into numpy which make this easy to create. They are arange() and linspace(). The arange() command takes three inputs: a starting point a, and ending point b, and the step size Δx. What is returned is a list of Δxb−a numbers, which begin at a and increase in increments of Δx. Here is an example: x = np.arange(1,10,0.5) x You can see that the abscissa x consists of 0.510−1=18 numbers. The list begins at a=1 and proceeds by steps of size Δx=0.5. The linspace operator is similar to arange, but instead of taking a step size Δx as an argument, it instead takes the desired number n of points. Let's do an example: x = np.linspace(1,10,18, endpoint=False) x The output of the above example is the same as the one before: 18 evenly spaced numbers beginning at 1 and increasing by increments of Δx=0.5. The endpoint=False argument tells the function that I do not want the endpoint 10 to be included in the abscissa list. (Try deleting that part and rerunning the command.) Whether we use linspace or arange is mostly a matter of preference. In this document I will tend to use arange. You should be familiar with plotting pairs of abscissae and ordinates to make a visual description of an equation (aka a graph). Below you can see how to use the matplotlib library to turn the lists x and y into a familiar graph. %matplotlib inline import matplotlib.pyplot as plt x = np.arange(0,10,0.1) y = x**3 plt.plot(x,y) plt.title(r"The graph $y=x^3$") plt.show() Now that we have a little familiarity with numpy and vectorization, let's revisit Leibniz's central insight, which we mentioned in the first cell. Here it is as expressed in Katz: Leibniz considered a curve defined over an interval divided into subintervals and erected ordinates yi over each point xi in the division. If one forms the sequence {dyi} of differences of these ordinates, its sum ∑idyi, is equal to the difference yn−y0 of the final and initial ordinates. That's a bit of a mouthfull. The first sentence just says that we have some list x of abscissae and some set y of ordinates and there is a functional relationship, such as y=x**2. The second sentence says that if we sum the elements of the list dy then we just get the difference between the first and last elements of the list y. Let's see if that's actually true... x = np.arange(0,10,0.1) y = x**2 dy = np.diff(y) sum(dy) == y[-1]-y[0], sum(dy) The above cell describes a case in which we partition the interval [0,10] into 100=1/1010−0 subintervals, and let the list x represent the left endpoint of each respective subinterval. We then "erect the ordinates y" where y=x2 (understood as a vectorized expression). We then compute the differences for the list y as dy and sum those up. Using the notation y[-1] to describe the last element of y and y[0] to describe the first, we see from the output (True) that Leibniz's rule holds in this case. We also see that the sum of the differences happens to be 98.01. If you experiment (which you should) by changing the increment Δx=0.1 to other values, you will see that the equation continues to hold. In particular it holds as Δx→0+, or in other words as the size of the partition goes to infinity. To see that Δx doesn't matter to the truth of the equation, you have to observe that the sum "telescopes", meaning most of the summands cancel out. We'll walk through that argument now. First notice that we can express the list x as x =[0,Δx,2Δx,3Δx,…,(n−1)Δx]. Hopefully this way of thinking is familiar from your calculus class. Because y=x2, we must have y =[0,Δx2,(2Δx)2,(3Δx)2,…,((n−1)Δx)2]. Finally it must be true that sum(dy)=Δx2−0+(2Δx)2−Δx2+(3Δx)2−(2Δx)2+⋯+((n−1)Δx)2−((n−2)Δx)2. Now we will argue that this sum "telescopes", meaning most of the things that are written cancel out. Note that in the sum each term appears exactly once positively and once negatively except for a=0 and ((n−1)Δx)2, which each appear only once. Therefore everything except these terms cancels out (and a=0 so it can go as well). sum(dy)=((n−1)Δx)2. Because n=Δxb−a, we have that (n−1)Δx is simply b−Δx. In the above example, b−Δx=10−0.1=9.9 and sum(dy)=9.92=98.01, as indicated. In general, sum(dy)=(b−Δx)2. And as Δx→0+ this becomes b2. If a had not happened to be zero then we would have had sum(dy)=(b−Δx)2−a2. Because f(x)=x2 is continuous, the limit as Δx→0+ is b2−a2. By making Δx→0+, we have, in a way, turned dy from a finite list of differences into an infinite list of very tiny differences. What we just showed, a little bit rigorously, is that Leibniz's sum rule can be true even if the list of differences dy is infinite. The basic idea is very similar to the Fundamental Theorem of Calculus. The second of Leibniz's insights about calculus can be described like this (Katz again): Similarly, if one forms the sequence {d∑yi}, where ∑yi=y0+y1+⋯+yi, the difference sequence {d∑yi} is equal to the original sequence of the ordinates. As usual, thinking of y as a list of numbers, we will use ∫y to denote the new sequence of partial sums of the entries of y. To illustrate, we use the corresponding operator in numpy, which is cumsum (cumulative sum). x = np.array([1,2,3,4,5]) y = x**3 print("Here is x = {}".format(x)) print("Here is y = {}".format(y)) print("Here is ∫ y = {}".format(np.cumsum(y))) You can see that ∫ y is a sequence of the same length as y. Each element ∑yi (Katz notation) in ∫ y is the sum of the elements in y of index at most i. Leibniz's second insight is the observation that ∫ and d are inverse operators. That is, d ∫ y = ∫ dy = y. Let's try it out in numpy, using np.diff for d and np.cumsum for ∫. x = np.array()))) There has been an annoying technical snafu here: Each element in the sequence ∫ dy is off by one. That is because Leibniz assumes that all ordinate sequences begin with zero. Let's fix that and try again. x = np.array([0)))) This time we were successful. Note that something is still a little strange. The sequence for y begins with a 0, but this has been left out of the last two expressions. It was inevitable that something be omitted for the simple reason that the difference operator results in a list which is one shorter than the initial list. With this one caveat, it is not hard to see why Leibniz's 2nd insight is true. Analyzing d ∫ y we see that that (in Katz's notation) the ith element of the sequence d ∫ y is the ith element of the sequence y: ∑yi−∑yi−1=yi. Note, however, that y0 cannot be computed in this way, and so the initial element of the sequence y is forgotten. Going the other way, we consider the ith element of the sequence ∫ dy = {∑dyi} in Katz's notation. Using Leibniz's first insight, we see that ∑dyi=yi−y0. If y0=0, as Leibniz always assumes, then we again arrive at yi. This proves the statement (in Leibniz's terms) d ∫ y = ∫ dy = y. Technically we should make a note to omit the first element of y on the right hand side of these equations. To make things work out numerically, we will often have to remove the first element of our ordinate sequences in numpy. The syntax for doing this is the following. y = np.array([1,2,3,4,5,6,7]) print("y[1:] = {}".format(y[1:])) This kind of array manipulation is called array slicing. More sophisticated slices are possible, even when y is multidimensional, but we will not use them here. dx = np.diff(x) fig,axes = plt.subplots(1,2) axes[0].plot(x[1:],dy/dx) axes[0].plot(x[1:],dy) axes[1].plot(x,3*x**2) plt.show() # We see that ∫ dy/dx dx = y f = np.cumsum(dy/dx*dx) fig,axes = plt.subplots(1,2) axes[0].plot(x[1:],f) axes[1].plot(x,y) plt.show() #Inverse relationship of ∫ and d dx = 0.5 x = np.arange(0,5,dx) y = x**2 # We see that ∫ dy = d∫ y = y print(np.cumsum(np.diff(y))==np.diff(np.cumsum(y))) print(np.cumsum(np.diff(y)) == y[1:]) #Calculus of differentials #Sum rule dx = 0.1 x = np.arange(0,10,dx) y = np.sin(x) z = x**2 fig,axes = plt.subplots(1,2) axes[0].plot(x,d(y+z)) axes[1].plot(x,d(y)+d(z)) plt.show() #Leibniz Rule def d(Y): return np.hstack((np.array([0.000001]),np.diff(Y))) fig,axes = plt.subplots(1,2,figsize=(13,5)) axes[0].plot(x,d(y*z),label="d(yz)") axes[0].plot(x,d(y)*z+y*d(z),label="dy*z+y*dz") axes[0].legend() axes[1].plot(x,(d(y*z)/dx),label="d(yz)/dx") axes[1].plot(x,(d(y)/dx)*z+y*(d(z)/dx),label=r"$\frac{dy}{dx}z+y\frac{dz}{dx}$") axes[1].legend() plt.show()
https://share.cocalc.com/share/05c45582e6b9a7bb0591acc9d9c25ebc68c0f9b9/Calculus/Leibniz.ipynb?viewer=embed
CC-MAIN-2019-26
refinedweb
2,711
66.54
Spec: Feature: Cart Total namespace MvcMusicStore.InterfaceTests.Features { Given I have the Home Page open [Given(@"I have the Home Page [Given(@"I select a genre from the left")] public void GivenISelectAGenreFromTheLeft() { NextPage = CurrentPage.SelectGenre(Default.Genre.Name); } All page objects extend the PageBase class, so I’ve added a partial class for PageBase (PageLibraryPage [Given(@"I select an album from the genre page")]: [Given(@"I select an album from the genre page")] public abstract partial class PageBase : CommonBase { //... public TPage As<TPage>() where TPage : PageBase, new() { return (TPage)this; } } [When(@"I add the album to my cart")] [Then(@"the cart has a total of (d+)")] ‘Add. Nice post! I really liked the step-by-step description. Very well written and easy to follow through the whole post. Love your implementation of the PageObject pattern too, makes the code so much readable. I had some problems finding the steps in the source code thought. Just a convention that you might consider; many people put the step classes in separate folders (called .. wait for it … Steps). That’ll leave your features and the generated in one place and the steps code that you write in a separate place. Thanks for a great post. Thanks Marcus. I’ll probably take you up on the suggestion to split the steps into a separate folder. I’ve been also starting to do more research lately into how other people have managed the organization of individual step definition methods. I currently have them organized based on the Feature file they were added to support, but I could see also grouping them into step definition files by the page they interact with, or for the more complex ones, maybe some other form of workflow categorization. Wonderful article, this has helped me immensely. Would love your input on the journey I’ve taken so far, getting started with C# UI testing. What gaps do you see that I can improve on? Were any steps unnecessary? 1. Make scripts with Selenium IDE to test a website 2. Export scripts to C#/Selenium RC and get them to work with NUnit 3. Add parameterized NUnit tests to handle multiple user types signing into the website 4. Refactor tests to fit into SpecFlow framework 5. ??? Thanks so much for your guidance! Ben @Ben: I think the next step is to take the framework you have in place with SpecFlow for your application and start building tests for new features you haven’t implemented yet. So instead of testing what is already there, build tests that define the behavior you are expecting to build in, then build just enough interface and code to make those tests pass. This is BDD (Behavior driven development), a subset of TDD. Another option would be to try recreating the same tests in a Ruby, Cucumber, Watir stack using the same page object pattern that I outlined above. Switching stacks to work on a similar theoretical level can provide some excellent contrast and potentially uncover some things with C# you wouldn’t have otherwise tried. A good set of posts on Page Objects from a Ruby perspective starts here: Hi Eli, great work! Helped me a great deal to jump-start our UI-Testing framework. Best regards, Markus PS: Do you happen to know why the BeforeScenario()-Methode “fires” multiple-times per Scenario? To prevent Firefox to open multiple-times you added this code that looks to me like a workaround. Or is it a feature 😉 ? @Markus: Thanks, I’m glad you found it useful. My BeforeScenario and AfterScenario calls are defined in the base class and my step files inherited from that base class, so this looks like multiple BeforeScenario definitions to SpecFlow. SpecFlow treats these hooks are global, so in most situations you would use tag filtering to limit which hooks it calls. Given the inheritance model I was using would still end up having the same hooks called multiple times, I added the workaround to artificially limit it to one browser and so on. There’s more information on the hooks and using tags for filtering here: Great Post Eli, very helpful indeed! (I’m working through all your Selenium and CI blogs) One question concerning real world deployment. In order to do integration tests on my website I need to have different scenarios in the database, which requires adding and removing records to setup the required data. Given that tests should be able to run in any order, how do you best recommend resetting the database between each test. Also, do I do the given stage (“given there are 3 records”) via a direct call to the database, and then use selenium to browse the site with that now ready? Many thanks! Richard Thanks Eli, this was a really great post! You mentioned that one possibility for organising your step definitions would be to group them by the page they interact with. I was wondering, if you go this route perhaps you could make the base test class generic and specify the page object type as the generic parameter. That way you could make the CurrentPage property typed and avoid the casting.. although your .As method certainly makes the casting more readable. Thanks again for a great post! Regards, John Hi I really liked the way you are doing. How can we handle multiple browser testing means any point of time single instance of Web Driver should be invoked depending browser browser i provide since i need to integrate with CI. Please share. Thanks Hi Eli, really like this series of articles, I’ve recently been doing something similar though using bamboo rather than Jenkins and AWS elastic beanstalk for production, it’s interesting as you can push your changes to a load balanced cluster. We have been splitting our spec flow tests by entity which is quite close to page. I’d like to draw your attention to coypu which compliments spec flow from a UI perspective. We also created a mock repository layer to ensure that when we ran the tests we could be sure of the state the data was in. I hope this info is useful to you. Andy Hi Eli, Nice and neat step by step instructions. Do you have any idea of making a YouTube video? Venky Hi, thanks for the details in article. I am using selenium/Specflow with c# for automation, is there any tool/library that would help to read content from the PDF. Please suggest Thanks Hi Markus Is there a way to use Nunit to create separate tests using different parameters for the same journey? ie. same scenario in the feature file but a different test using different input parameters. MSTest allows the different tests to be listed but NUnit doesn’t. Regards Ricky
http://blogs.lessthandot.com/index.php/EnterpriseDev/application-lifecycle-management/using-specflow-to/
CC-MAIN-2017-43
refinedweb
1,133
62.27
Posted 13 Mar 2012 Link to this post Posted 15 Mar 2012 Link to this post RadMessageBox has title and message ui elements inside it which have specific fonts sizes set. If you need to modify the fonts or the font sizes you can create a new control template for the RadMessageBoxControl class. In order to avoid writing it from scratch you can copy the default RadMessageBoxControl style and modify its control template. You can do this by extracting the style from the Telerik.Windows.Controls.Primitives.dll assembly with the reflector tool. The themes are located in a Resources folder inside the assembly. You can also use the control template from this help article. The special namespace looks like this: xmlns:special="clr-namespace:Telerik.Windows.Controls.MessageBox;assembly=Telerik.Windows.Controls.Primitives" Posted 16 Mar 2012 Link to this post Posted 21 Mar 2012 Link to this post Here is where you can download the .net reflector tool. Once you have it installed, double click on the primitives assembly and it should open in reflector. If it does not open, right-click on the assembly and choose Open With from the windows menu and choose reflector (you may have to browse for the executable). After the assembly is open, look at the tree view on the left, select the primitives assembly and drill down to the resources folder. All themes are in the Telerik.Windows.Controls.Primitives.g.resources file. You are looking for the themes/messagebox.xaml
http://www.telerik.com/forums/messagebox-default-font
CC-MAIN-2017-04
refinedweb
250
64.61
Hi there. I asked a question <> on Stackoverflow: > (Pdb) import brain.utils.mail > (Pdb) import brain.utils.mail as mail_utils > *** AttributeError: module 'brain.utils' has no attribute 'mail' > > I always thought that import a.b.c as m is roughly equivalent to m = sys.modules['a.b.c']. Why AttributeError? Python 3.6 I was pointed out <> that this is a somewhat weird behavior of Python: > The statement is not quite true, as evidenced by the corner case you met, namely if the required modules already exist in sys.modules but are yet uninitialized. The import ... as requires that the module foo.bar is injected in foo namespace as the attribute bar, in addition to being in sys.modules, whereas the from ... import ... as looks for foo.bar in sys.modules. Why would `import a.b.c` work when `a.b.c` is not yet fully imported, but `import a.b.c as my_c` would not? I though it would be vice versa. Using `import a.b.c as my_c` allows avoiding a number of circular import issues. Right now I have to use `from a.b import c as my_c` as a workaround, but this doesn't look right. The enhancement is to treat `import x.y.z as m` as `m = importlib.import_module('x.y.z')`. I don't see how this will break anything.
https://bugs.python.org/msg291376
CC-MAIN-2017-47
refinedweb
227
71.82
The sudoku game is something almost everyone plays either on a daily basis or at least once in a while. The game consists of a 9×9 board with numbers and blanks on it. The goal is to fill the blank spaces with suitable numbers. These numbers can be filled keeping in mind some rules. The rule for filling these empty spaces is that the number should not appear in the same row, same column or in the same 3×3 grid. Wouldn’t it be interesting to use the concepts of OpenCV, deep learning and backtracking to solve a game of sudoku? In this article, we will build an automatic sudoku solver using deep learning, OpenCV image processing and backtracking. Using CNN and MNIST dataset Now, since this game involves numbers let us make use of the simple MNIST dataset and build a neural network on it. We will use Keras to build this neural network. Import libraries and load the data from sklearn.model_selection import train_test_split from tensorflow.keras import backend as K from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Convolution2D, Lambda, MaxPooling2D from keras.utils.np_utils import to_categorical from keras.datasets import mnist (trainx,trainy),(testx,testy)=mnist.load_data() plt.imshow(trainx[0], cmap=plt.get_cmap('gray')) plt.show() Data pre-processing Let us pre-process the data by reshaping them and normalizing the data. I have also converted the target column into categorical values. trainx = trainx.reshape(60000, 28, 28, 1) testx = testx.reshape(10000, 28, 28, 1) trainx=trainx/255 testx = testx / 255 trainy=np_utils.to_categorical(trainy) testy = np_utils.to_categorical(testy) num_classes=testy.shape[1] Build a sequential CNN and train Here we will build a convolutional neural network with only three layers since we are using MNIST data and it is easy to converge. model = Sequential() model.add(Convolution2D(32, kernel_size=3, activation='relu', input_shape=(28, 28, 1))) model.add(Convolution2D(64, kernel_size=3, activation='relu') model.add(Flatten()) model.add(Dense(10, activation='softmax')) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(trainx,trainy, validation_data=(testx, testy), epochs=5) Now that the model has been trained, we can save the weights of the model and use it for the sudoku game. model.save('sudokucnn.h5') Using OpenCV for image processing After we have built the model and saved it, let us now move on to the image processing part of the puzzle. For this, I have selected a random puzzle from the internet. You can download the above image with this link. Once you have this image, let us use OpenCV and read this image. import cv2 import numpy as np import matplotlib.pyplot as plt puzzle= cv2.imread(‘sudoku.png’) gray_puzzle= cv2.cvtColor(puzzle, cv2.COLOR_BGR2GRAY) plt.imshow(gray_puzzle,cmap='gray') The next step here is to reduce the background noise in the image and set all the blanks to 0 value since the machine needs to understand where the numbers have to be filled. To do this, we will use the OpenCV gaussian blur and the inverse binary threshold. Using the inverse binary threshold, the destination pixels will be set to zero if the condition that the source pixel is greater than zero is satisfied. destination = cv2.GaussianBlur(gray_puzzle,(1,1),cv2.BORDER_DEFAULT) source_px,threshold_val = cv2.threshold(gray_puzzle, 180, 255,cv2.THRESH_BINARY_INV) plt.imshow(threshold_val,cmap='gray') As the image shows, all the destination pixels have been converted to grayscale. Now, we have the source and destination pixels but the image grids are not being read. This is important because the machine needs to understand that there are 9 3X3 grids present within the main grid. To do this, we will use the probabilistic hough transform and warping methods of Opencv. grid = cv2.HoughgridP(threshold_val,1,np.pi/180,100,set_min=100,set_max=10) for lines in grid: hor_1,ver_1,hor_2,ver_2 = lines[0] cv2.line(image,(hor_1,ver_1),(hor_2,ver_2),(0,255,0),2, cv2.LINE_AA) cv2.imwrite('tranform.jpg,image) hough_image = cv2.imread(transform.jpg',0) final = cv2.imread(‘transform.jpg') plt.imshow(final, cmap='gray') The final step of image processing is to warp the image so that the complete top-down view can be obtained. This is an important step so that the machine understands how to read the image and where it should start reading the image from. To do this, we need to contour the image and then find the width and height of the image as follows. cont,value = cv2.findContours(hough_img,cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE) area = cont[0] max_area = cv2.contourArea(area) for cont in cont: if cv2.contourArea(cont) > max_area: area = cont max_area = cv2.contourArea(cont) eps = 0.01*cv2.arcLength(area,True) final_cont = cv2.approxPolyDP(area, eps, True) def take_point(point): grid_rect = np.zeros((4, 2), dtype = "float32") summation = point.sum(axis = 1) grid_rect[0] = point[0] grid_rect[2] = point[2] diff = np.diff(point, axis = 1) grid_rect[3] = point[3] grid_rect[1] = point[1] return grid_rect def warping(image, point): grid_rect = take_point(point) (len1, len2, breadth1, breadth2) = grid_rect width_1 = np.sqrt(((breadth1[0] - breadth2[0]) ** 2) + ((breadth1[1] - breadth2[1]) ** 2)) width_2 = np.sqrt(((len2[0] - len1[0]) ** 2) + ((len2[1] - len1[1]) ** 2)) tot_width = max(int(width_1), int(width_2)) height_1 = np.sqrt(((len2[0] - breadth1[0]) ** 2) + ((len2[1] - breadth1[1]) ** 2)) height_2 = np.sqrt(((len1[0] - breadth2[0]) ** 2) + ((len1[1] - breadth2[1]) ** 2)) tot_height = max(int(height_1), int(height_2)) destination = np.array([ [0, 0], [0, tot_height - 1], [tot_width - 1, tot_height - 1], [tot_width - 1, 0]], dtype = "float32") M = cv2.getPerspectivelen2ansform(grid_rect, destination) warped = cv2.warpPerspective(image, M, (tot_width, tot_height)) return warped Once the image is warped we can save this and move on to the predictions. warped_img = wapring(threshold_val,final_cont) Making predictions Once we have the image and model ready, we can start loading them and making the predictions on the image. sudoku_model= load_model(sudokucnn.h5') Now, the first step is to write a function that will predict all the existing numbers on the grid. def existining_pred(output_img): classes = sudokumodel.predict_classes(output_img) if classes == [[0]]: return 0 elif classes == [[1]]: return 1 elif classes == [[2]]: return 2 elif classes == [[3]]: return 3 elif classes == [[4]]: return 4 elif classes == [[5]]: return 5 elif classes == [[6]]: return 6 elif classes == [[7]]: return 7 elif classes == [[8]]: return 8 elif classes == [[9]]: return 9 Next, we need to extract the positions of the individual cells. This can be done as follows: def getcells(warped_img): warped_img = cv2.resize(img,(252,252)) cells = [] wt = warped_img.shape[1] ht = warped_img.shape[0] cell_wt = wt//9 cell_height = ht//9 hor_1,hor_2,ver_1,ver_2 = 0,0,0,0 for i in range(9): ver_2 = ver_1 + cell_height hor_1 = 0 for j in range(9): hor_2 = hor_1 + cell_wt current_cell = [hor_1,hor_2,ver_1,ver_2] cells.append(current_cell) hor_1 = hor_2 ver_1 = ver_2 return cells Now, we will predict the numbers by writing the logic to understand the rules of the game as shown below. def predictnums(cell,img): position = [] img = cv2.resize(img,(252,252)) img = img[cell[2]+2:cell[3]-3,cell[0]+2:cell[1]-3] cont,value = cv2.findContours(img.copy(), cv2.RElen2_len2EE, cv2.CHAIN_APPROX_SIMPLE) if len(cont) != 0: for c in cont: x,y,w,h = cv2.boundinggrid_rect(c) if (w < 15 and x > 2) and (h < 25 and y > 2): position.append((x,y,x+w,y+h)) break if position == []: result = 0 if position: img1 = img[(position[0][1]):(position[0][3]),(position[0][0]):(position[0][2]) img1 = cv2.resize(img,(28,28)) img1 = img1.reshape(1,28,28,1) result = existining_pred(img1) return result Now, we will use the transformed image that was created earlier and extract the sudoku digits and print the board as per the extracted values. def sudoku_dig(warped_img): cell_digits,num = [],0 cells = getcells(warped_img) for cell in range(len(cells)): num = predictnums(cells[cell],warped_img) cell_digits.append(num) n = 9 cell_digits = [cell_digits[i:i+n] for i in range(0, len(cell_digits), n)] return cell_digits puzzle = sudoku_dig(warped_img) def get_board(board_img): for i in range(len(board_img)): if i % 3 == 0 and i != 0: print("- - - - - - - - - - - - - ") for j in range(len(board_img[0])): if j % 3 == 0 and j != 0: print(" | ", end="") if j == 8: print(board_img[i][j]) else: print(str(board_img[i][j]) + " ", end="") get_board(puzzle) Backtracking for making predictions For solving this format of the board, we need to first let the machine find empty spaces and check for the valid numbers that can be put in the right spaces. Empty spaces are those that have 0 as the value. def check_empty(board_img): for i in range(len(board_img)): for j in range(len(board_img[0])): if board_img[i][j] == 0: return (i, j) return None After identification of this, let us now use backtracking to solve the sudoku puzzle. Backtracking is basically a recusrive algorithm that tests all possible paths that can be taken to give the best possible solution. def backtrack(board_img, digits, position): for i in range(len(board_img[0])): if board_img[position[0]][i] == digits and position[1] != i: return False for i in range(len(board_img)): if board_img[i][position[1]] == digits and position[0] != i: return False board_img_x = position[1] // 3 board_img_y = position[0] // 3 for i in range(board_img_y*3, board_img_y*3 + 3): for j in range(board_img_x * 3, board_img_x*3 + 3): if board_img[i][j] == digits and (i,j) != position: return False def get_solution(board_img): check = check_empty(board_img) if not check: return True else: x, y = check for i in range(1,10): if backtrack(board_img, i, (x, y)): board_img[x][y] = i if get_solution(board_img): return True board_img[x][y] = 0 return False get_solution(puzzle) print_board(puzzle) As you can see above, the final solution has the numbers in the grid that were obtained with backtracking. Conclusion In this article, we learned how to solve a game of sudoku using simple concepts of deep learning, OpenCV and backtracking. This model can be used on harder puzzles as well as sufficient training..
https://analyticsindiamag.com/solve-sudoku-puzzle-using-deep-learning-opencv-and-backtracking/
CC-MAIN-2020-45
refinedweb
1,672
57.77
table of contents NAME¶ uselocale - set/get the locale for the calling thread SYNOPSIS¶ #include <locale.h> locale_t uselocale(locale_t newloc); uselocale(): - Since glibc 2.10: - _XOPEN_SOURCE >= 700 - Before glibc 2.10: - _GNU_SOURCE DESCRIPTION¶¶¶ VERSIONS¶ The uselocale() function first appeared in version 2.3 of the GNU C library. CONFORMING TO¶ POSIX.1-2008. NOTES¶. EXAMPLES¶ See newlocale(3) and duplocale(3). SEE ALSO¶ locale(1), duplocale(3), freelocale(3), newlocale(3), setlocale(3), locale(5), locale(7) COLOPHON¶ This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at
https://manpages.debian.org/testing/manpages-dev/uselocale.3.en.html
CC-MAIN-2022-21
refinedweb
113
61.93
As you know, the best way to concatenate two strings in C programming is by using the strcat() function. However, in this example, we will concatenate two strings manually. Concatenate Two Strings Without Using strcat() #include <stdio.h> int main() { char s1[100] = "programming ", s2[] = "is awesome"; int length, j; // store length of s1 in the length variable length = 0; while (s1[length] != '\0') { ++length; } // concatenate s2 to s1 for (j = 0; s2[j] != '\0'; ++j, ++length) { s1[length] = s2[j]; } // terminating the s1 string s1[length] = '\0'; printf("After concatenation: "); puts(s1); return 0; } Output After concatenation: programming is awesome Here, two strings s1 and s2 and concatenated and the result is stored in s1. It's important to note that the length of s1 should be sufficient to hold the string after concatenation. If not, you may get unexpected output.
https://cdn.programiz.com/c-programming/examples/concatenate-string
CC-MAIN-2020-40
refinedweb
142
63.19
Build Your Own Subscription Blog with Shopify. Watch for code changes With Theme Kit installed and both our theme ID and Password ready, we need to run the watch command from the command line. First, cd into your theme’s directory. Next, run the following commands to open the theme in your browser and to watch for code changes. Remember to replace xxx with your myshopify URL (minus the https://), <password> with your password, and <theme-id> with your Shopify theme ID: theme open -s xxx.myshopify.com -p <password> -t <theme-id> --hidepb theme watch -s xxx.myshopify.com -p <password> -t <theme-id> --allow-live Note the additional flags: --hidepb: hides the annoying Preview Bar --allow-live: some understandable friction to let you know that you’re editing the live theme (in case you aren’t aware!) I would suggest running the above command sequence as an Alfred workflow (or similar) for convenience. While you can store theme credentials in a config.yml file, I wouldn’t risk accidentally exposing them — for example, via GitHub (which would be a security flaw). And with that done, let’s dive into the code side of things. Step 4: Create the Theme Wrapper ( theme.liquid) We’ll start with theme.liquid file, because not only does it have some specific requirements, but it’s one of the most important files in a Shopify theme. Put simply, this file is the theme’s wrapper: anything marked up in theme.liquid will appear on every page. You’ll want to start off the markup like this: <!doctype html> <html> <head> <!-- head markup --> {{ content_for_header }} </head> <body> <!-- header markup --> {{ content_for_layout }} <!-- footer markup --> </body> </html> You’ve likely noticed from the code above that to output something with Liquid you’ll need to use double curly brackets ( {{ }}). On that note, there are two things that have already been outputted. These are required, and Theme Kit will throw an error if either of them are missing from your theme.liquid: {{ content_for_header }}: a code injection of everything that’s required to make features like Shopify Analytics work {{ content_for_layout }}: injects the relevant template (e.g. blog.liquid), all of which are stored in /templates Remember, Theme Kit is watching. Whenever you save your file, Theme Kit will know and deploy the changes to your remote theme (although you’ll have to refresh your browser to see them). Step 5: Loop over the Articles ( blog.liquid) During this next step, we’ll dive into blog.liquid and loop over all of our articles. If you haven’t created any yet, head to Online Store > Blog posts and create a blog along with some articles, remembering to set their visibility to visible (the default is hidden). You’ll find said blog at https://<store-name>.myshopify.com/blogs/blog-handle/. The default Shopify blog is at /blogs/news/. Pasting the code below into blog.liquid will list all of the articles from the current blog, displaying each’s article title wrapped in an <a href> element that links to the relevant article: {% for article in blog.articles %} <a href="{{ article.url }}">{{ article.title }}</a> {% endfor %} Further reading: Step 6: Output the article (article.liquid) During this step, we’ll write the code for article.liquid. This will output the article, but if the user isn’t a logged-in, paying customer, it’ll be blurred and a Get access button will take the user to /cart/ (and after that, the checkout). Create a “Product” First, we’ll need to create a “Product” that offers access to the blog for a one-time fee (Shopify can do this natively) or on a subscription basis (Shopify subscription app required). Navigate to Products > Add product and name it something like “Premium blog access”. Naturally, ensure that you uncheck the Track quantity and the This is a physical product checkboxes. Click Save and then make note of the Product ID from the URL. Pre-write some logic Use the following code to check if the “item” is in the cart already, replacing “ID” with your Product ID. We’ll check for the existence of the accessInCart variable later to ensure that users can’t accidentally add the item to the cart twice: {% for item in cart.items %} {% if item.product.id == "ID" %} {% assign accessInCart == "y" %} {% endif %} {% endfor %} Similarly, use the following code to check if the customer (assuming that they’re logged in) has access already. We’ll check for the existence of the hasAccess variable later to ensure that logged-in customers aren’t being shown the Get access button or being restricted from viewing the content. Again, remember to replace “ID” with your Product ID: {% if customer %} {% for order in customer.orders %} {% for line_item in order.line_items %} {% if line_item.id == "6816002113696" %} {% assign hasAccess == "y" %} {% endif %} {% endfor %} {% endfor %} {% endif %} To make the code a little more DRY (don’t repeat yourself), include both of these code snippets in theme.liquid if you’d like to include a Get access button anywhere other than article.liquid. Putting the code snippets in theme.liquid ensures that the accessInCart and hasAccess variables can exist within all /templates. Note: you’ll also want to include the following “is the customer logged in or not?” logic in theme.liquid so that customers can log in or log out from any page or template: {% if customer %} <a href="/account/logout/">Logout</a> {% else %} <a href="/account/login/">Login</a> {% endif %} Output the article Next, the following code will output the article but add a .blurred class if the customer doesn’t have access to the blog (or isn’t logged in and therefore access can’t be verified): <article{% unless hasAccess %} class="blurred"{% endunless %}> {{ article.content }} </article> Include the following code in your CSS to enable the blurring: .blurred { opacity: 0.5; filter: blur(0.5rem); user-select: none; // Prevents text selection pointer-events: none; // Prevents click events } As a bonus, you might want to use JavaScript cookies or localStorage to allow the reading of [x] articles, and then apply the above .blurred class only after those articles have been read. This improves SEO by allowing the articles to be indexed and improves conversions by offering limited access. Create a “Get access” button Finally, here’s the Get access button logic and markup: {% unless hasAccess %} <a href="/cart/{% unless accessInCart %}add/?id=ID&return_to=/cart/{% endunless %}">Get access</a> {% endunless %} Once again, remember to replace “ID” with your Product ID. Further reading: Step 7: Build out the Rest of Your Theme Alas, the final step is one that you’ll have to take alone: to build out the rest of your theme. In Step 1 we created some .liquid files in /templates. Some of these (for example login.liquid and cart.liquid) are essential to maintaining the “Premium blog access” functionality. Consult the official Shopify themes documentation, which will not only walk you through the basics of creating a Shopify theme, but also each individual .liquid template (here’s some sample code for login.liquid, for example). Enjoy developing the rest of your Shopify theme! Learn the basics of programming with the web's most popular language - JavaScript A practical guide to leading radical innovation and growth.
https://www.sitepoint.com/build-shopify-subscription-blog/
CC-MAIN-2021-39
refinedweb
1,207
65.42
⚡️ Webhooks¶ You can push data to Tingbot using webhooks. Here is an example that displays SMS messages using If This Then That. See our tutorial video to see how to set up IFTTT with webhooks. import tingbot from tingbot import * screen.fill(color='black') screen.text('Waiting...') @webhook('demo_sms') def on_webhook(data): screen.fill(color='black') screen.text(data, color='green') tingbot.run() @ webhook(webhook_name…)¶ This decorator calls the marked function when a HTTP POST request is made to the URL. To avoid choosing the same name as somebody else, you can add a random string of characters to the end. The POST data of the URL is available to the marked function as the dataparameter. The data is limited to 1kb, and the last value that was POSTed is remembered by the server, so you can feed in relatively slow data sources. You can use webhooks to push data to Tingbot, or to notify Tingbot of an update that happened elsewhere on the internet.
http://tingbot-python.readthedocs.io/en/latest/webhooks.html
CC-MAIN-2017-13
refinedweb
165
66.13
Pygame is a Python wrapper for the Simple DirectMedia Layer (SDL). Pygame focuses on bringing the world of graphics and game programming to programmers in an easy and efficient way. Typically, Pygame projects are small, simple, two-dimensional or strategy games . In Chapter 5, I'll give you a close look at a few existing Pygame-based game engines, including Pyzzle, a Myst -like engine; PyPlace, a two-dimensional isometric engine; and AutoManga, a cell -based anime-style animation engine. This book's CD comes with a copy of Pygame in the \PYTHON\PYGAME folder. The most recent versions can be found online at. The Windows binary installer on the CD has versions for Python 2.3 and 2.2, there is a Mac .sit for older Mac versions and a version for the Mac OSX, and the RPM binary has been included for the Red Hat operating system. Pygame actually comes with the most recent and standard UNIX distributions, and can be automatically built and installed by the ports manager. On Windows, the binary installer will automatically install Pygame and all the necessary dependencies. A large Windows documentation package, along with sample games and sample code, is available at the Pygame homepage at. Pygame also requires an additional package, called the Numeric Python package, in order to use a few of its sleeker and quicker array tools. This package can be found in the Python section of the accompanying CD. At this time, the creators of Numeric Python are working NOTE SDL SDL is considered an alternative to Direct X especially on Linux machines. As a multimedia and graphics library, SDL provides low-level access to a computer's video, sound, keyboard, mouse, and joystick. SDL is similar in structure to a very rudimentary version of Microsoft's Direct X API, the big difference being that SDL is open source, supports multiple operating systems (including Linux, Mac, Solaris, FreeBSD, and Windows), and has an API binding to other languages, including Python. SDL is written in C and available under the GNU Lesser General Public License. Sam Lantinga, who worked for both Loki Software and Blizzard entertainment, is the genius behind SDL. He got his start with game programming in college by porting a Macintosh game called Maelstrom to the Linux platform. Sam was working on a Windows port of a Macintosh emulator program called Executor and figured that the code he was building to extract the emulator's graphics, sound, and controller interface could be used on other platforms. Late in 1997 he went public with SDL as an open-source project, and since then SDL has been a contender. The Mac OS X tar includes Python 2.2, Pygame1.3 (hacked for Macs), PyOpenGL, and Numeric. There are still some bugs and issues with SLD compatibility on pre-OS X and post-OS X, and a simple installer for the Mac that should fix most of these issues is planned for when Python 2.3 is released. NOTE CAUTION Do not use stuffit to untar the pack age on Mac OS X. Stuffit will trun cate some of the larger filenames. Pygame is distributed under the GNU LGPL, license Version 2.1. See Figure 4.1 for a shot of Pygame installation. Pygame itself is fairly easy to learn, but the world of computer games and graphics is often unforgiving to beginners . Pygame has also suffered criticism for its lack of documentation. This lack of documentation leads many new developers to browse through the Pygame package, looking for information. However, if you browse through the package, you will find an overwhelming number of classes at the top of the index, making the package seem confusing. The key to starting with Pygame is to realize that you can do a great deal with just a few functions, and that you may never need to use many of the classes. The first step towards using Pygame after it has been installed is to import the Pygame and other modules needed for development into your code. Do the following: import os, sys import pygame from pygame.locals import * Keep in mind that Python code is case-sensitive, so for Python, Pygame and pygame are totally different creatures . Although I capitalize Pygame in this book's text, when importing the module, pygame needs to be in all lowercase letters . First import a few non-Pygame modules. You'll use the os and sys libraries in the next few examples for creating platform independent files and paths. Then import Pygame itself. When Pygame is imported, it doesn't actually import all of the Pygame modules, as some are optional. One of these modules, called locals , contains a subset of Pygame with commonly used functions like rect and quit in the easy-to-access global namespace. For the upcoming examples the locals module will be included so that these functions will be available as well. NOTE TIP The Pygame code repository is a community-supported library of tools and code that utilizes Pygame. The source code is managed by Pygame, but submissions are from users of the library. The repository holds a number of useful code snip petseverything from visual effects to common game algorithmsand can be found at . The most important element in Pygame is the surface. The surface is a blank slate, and is the space on which you place lines, images, color , and so on. A surface can be any size, and you can have any number of them. The display surface of the screen is set with: Pygame.display.set_mode() You can create surfaces that have images with image.load() , surfaces that contain text with font.render() , and blank surfaces with Surface() . There are also many surface functions, the most important being blit() , fill() , set_at() , and get_at() . The surface.convert() command is used to convert file formats into pixel format; it sets a JPEG, GIF, or PNG graphic to individual colors at individual pixel locations. NOTE TIP Using surface.convert is impor tant so that SDL doesn't need to convert pixel formats on-the-fly . Converting all of the graphic images into an SDL format on-the-fly will cause a big hit to speed and performance. My_Surface = pygame.image.load('image.jpeg') as is converting an image: My_Surface = pygame.image.load('image.jpeg').convert() A conversion only needs to be done once per surface, and should increase the display speed dramatically. Drawing on the display surface doesn't actually cause an image to appear on the screen. For displaying, the pygame.display.update() command is used. This command can update a window, the full screen, or certain areas of the screen. It has a counterpart command, pygame.display.flip() , which is used when using double-buffered hardware acceleration. NOTE CAUTION The convert() command will actu ally rewrite an image's internal format. This is good for a game engine and displaying graphics, but not good if you are writing an image-conversion program or a program where you need to keep the original format of the image. Creating a window in which Pygame can run an application is fairly easy. First you need to start up Pygame with an initialize command: pygame.init() Then you can set up a window with a caption by using Pygame's display command: My_Window = pygame.display.set_mode((640, 480)) This code run by itself (the code is included as the My_Window.py example in this chapter's code section on the CD) creates a 640x480-pixel window labeled Pygame Window, just like in Figure 4.2. Of course, the window accomplishes nothing, so it immediately disappears after showing up on the screen. The most used class in Pygame probably the rect() class, and it is the second most important concept in Pygame. rect() is a class that renders a rectangle: My_Rectangle = pygame.rect() rect() comes with utility functions to move, shrink, and inflate itself; find a union between itself and other rects; and detect collisions. This makes rect() an ideal class for a game object. The position of a rect() is defined by its upper-left corner. The code that rect s use to detect overlapping pixels is very optimized, so you will see rect s used in sprite and other sorts of collision detection. For each object, there will often be a small rect() underneath to detect collisions. In order for Pygame to respond to a player or user event, you normally set up a loop or event queue to handle incoming requests (mouse clicks or key presses). This loop is a main while loop that checks and makes sure that the player is still playing the game: still_playing = 1 while (still_playing==1): for event in pygame.event.get(): if event.type is QUIT: still_playing = 0 The for event line uses pygame.event.get() to get input from the user. Pygame understands basic Windows commands, and knows that QUIT is equivalent to pressing the X at the top right corner of a created window. The pygame.event function is used to handle anything that needs to go into the event queuewhich is basically input from any sort of device, be it keyboard, mouse, or joystick. This function basically creates a new event object that goes into the queue. The pygame.event.get function gets events from the queue. The event members for pygame.event are QUIT. Quit or Close button. ACTIVEEVENT. Contains state or gain. KEYDOWN. Unicode key when pressed. KEYUP. Uncode key when released. MOUSEMOTION. Mouse position. MOUSEBUTTONUP. Position mouse button releases. MOUSEBUTTONDOWN. Position mouse button pressed. JOYAXISMOTION. Joystick axis motion. JOYBALLMOTION. Trackball motion. JOYHATMOTION. Joystick motion. JOYBUTTONUP. Joystick button release. JOYBUTTONDOWN. Joystick button press. VIDEORESIZE. Window or video resize. VIDEOEXPOSE. Window or video expose. USEREVENT. Coded user event. These are normally used to track keyboard, mouse, and joystick actions. Let's say you wanted to build in mouse input handling. All mouse input is retrieved through the pygame.event module. if event.type is MOUSEBUTTONDOWN: # do something Pygame also has a number of methods to help it deal with actual mouse position and use; these are listed in Table 4.1. You can check the state of a mouse or keyboard event by using pygame.mouse.get_pos() or pygame.key.get_pressed() , respectively. Pygame has great built-in functions for graphics. These functions revolve around the idea of the surface, which is basically an area that can be drawn upon. Let's say you wanted to fill the background in My_Window.py with a color. First grab the size of the window: My_Background = pygame.Surface(My_Window.get_size()) This creates a surface called My_Background that's the exact size of My_Window. Next convert the surface to a pixel format that Pygame can play with: My_Background = My_Background.convert() And finally, fill the background with a color (set with three RGB values): My_Background.fill((220,220,80)) Now let's do some drawing over the background surface. Pygame comes with a draw function and a line method, so if you wanted to draw a few lines, you could do this: pygame.draw.line (My_Background, (0,0,0,),(0,240),(640,240), 5) pygame.draw.line (My_Background, (0,0,0), (320,0), (320,480), 5) Pygame's draw.line takes five parameters. The first is the surface to draw on, the second is what color to draw (again in RGB values), and the last is the pixel width of the line. The middle parameters are the start and end points of the line in x and y pixel coordinates. In this case, you draw the thick lines crossing in the exact center of the window, as shown in Figure 4.3. The easiest way to display the background and lines is to put them into a draw function: def draw_stuff(My_Window): My_Background = pygame.Surface(My_Window.get_size()) My_Background = My_Background.convert() My_Background.fill((220,220,80)) pygame.draw.line (My_Background, (0,0,0,),(0,240),(640,240), 5) pygame.draw.line (My_Background, (0,0,0), (320,0), (320,480), 5) return My_Background Then call the function within the loop that exists (and illustrated as code sample My_Window_3.py on the CD): My_Display = draw_stuff(My_Window) My_Window.blit(My_Display, (0,0)) pygame.display.flip() Blitting ( Block Image Transfering ) is practically synonymous with rendering, and specifically means redrawing an object by copying the pixels of said object onto a screen or background. If you didn't run the blit() method, nothing would ever get redrawn and the screen would just remain blank. For those of you who must know, blit isn't a made-up wordit's short for "bit block transfer." In any game, blitting is often a process that slows things down, and paying attention to what you are blitting, when you are blitting, and how often you are blitting will have a major impact on your game's performance. The key to a speedy graphics engine is blitting only when necessary. The blit method is very important in Pygame graphics. It is used to copy pixels from a source to a display. In this case, blit takes the pixels plotted in My_Display (which took the commands from draw_stuff ) and copies them to My_Window . The blit method understands special modes like colorkeys and alpha, it can use hardware support if available, and it can also carry three-dimensional objects in the form of an array (using blit_array() ). In this example, blit is taking My_Display as the input and rendering it to My_Window , and it uses the upper-left corner (pixel 0,0) to key up the surface. The pygame.display.flip() command is Pygame's built-in function for updating the entire display (in this case, the entirety of My_Window ) once any graphic changes are made to it. NOTE TIP In Windows, you can add a single "w" to the end of a Python file (so that instead of ending it in py, it ends in pyw) to make the pro gram open up a window without opening up the interpret console, that funny -looking DOS box. Image loading is an oft-needed function in games; in this section I'll show you the steps for loading an image in Pygame. After importing the necessary modules, you need to define a function for loading an image that will take an argument. The argument will be used to set the colorkey (the transparency color) of the image; it looks like this: def load_image(name, colorkey=None): Colorkey blitting involves telling Pygame that all pixels of a specific color in an image should be transparent. This way, the image square doesn't block the background. Colorkey blitting is one way to make non-rectangular , two-dimensional shapes in Pygame. The other common trick is to set alpha values using a graphics program like Adobe Photoshop, as illustrated in Figure 4.4 and explained in the following sidebar. To turn colorkey blitting on, you simply use surface.set_colorkey(color) . The color fed to surface.set_colorkey is three-digit tuple (0,0,0) with the first number being the red value, the second green, and the third blue (that is, rgb). NOTE Colorkey versus Alpha Both colorkey and alpha are techniques for making parts of a graphic transparent when traveling across the screen. In Pygame, most 2D game objects and sprites are rect s, and are rectangular in shape. This means you need a way to make part of the rectangle transparent, so that you can have circular, triangular , or monkey -shaped game pieces. Otherwise you would only be capable of displaying square pieces over a background. Alpha is one technique for making parts of an image transparent. An alpha setting causes the source image to be translucent or partially opaque . Alpha is normally measured from 0 to 255, and the higher the number is the more transparent the pixel or image is. Alpha is very easy to set in a graphic editor (like Adobe Photoshop), and Pygame has a built-in get_alpha() command. There is also per-pixel alpha where you can assign alpha values to each individual pixel in a given image. When using a colorkey technique (sometimes called colorkey blitting ) you let the image renderer know that all pixels of one certain color are to be set as transparent. Pygame has a built-in colorkey(color) function that takes in a tuple in the form of RGB. For instance, set_colorkey(0,0,0) would make every black pixel in a given image transparent. You'll use both techniques in this chapter. The load_image function in this section uses set_colorkey() , while the load_image command in the Monkey_Toss.py graphics example later on in the chapter uses get_alpha . The module needs to know where to grab the image, and this is where the os module comes into play. You'll use the os path function to create a full pathname to the image that needs to be loaded. For this example, say that the image is located in a "data" subdirectory, and then use the os.path.join function to create a pathname on whatever system (Mac, Windows, UNIX) that Python is running on. fullname = os.path.join('data', name) Being able to fail gracefully is important in programming. Basically, you always need to leave a back door, or way out of a program, for if an error occurs. You'll find that try/except or try/finally constructs are very common. Python offers a try/except/else construct that allows developers to trap different types of errors and then execute appropriate exception-handling code. try/except actually looks just like a series of if/elif/else program flow commands: try: execute this block except error1: execute this block if "error1" is generated except error2: execute this block if "error2" is generated else: execute this block This structure basically allows for the execution of different code blocks depending on the type of error that is generated. When Python encounters code wrapped within a try-except - else block, it first attempts to execute the code within the try block. If this code works without any exceptions being generated, Python then checks to see if an else block is present. If it is, that code is executed. If a problem is encountered while running the code within the try block, Python stops execution of the try block at that point and begins checking each except block to see if there is a handler for the problem. If a handler is found, the code within the appropriate except block is executed. Otherwise, Python jumps to the parent try block, if one exists, or to the default handler (which terminates the program). A try/except structure is used to load the actual image using Pygame's image.load. Do this through a try/except block of code in case there is an error when loading the image: try: image=pygame.image.load(fullname) except pygame.error, message: print 'Cannot load image:', name raise SystemExit, message Once the image is loaded, it should be converted. This means that the image is copied to a Pygame surface and its color and depth are altered to match the display. This is done so that loading the image to the screen will happen as quickly as possible: image=image.convert() The next step is to set the colorkey for the image. This can be the colorkey provided when the function was called, or a -1 . If the -1 is called, the value of colorkey is set to the top-left (0,0) pixel. Pygame's colorkey expects an RGBA value, and RLEACCEL is a flag used to designate an image that will not change over time. You use it in this case because it will help the speed of the image being displayed, particularly if the image must move quickly. if colorkey is not None: if colorkey is -1: colorkey = image.get_at((0,0)) image.set_colorkey(colorkey, RLEACCEL) The final step is to return the image object as a rect (Like I've said, Pygame is based on rect s and surfaces) for the program to use: return image, image.get_rect() The full code snip for load_image is listed here and also on the CD, as Load_Image.py : def load_image(name, colorkey=None): fullname = os.path.join('data', name) try: image=pygame.image.load(fullname) except pygame.error, message: print 'Cannot load:', name raise SystemExit, message image=image.convert() if colorkey is not None: if colorkey is -1: colorkey = image.get_at((0,0)) image.set_colorkey(colorkey, RLEACCEL) return image, image.get_rect() Pygame has, of course, methods for dealing with text. The pygame.font method allows you to set various font information attributes: My_Font = pygame.font.Font(None, 36) In this case, you set up a My_Font variable to hold Font(None, 36) , which establishes no particular font type ( None , which will cause a default font to be displayed) and a 36 font size ( 36 ). Step 2 is to choose what font to display using font.render : My_Text = font.render("Font Sample", 1, (20, 20, 220)) The arguments passed to font.render include the text to be displayed, whether the text should be anti-aliased (1 for yes, 0 for no), and the RGB values to determine the text's color. The third step is to place the text in Pygame's most useful rect() : My_Rect = My_Text.get_rect() Finally, you get the center of both rect() s you created and the background with Python's super-special centerx method (which is simply a method for determining the exact center of something), and then call the blit() method to update: My_Rect.centerx = My_Background.get_rect().centerx background.blit(My_Text, My_Rect) A Pygame game loop is usually very straightforward. After loading modules and defining variables and functions, you just need a loop that looks at user input and then updates graphics. This can be done with only a few lines of code. A typical event loop in a game would look something like this: while 1: for event in pygame.event.get(): if event.type == QUIT: #exit or quit function goes here return screen.blit(MY_Window, (0, 0)) pygame.display.flip() The pygame.event module looks for user input, and pygame.blit and pygame.display keep the graphics going. Let's say, for example, that you wanted to look specifically for up or down arrow keys for player control. To do so, you could simply add elif statements to the event loop: while 1: for event in pygame.event.get(): if event.type == QUIT: #exit or quit function goes here return # Add to listening for arrow keys In the event queue elif event.type == KEYDOWN: If event.key == K_UP # do something If event.key == K_DOWN # do something screen.blit(MY_Window, (0, 0)) pygame.display.flip() Originally computers were simply incapable of drawing and erasing normal graphics fast enough to display in real-time for purpose of a video game. In order for games to work, special hardware was developed to quickly update small graphical objects, using a variety of special techniques and video buffers. These objects were dubbed sprites . Today sprite usually refers to any animated two-dimensional game object. Sprites were introduced into Pygame with Version 1.3, and the sprite module is designed to help programmers make and control high-level game objects. The sprite module has a base class Sprite , from which all sprites should be derived, and several different types of Group classes, which are used as Sprite containers. When you create a sprite you assign it to a group or list of groups, and Pygame instantiates the sprite game object. The sprite can be moved, its methods can be called, and it can be added or removed from other groups. When the sprite no longer belongs to any groups, Pygame cleans up the sprite object for deletion (alternately, you can delete the sprite manually using the kill() method). The Group class has a number of great built-in methods for dealing with any sprites it owns, the most important being update() , which will update all sprites within the group. Several other useful group methods are listed in Table 4.2. Groups of sprites are very useful for tracking game objects. For instance, in an asteroid game, player ships could be one group of sprites, asteroids could be a second group, and enemy starships a third group. Grouping in this way can make it easy to manage, alter, and update the sprites in your game code. Memory and speed are the main reasons for using sprites. Group and sprite code has been optimized to make using and updating sprites very fast and low-memory processes. Pygame also automatically handles cleanly removing and deleting any sprite objects that no longer belong to any groups. Updating an entire screen each time something changes can cause the frames -per-second rate to dip pretty low. Instead of updating the entire screen and redrawing the entire screen normally, an engine should only change the graphics that have actually changed or moved. The engine does this by keeping track of which areas have changed in a list and then only updating those at the end of each frame or engine cycle. To help out in this process, Pygame has different types of groups for rendering. These methods may not work with a smooth-scrolling, three-dimensional, realtime engine, but then again, not every game requires a whopping frame-rate. Pygame's strength lies elsewhere. Besides the standard Group class there is also a GroupSingle , a RenderPlain , a RenderClear , and a RenderUpdates class (see Figure 4.5). GroupSingle can only contain one sprite at any time. Whenever a sprite is added to GroupSingle , any existing sprite is forgotten and set for deletion. RenderPlain is used for drawing or blitting a large group of sprites to the screen. It has a specific draw() method that tracks sprites that have image and rect attributes. RenderPlain is a good choice as a display engine for a game that scrolls through many backgrounds but not any rects , like scrolling games where the player stays in a consistent area of the screen and the background scrolls by to simulate movement. RenderClear has all the functionality of RenderPlain but also has an added clear() method that uses a background to cover and erase the areas where sprites used to reside. RenderUpdates has all the functionality of RenderClear , and is also capable of tracking any rect (not just sprites with rect attributes) for rendering with draw() . Sprites also have built-in collision detection. The spritecollide() method checks for collisions between a single sprite and sprites within a specific group, and will return a list of all objects that overlap with the sprite if asked to. It also comes with an optional dokill flag, which, if set to true, will call the kill() method on all the sprites. A groupcollide() method checks the collision of all sprites between two groups and will return a dictionary of all colliding sprites if asked to. Finally, the spritecollideany() method returns any single sprite that collides with a given sprite. The structure of these collision methods is: pygame.sprite.spritecollide(sprite, group, kill?) ->list pygame.sprite.groupcollide(group1, group2, killgroup1?, killgroup2?) -> dictionary pygame.sprite.spritecollideany(sprite, group) -> sprite Here is an example of a collision that checks to see whether My_Sprite ever collides with My_Player , and removes the offending My_Sprite sprite: for My_Sprite in sprite.spritecollide(My_Player, My_Sprite, 1): #What happens during the collision plays out here When using Pygame sprites, you need to keep a few things in mind. First, all sprites need to have a rect() attribute in order to use the collide() or most other built-in methods. Second, when you call the Sprite base class to derive your sprite, you must call the sprite_init_() method from your own class_init_() method. Python being a pseudoobject-oriented language, normally game classes are created first, then specific instances of game objects are initiated from the created classes. Let's walk through creating an example class, a banana : class Banana: # _init_ method # banana method # banana method 2 # banana method 3 def main My_Banana = Banana() This is roughly how a class works. The Banana class needs at least an _init_ method, and will likely contain many more. After the class is created, simply call the class to create an instance called My_Banana in the main loop. Since an _init_ method is mandatory, let's take a look at what that method would look like first: class Banana(pygame.sprite.Sprite): def _init_(self): pygame.sprite.Sprite._Init_(self) The Banana class is set up as a Pygame sprite. When you define the _init_ method, you must specify at least one parameter that represents the object of the class for which the method is called. By convention, this reference argument is called self . You may want to add other specifications to the _init_ method. For instance, you may wish to specify an image / rect and load up a graphic. You may also want to tie the Banana class to the screen: class Banana(pygame.sprite.Sprite): def _init_(self): pygame.sprite.Sprite._Init(self) self.Image, self.rect = load_png('banana.png') screen = pygame.display.get_surface() After defining _init_ , you may also want to add methods that define the object's position on the screen, and update the object when necessary: class Banana(pygame.sprite.Sprite): def _init_(self): pygame.sprite.Sprite._Init(self) self.Image, self.rect = load_png('banana.png') screen = pygame.display.get_surface() def Bannana_Position(self, rect) # Funky math here # that defines position on screen return position def Banana_Update(self) # Code that updates the banana Pygame is simply a wrapper around SDL, which is a wrapper around operating system graphic calls. Although programming is much easier when using Pygame than when using SDL, Pygame removes you pretty far from the code that actually does the work, and this can be limiting in a number of ways. Probably the most significant drawback to Pygame, however, is the fact that the library needs so many dependencies in order to function. Obviously, Pygame needs Python and SDL to run, but it also needs several smaller libraries, including SDL_ttf, SDL_mixer , SDL_image , SDL_rotozoom , and the Python Numeric package for the surfarray module. Some of these libraries have their own dependencies. UNIX packages generally come with package and dependency managers that make managing dependencies a controllable problem in UNIX. But on Windows systems, it can be difficult to distribute a game without creating a collection of all the needed files the game requires to run. Luckily, there are Python tools to help build Windows executables. I mentioned a few of these in Chapter 3, in particular a tool called Py2exe. Pete Shinners, the Pygame author, actually wrote a tutorial on how to use Py2exe to package a Python Pygame for Windows. The tutorial comes with a sample distutils script and can be found at. Finally, although hardware acceleration is possible with Pygame and fairly reliable under Windows, it can be problematic because it only works on some platforms, only works full screen, and greatly complicates pixel surfaces. You also can't be absolutely sure that the engine will be faster with hardware accelerationat least not until you've run benchmark tests. In this section you'll use the Pygame load_image function with game loops to create a simple two-dimensional graphics-engine game example. The steps you need to take are as follows : Import the necessary libraries. Define any necessary functions (such as load_image ). Define any game object classes (sprites, game characters ). Create a main event loop that listens for events. Set up Pygame, the window, and the background. Draw and update necessary graphics ( utilizing groups and sprites). I envision a monkey-versus-snakes game, where the monkey/player throws bananas at snakes to keep them at bay. The steps for coding this example are explained in each of the following sections, the full source code can be found on the CD as Monkey_Toss.py , and Figure 4.6 gives you a preview of the game. Importing has been covered ad nauseum already, so I will not bore you with the details. Simply start with this code: # Step 1 - importing the necessary libraries import pygame, os import random from pygame.locals import * These libraries should be familiar to you with the exception of the random module. Python comes equipped with random , and we will be using the random. randrange method to generate random numbers . NOTE Random Library The random.randrange method generates a random number (an integer) within the range given. For instance, this snippet prints a number between 1 and 9: import random Print ( random.randrange(1, 10)) Simple enough. Note that random.randrange prints up to the highest number given, but not the actual highest digit. Random numbers are used so often in games that you will often encounter random number functions like this: Def DiceRoll(): Dice1 = random.randrange( 1, 7) Print "You rolled %d" % (dice1) Return dice1 You will be using random's randrange() and seed() methods to produce random numbers for the Monkey_Toss.py example. You will be using a version of load_image in this game example, but you will switch from using colorkey and look instead for alpha values in the graphics. You have the graphics already built with alpha channels and stored in a data directory next to the game code (and also on the CD). This means you need to alter a few lines of code from Load_Image.py : def load_image(name): fullname = os.path.join('data', name) try: image = pygame.image.load(fullname) # Here instead of the colorkey code we check for alpha values if image.get_alpha is None: image = image.convert() else: image = image.convert_alpha() except pygame.error, message: print 'Cannot load image:', fullname raise SystemExit, message return image, image.get_rect() You will also define a very short function to help handle keystroke events from the player. We will call this function AllKeysUp : def AllKeysUp(key): return key.type == KEYUP First you will define a sprite class. The class needs, of course, an _init_ method: class SimpleSprite(pygame.sprite.Sprite): def __init__(self, name=None): pygame.sprite.Sprite.__init__(self) if name: self.image, self.rect = load_image(name) else: pass When initiating, you set SimpleSprite to load the given image name and become a rect() . Normally, you would include error code in case the name isn't passed or something else goes wrong, but for now you will just use Python's pass command ( pass is an empty statement that can be used for just such a situation). You will also give your SimpleSprite a method to set up its surface: def set_image(self, newSurface, newRect=None): self.image = newSurface if newRect: self.rect = newRect else: pass Normally you would set up each pass default and also include at least a base method for updating the sprite, but for now let's keep it easy. For this engine, as I said, I envisioned a monkey versus snakes game, and since you are writing in Python, start with the Snake_Grass class: class Snake_Grass: def __init__(self, difficulty): global snakesprites global block for i in range(10): for j in range(random.randrange(0,difficulty*5)): block = SimpleSprite("snake.png") block.rect.move_ip(((i+1)*40),480-(j*40)) snakesprites.add(block) def clear(self): global snakesprites snakesprites.empty() There are two methods in this class, one to initiate the object and one to clear it. The clear() method simply uses empty() to clear out the global snakesprites when it is time. The _init_ method takes in the required self and also a measure of difficulty, ensures snakesprites and block are created, and then starts iterating through a for loop. The outer for loop iterates through a second inner for loop that creates a random number of "blocks," each of which contains a square snakesprites loaded with the snake.png graphic. These sprites are created and moved into stacks on the game board using a bit of confusing math ( block.rect.move_ip(((i+1)*40),480-(j*40)) ). Don't worry too much about the math that places these sprites on your 480 pixel-wide surface; instead, realize that when initiated with an integer representing difficulty, a Snake_Grass object will create a playing board similar to that in Figure 4.7. The placement of the snakesprites and the height of the rows are random so that a differ ent game board surface is produced each time the game is run. Define the player sprite next; this will be Monkey_Sprite . You want the Monkey_Sprite to possess the ability move in the game, so you need to define a number of methods to define and track movement: class Monkey_Sprite(pygame.sprite.Sprite): def __init__(self, game): # For creating an Instance of the sprite def update(self): # Update self when necessary def check_crash(self): # Check for collision with other sprites def move(self): # How to move def signal_key( self, event, remainingEvents ): # Respond to player If they me to do something def check_land(self): # See If reach the bottom of the screen That's a lot of methods, but in actuality, the Monkey_Sprite is fairly uncomplicated once you take the time to walk through each method. Lets start with _init_ : def __init__(self, game): pygame.sprite.Sprite.__init__(self) self.image, self.rect = load_image('monkey.png') self.rightFacingImg = self.image self.leftFacingImg = pygame.transform.flip( self.image, 1, 0) self.direction = 0 self.increment = 25 self.oldPos = self.rect self.game = game self.listenKeys = {} First you load the image into a rect() that will represent the Monkey_Sprite game object, monkey.png, on the game board surface. Then you set a number of variables. The rightFacingImg is the normal state of the graphic, and the leftFacingImg is the graphic rotated 180 degrees using the Pygame's handy transform.flip() method. The self.direction value is a Boolean value that will either have the Monkey_Sprite traveling left (represented by a 0) or right (represented by a 1). Set self.increment to 25, representing 25 pixels that the Monkey_Sprite will travel with each update. The next three settings are all set for the methods that follow and use them. Update is the next method: def update(self): self.check_land() self.move() if self.direction == 0: self.image = self.rightFacingImg else: self.image = self.leftFacingImg self.check_crash() Update first checks, using the check_land method, to see whether the Monkey_Sprite has reached the bottom of the screen. You haven't defined check_land yet, but you will momentarily. Then update moves the Monkey_Sprite with the move method, which you also have yet to define. It then checks which direction Monkey_Sprite is facing and makes sure the graphic being used is facing the correct way. Finally, update calls check_crash , which also needs to be defined, and checks to see whether there have been any sprite collisions. The check_land method simply looks to see if the Monkey_ Sprite has crossed a particular pixel boundary on the game board surface, which is defined by the self.rect.top and self.rect.left variables. If it has, then we know that the Monkey_Sprite needs to start back over at the top of the screen. def check_land(self): if (self.rect.top == 640) and (self.rect.left == 1): self.game.land() The move method uses the defined increment value you set in _init_ to move the sprite across the screen in the direction you've set. If the sprite goes outside the game window (>640 or <0 pixels), you make the sprite switch and travel back across the screen in the opposite direction: def move(self): self.oldPos = self.rect self.rect = self.rect.move(self.increment, 0) if self.rect.right > 640: self.rect.top += 40 self.increment = -25 self.direction = 1 if self.rect.left < 0: self.rect.top += 40 self.increment = 25 self.direction = 0 The check_crash method uses Pygame's built-in group methods and pygame.sprite.spritecollide() to check if the Monkey_Sprite ever collides with anything in the crash list, which in this case includes any snakesprites . If there is a crash, Monkey_Sprite will call the game.crash() method, which we will define momentarily. def check_crash(self): global snakesprites crash_list = pygame.sprite.spritecollide(self, snakesprites, 0) if len(crash_list) is not 0: self.game.crash(self) Only one more method is associated with the Monkey_ Sprite , signal_key , which is simply a listener for keyboard events. def signal_key( self, event, remainingEvents ): if self.listenKeys.has_key( event.key ) \ and event.type is KEYDOWN: self.listenKeys[event.key]( remainingEvents ) Once a MonkeySprite object is loaded, it will appear in the top-left corner of the game board surface and travel across the screen, as shown in Figure 4.8. When it hits the edge of the screen, it drops a little and then heads back in the opposite direction. If the Monkey_Sprite ever touches a snakesprite or the bottom of the screen, he will start back at the top again. Now you have monkeys and snakes. You need one more actor, a banana, which the Monkey_Sprite objects will throw at and destroy the snake objects with. This means you need methods for the banana to update and move and check for collisions: class Banana(pygame.sprite.Sprite): def __init__(self, rect, game): def update(self): def move(self): def check_hit(self): Initializing the banana sprite works much like the other _init_ methods. There will be an incremental value that defines how many pixels the banana moves when updated, and the sprite that represents the banana will load up a rect() and fill it with the fruit.png file. Finally, you will need some code to check with the master game object for when the banana collides or moves off the screen: def __init__(self, rect, game): pygame.sprite.Sprite.__init__(self) self.increment =16 self.image, self.rect = load_image("fruit.png") if rect is not None: self.rect = rect self.game = game Updating and moving are also set up like the other classes. The banana moves according to its increment value and checks are required to see if the banana collides with any sprites or moves off of the game board surface: def update(self): self.move() def move(self): self.rect = self.rect.move(0, self.increment) if self.rect.top==480: self.game.miss() else: self.check_hit() Finally, the check_hit method looks for any collisions with snakesprites just like with the Monkey_Sprite : def check_hit(self): global snakesprites collide_list = pygame.sprite.spritecollide(self, snakesprites,0) if len(collide_list) is not 0: self.game.hit() There is still one more class to writethe most important and lengthy game object. You are actually going to put the game controls and variables into a game class called MonkeyToss . We need MonkeyToss to be able to handle a number of different things, but mostly keyboard events, collisions, and actions for when sprites move off the screen. This gives MonkeyToss several different methods: class MonkeyToss: def __init__(self, charGroup): def crash(self, oldPlane): def land(self): def drop_fruit(self): def miss(self): def signal_key( self, event, remainingEvents ): def hit(self): The master game class initializes pretty much everything else you need as far as game mechanics. First, it takes in the game sprites and assigns them to the charGroup group. Then it defines the game difficulty that the rest of the classes use. The specifc keybard key the sprite needs to respond to is the spacebar, which when pressed will fire the drop_fruit method. Finally the snake, monkey, and banana ( fruit ) are all initialized : def __init__(self, charGroup): self.charGroup = charGroup self.difficulty = 2 self.listenKeys = {K_SPACE: self.drop_fruit} self.snake = Snake_Grass(self.difficulty) self.monkey = Monkey_Sprite(self) self.charGroup.add( [self.plane] ) self.fruit = None The crash method is called by our Monkey_Sprite when it collides with a snakesprite . When the Monkey_Sprite collides with a snakesprite , it needs to be destroyed with the kill() method and then a new Monkey_Sprite should be instantiated to start over and be assigned to the sprite group: def crash(self, oldMonkey): self.monkey.kill() self.monkey = Monkey_Sprite(self) self.charGroup.add ([self.monkey]) The land method is also called by the Monkey_Sprite when it reaches the bottom of the screen. For this sample the method is identical to the crash method, but in a real game, the landing might create a new field of snakes, or pop the player to a different area of the game entirely. def land(self): self.monkey.kill() self.monkey = Monkey_Sprite(self) self.charGroup.add([self.monkey]) The drop_fruit method is called when the spacebar is pressed, and Monkey_Sprite attempts to drop fruit on a snake. Drop_fruit assigns self.fruit an instance of the Banana class and adds it to the active sprite group: def drop_fruit(self): if self.fruit is None: self.fruit = Banana(self.monkey.rect, self) self.charGroup.add([self.fruit]) Code must be created for when the dropped fruit falls past the end of the screen; for our purposes the sprite can just call the kill() method on itself: def miss(self): self.fruit.kill() self.fruit = None For keyboard events, define a signal_key method: def signal_key( self, event, remainingEvents ): if self.listenKeys.has_key( event.key ): self.listenKeys[event.key]() else: self.monkey.signal_key( event, remainingEvents ) The last part is the code that handles sprite collision. This bit is fairly complex. First you need to keep track of all the snakesprites , and then all of the sprites in the group, by creating My_Group . Then you call colliderects[] , which returns true if any rect in the group collides: def hit(self): global snakesprites My_Group = pygame.sprite.Group() colliderects = [] Following colliderects[] is a for loop that basically checks to see if the bottom of the fruit rect and the top of the monkey rect collide, and if so adds them to the collide list: for i in range(3): for j in range((self.fruit.rect.bottom+16-self.monkey.rect.top)/16): rect = Rect((self.fruit.rect.left-32+i*32, self.fruit.rect. bottom-j*16),(25,16)) colliderects.append(rect) Then, for each collision, you need to destroy the given fruit and make sure the sprite group is updated: for rect in colliderects: sprite = SimpleSprite() sprite.rect = rect My_Group.add(sprite) list = pygame.sprite.groupcollide(My_Group, snakesprites, 1,1) self.fruit.kill() self.fruit = None That's quite a lot of work, but, happily, defining the classes comprises the bulk of this sample's code, and you are past the halfway point of coding. Now onwards! To create a main loop, you normally define a main function containing a while loop: def main(): while 1: # do stuff if __name__ == "__main__": main() This ensures that main() is called and your while loop keeps running during the course of the game. As good coding practice, initialize a few variables inside of main() : global screen global background global snakesprites global block You are also going to take advantage of a Pygame clock feature and use random's seed method to set a random number seed. Since you are going to be experiencing movement and time, you'll be setting an oldfps variable to help keep track of time and loop iterations: clock = pygame.time.Clock() random.seed(111111) oldfps = 0 Finally, the while loop. You want to make sure time is recorded by using clocktick() and updating with each iteration. Any keyboard events are queued, so that QUIT, the Escape key, or the KEYUP , which is set to be the Spacebar, can be responded to: while 1: clock.tick(10) newfps = int(clock.get_fps()) if newfps is not oldfps: oldfps = newfps oldEvents = [] remainingEvents = pygame.event.get() for event in remainingEvents: oldEvents.append( remainingEvents.pop(0) ) upKeys = filter( AllKeysUp, remainingEvents ) if event.type == QUIT: return elif event.type == KEYDOWN and event.key == K_ESCAPE: return elif event.type == KEYDOWN or event.type == KEYUP: game.signal_key( event, upKeys ) You can initialize Pygame using the init() method within main() . Then you use display.set_mode() to configure the game surface to 640x480 pixels, and the game caption to be "Monkey Toss". You then use your load_image method to load up the surface background and initialize blitting and flipping: pygame.init() screen = pygame.display.set_mode((640, 480)) pygame.display.set_caption('Monkey Toss') background, tmp_rect = load_image('background.png') screen.blit(background, (0, 0)) pygame.display.flip() For drawing, you start by initializing all of your sprites and sprite groups in main() : allsprites = pygame.sprite.RenderUpdates() snakesprites= pygame.sprite.RenderUpdates() block = None game = MonkeyToss(allsprites) The code that does all the work lies at the end of the while loop, which clears the sprite groups then updates and redraws each changed rect() : allsprites.clear( screen, background) snakesprites.clear(screen, background) allsprites.update() changedRects2 = allsprites.draw(screen) changedRects3 = snakesprites.draw(screen) pygame.display.update(changedRects2+changedRects3) The finished product and the full source code and data files can be found in Chapter 4's file on the CD. Obviously, quite a bit could be added to this program. Check out the complete game sample at the end of this chapter for a few ideas!
https://flylib.com/books/en/1.77.1.45/1/
CC-MAIN-2020-05
refinedweb
8,186
64.91
The game has been a source of income these days for most of the industry. But here comes the aim of today’s discussion. We will discuss the framework that is available for making games, i.e., pyglet and pygame. We will also discuss the differences between them. So keep reading till the end to get more fun. Game programming is getting popular these days, and you would be surprised if it is really for only gaming purposes. So let me tell you that nowadays game has been a medium of learning. Various Ed-tech companies are using games to make learning easier and more fun. This is one of the main reasons for the popularity of game programming these days. The gaming industry has expanded its wings towards the advertisement industry as well. Various companies are using gaming tools to advertise their product. Overview on Pyglet in Python Pyglet is easy to use and a powerful library for developing visually rich GUI applications like games, multimedia, etc., on Windows, Mac OS, and Linux. This library is written in the pure python programming language. It supports many features like windowing, user interface event handling, Joysticks, OpenGL graphics, loading images, videos, and playing sounds and music. Advantages of pyglet - No external installation requirements. - pyglet has the advantage of multiple windows and multi-monitor desktops. - It can load images, sound, music, and video in almost any format. - It allows us to use it for both commercial and other open-source projects. Syntax to Install pyglet pip install pyglet Lets us look at the demonstrated example. import pyglet screen_window = pyglet.window.Window(900,700) if __name__ == "__main__": pyglet.app.run() OUTPUT:- As you can see just a few lines of code lands up with a blank screen Explanation of the code - Importing pyglet - using pyglet.window.Window function we are creating the size of the blank screen bypassing the parameters. - Calling the pyglet.app.run() function so that it runs. Overview on Pygame in Python Pygame is a Python implementation of the SDL library which is written in C. As SDL is already progressed with multiple functions, pygame has provided easy access to these functions. Syntax to install pygame pip install pygame Lets do some code to understand the functionalities. import pygame as py py.init() bg_colour = (200, 150, 22) scr = py.display.set_mode((600, 600)) py.display.set_caption('Welcome to PythonPool') scr.fill(bg_colour) py.display.flip() run = True while(run): for ev in py.event.get(): if ev.type == py.QUIT: run = False py.quit() OUTPUT:- As you can see, to create a blank window, we had to write so many lines of code in pygame. Explanation of the code - Import pygame library - To create a pygame window object, use pygame.display.set_mode() method. It requires two parameters that define the width and height of the window. - Window properties can be altered, such as the window’s title can be set using the set_caption() method. -. Pyglet Vs Pygame in python FAQs: Pyglet Vs Pygame 1. Is Pyglet better than Pygame? Ans:- What I don’t like about Pygame is that it’s impossible to create rock-solid frame rates, which might have to do with the fact that it’s still based on SDL1, and it seems like there is no option to turn VSync on. You have to update the smallest parts of the screen, and remembering what has changed can be tedious. There is no such issue with pyglet. This is an advantage of pyglet over pygame. But at the end of the day, it also depends on user requirements as well. 2. Pyglet vs Pygame – Which has better Performance? Ans:- Speed-wise, Pyglet is definitely faster than pygame and has better performance, and nowadays speed is always a concern when developing with a game. Conclusion You must have a lot of fun while understanding the basics of pyglet and pygame. In today’s tutorial, we had discussed in detailed the differences between pyglet and pygame. I would recommend using pyglet as it has more features than pygame. It is simpler for python lovers. But if you are a beginner, try to understand pygame first and then move on to the pyglet. It can also differ as per user requirements. Till then keep reading our articles and keep pythoning geeks!. Thanks for the article. It is a useful comparison. I think you need to swap the 4th row in the table, though, since the left column refers to pygame and the right column refers to pyglet. Thank you for reporting a mistake. I’ve updated the article accordingly.
https://www.pythonpool.com/pyglet-vs-pygame/
CC-MAIN-2021-43
refinedweb
769
68.06
Introduction SAP Cloud Platform Integration version 2.40 improves message mapping with multi mapping feature. Multi mapping enables you to use more than one message in the source or target of your mapping. Let us understand multi mapping further with a simple integration flow. Clicking on the resource link (MultiMappingDemo.mmap in this case) opens the message mapping editor It already has one message in the source and target message structures. Click on the pen icon to Edit message for source and/or target This opens up the source and target messages editor. If you have already worked with it, you might observe the below changes - Add icon ‘+’ has been introduced, to add more messages - Occurrence table column has been introduced, to change the occurrence of message - Action column has been introduced with - Icon to Replace the message - Icon to Delete the message Occurrence You can use the occurrence column to change the occurrence of the message. By default, the occurrence ‘1..1’ shall be selected. If you select the occurrence ‘0..1’ or ‘1..n’, the multi mapping for this message shall be enabled. For example, set the message occurrence to “0..1” click on OK header action. Go back to message mapping editor. You will now observe the root element of the message is enclosed with ns0:Messages and ns0:Message1 elements as shown in the below screenshots. This indicates multi mapping. The similar structure could also be observed in the target side as well. Multiple Messages Multi mapping is also enabled when you select more than one messages as the source or target in your mapping. Consider the same example of source and target message editor. Click on ‘+’ icon at Source Message and select a schema file. Once the schema is added, you observe the same in source messages list shown below Now, click OK. It will take you to the message mapping editor. You can observe that elements for multi mapping has been populated in the source message with ns0:Messages root element and ns0:Message1 for existing EmployeeDetails element along with ns0:Message2 for newly added schema’s Levels element Now, you can perform the required mapping definition for the multi mapping elements as per your scenario. Note: Multi mapping support for EDMX is not enabled. Hi Deepak Thanks for sharing this update. This functionality was one of the few gaps that made us still needing to develop some integration in Eclipse. Glad to know this is now available. One question on the multimapping feature – when using a 1-N mapping, the mapping requires the source to have the Messages & Message1 elements. In PI, this is automatically handled during runtime, e.g. the actual payload do not need to have those elements. However in CPI, this doesn’t seem to be automatically handled, and we have to add a script before the multimapping step to wrap the payload with those elements. Any idea if this could be enhanced to handle it automatically? Regards Eng Swee hello, i meet the same issue (Messages & Message1 can not handled in CPI)when use multi mapping, could you please give some details about how to handled this using script? Thank you Hi Eng Swee, Thanks for the feedback. Yes, Currently CPI doesn’t handle it automatically, you need to handle it via script step. We will consider this in next upcoming releases. Regards, Deepak Hi, We are having a doubt in multi mapping in HCI scenario like, in source side we have two structure(ns0:Message1 and ns0:Message2) and target side ns0:Message1 structure. Now we have to map ns0:Message1 source field to ns0:Message1 target field and ns0:Message2 source field to ns0:Message1 target field. Here we can able to pass only ns0:Message1 fields, we are not able to pass ns0:Message2 fields for mapping to the target structure message. While doing the simulation we are getting error. Here attached the mapping and error screen shots for your reference, Please provide the solution, Early response is valuable. Thanks. We are also facing the same issue and the same error message. Hi Kalpana / All, If anybody know the solution for this above issue,could you please share the solution here. Error Msssage: Cannot produce target element /ns0:Messages/ns0:Message1/SendRequest.. Thanks Ravi Issue Resolved: Namespace Issue: Added the namespace,issue resolved. <ns0:Messages xmlns:ns0=””> Multi-Mappings Can you clarify where exactly you have added this namespace to resolve the Issue? As you can see, it’s the namespace of the <Messages> element. This is the root of the source and target Messages structures. Hi All, I am trying to create multimapping, but I get the error as target element cannot be created. Though I have used required namespace. My source is an edmx file, can that be a reason for this. What is the other way to solve the issue in case of edmx file where I have multiple target structures. Thanks, Vaishali
https://blogs.sap.com/2018/04/26/sap-cloud-platform-integration-multi-mapping-in-message-mapping/
CC-MAIN-2020-50
refinedweb
830
65.01
User account creation filtered due to spam. GDB compiled with x86_64-w64-mingw32-gcc (GCC) 4.5.0 20090726 (experimental) doesn't work (refuses to load symbols for any executable). This is happening because is_regular_file in gdb/source.c appears to be mis-optimized (disabling optimization for that one file produces a working GDB). The source reads: /* Return True if the file NAME exists and is a regular file */ static int is_regular_file (const char *name) { struct stat st; const int status = stat (name, &st); /* Stat should never fail except when the file does not exist. If stat fails, analyze the source of error and return True unless the file does not exist, to avoid returning false results on obscure systems where stat does not work as expected. */ if (status != 0) return (errno != ENOENT); return S_ISREG (st.st_mode); } Oprimized code is: 00000000000005d0 <_is_regular_file>: 5d0: 48 81 ec 98 00 00 00 sub $0x98,%rsp 5d7: 48 8d 54 24 20 lea 0x20(%rsp),%rdx 5dc: ff 15 00 00 00 00 callq *0x0(%rip) # 5e2 <_is_regular_file+0x12> 5de: R_X86_64_PC32 __imp___stat64 5e2: 85 c0 test %eax,%eax 5e4: 75 1d jne 603 <_is_regular_file+0x33> 5e6: 0f b7 44 24 66 movzwl 0x66(%rsp),%eax 5eb: 25 00 f0 00 00 and $0xf000,%eax 5f0: 3d 00 80 00 00 cmp $0x8000,%eax 5f5: 0f 94 c0 sete %al 5f8: 48 81 c4 98 00 00 00 add $0x98,%rsp 5ff: 0f b6 c0 movzbl %al,%eax 602: c3 retq 603: ff 15 00 00 00 00 callq *0x0(%rip) # 609 <_is_regular_file+0x39> 605: R_X86_64_PC32 __imp___errno 609: 83 38 02 cmpl $0x2,(%rax) 60c: 0f 95 c0 setne %al 60f: 48 81 c4 98 00 00 00 add $0x98,%rsp 616: 0f b6 c0 movzbl %al,%eax 619: c3 retq Without optimization: 0000000000000e89 <_is_regular_file>: e89: 55 push %rbp e8a: 48 89 e5 mov %rsp,%rbp e8d: 48 83 ec 60 sub $0x60,%rsp e91: 48 89 4d 10 mov %rcx,0x10(%rbp) e95: 48 8d 45 c0 lea -0x40(%rbp),%rax e99: 48 89 c2 mov %rax,%rdx e9c: 48 8b 4d 10 mov 0x10(%rbp),%rcx ea0: e8 00 00 00 00 callq ea5 <_is_regular_file+0x1c> ea1: R_X86_64_PC32 _stat ea5: 89 45 fc mov %eax,-0x4(%rbp) ea8: 83 7d fc 00 cmpl $0x0,-0x4(%rbp) eac: 74 16 je ec4 <_is_regular_file+0x3b> eae: 48 8b 05 00 00 00 00 mov 0x0(%rip),%rax # eb5 <_is_regular_file+0x2c> eb1: R_X86_64_PC32 __imp___errno eb5: ff d0 callq *%rax eb7: 8b 00 mov (%rax),%eax eb9: 83 f8 02 cmp $0x2,%eax ebc: 0f 95 c0 setne %al ebf: 0f b6 c0 movzbl %al,%eax ec2: eb 17 jmp edb <_is_regular_file+0x52> ec4: 0f b7 45 c6 movzwl -0x3a(%rbp),%eax ec8: 0f b7 c0 movzwl %ax,%eax ecb: 25 00 f0 00 00 and $0xf000,%eax ed0: 3d 00 80 00 00 cmp $0x8000,%eax ed5: 0f 94 c0 sete %al ed8: 0f b6 c0 movzbl %al,%eax edb: c9 leaveq edc: c3 retq It appears that unoptimized code calls _stat, which jumps to _stat64i32, which has this: extern __inline__ int __attribute__((__cdecl__)) _stat64i32(const char *_Name,struct _stat64i32 *_Stat) { struct _stat64 st; int ret=_stat64(_Name,&st); // calls _imp___stat64 _Stat->st_dev=st.st_dev; _Stat->st_ino=st.st_ino; _Stat->st_mode=st.st_mode; _Stat->st_nlink=st.st_nlink; _Stat->st_uid=st.st_uid; _Stat->st_gid=st.st_gid; _Stat->st_rdev=st.st_rdev; _Stat->st_size=(_off_t) st.st_size; _Stat->st_atime=st.st_atime; _Stat->st_mtime=st.st_mtime; _Stat->st_ctime=st.st_ctime; return ret; } whereas the optimized code calls into _imp__stat64 directly and doesn't perform the _stat64 -> _stat64i32 transformation. In the optimized case, immediately after _imp___stat64 returns: (gdb) p/x st $1 = {st_dev = 0x22a520, st_ino = 0x0, st_mode = 0x0, st_nlink = 0x4, st_uid = 0x0, st_gid = 0x0, st_rdev = 0x68a4e5, st_size = 0x0, st_atime = 0x7ff7fc35af9, st_mtime = 0x0, st_ctime = 0x1} In the non-optimized case, immediately after _stat returns: (gdb) p/x st $1 = {st_dev = 0x2, st_ino = 0x0, st_mode = 0x81ff, st_nlink = 0x1, st_uid = 0x0, st_gid = 0x0, st_rdev = 0x2, st_size = 0x12c9aa9, st_atime = 0x4a713f85, st_mtime = 0x4a713f85, st_ctime = 0x4a713f83} --- reproduces with t.c --- #include <stdio.h> #include <sys/types.h> #include <sys/stat.h> int main(int argc, char *argv[]) { struct stat st; if (0 == stat(argv[0], &st)) printf("mode = %x\n", st.st_mode); return 0; } --- t.c --- /usr/local/mingw-w64/bin/x86_64-w64-mingw32-gcc -g t.c -O2 && ./a.exe mode = 0 /usr/local/mingw-w64/bin/x86_64-w64-mingw32-gcc -g t.c -O0 && ./a.exe mode = 81ff Hmm, possibly this is a bug in our inline functions of mingw-w64. Does the switch -fno-strict-aliasing solves this issue, too? We have here struct casts, which maybe are reasoning here strict aliasing issues. Kai Hmm, with gcc-4.4.2 (branch rev. 150249), I always get "mode = 81ff" reported on the console with both -O0 and -O2 compiled exes from t.c test. This is with mingw-w64 headers and crt revision 1101, the exes cross-compiled from an i686-linux host. It ~may~ be a gcc-4.5 thing, but we don't know which revision of mingw-w64-crt and mingw-w64-headers the reporter used, either. It indeed smells like a alias violation. Preprocessed source would help here. Created attachment 18272 [details] pre-processed t.c Some answers: The -fno-strict-aliasing does help: /usr/local/mingw-w64/bin/x86_64-w64-mingw32-gcc -g t.c -O2 -fno-strict-aliasing && ./a.exe mode = 81ff > which revision of mingw-w64-crt and mingw-w64-headers the reporter used For the record, I am just following up on GDB bug report from here: I did not build mingw-w64 myself, but used a snapshot build from here: Yep: extern __inline__ int __attribute__((__cdecl__)) stat(const char *_Filename,struct stat *_Stat) { return _stat64i32(_Filename,(struct _stat64i32 *)_Stat); } that isn't going to fly. struct stat { _dev_t st_dev; _ino_t st_ino; unsigned short st_mode; short st_nlink; short st_uid; short st_gid; _dev_t st_rdev; _off_t st_size; time_t st_atime; time_t st_mtime; time_t st_ctime; }; struct _stat64i32 { _dev_t st_dev; _ino_t st_ino; unsigned short st_mode; short st_nlink; short st_uid; short st_gid; _dev_t st_rdev; _off_t st_size; __time64_t st_atime; __time64_t st_mtime; __time64_t st_ctime; }; I suggest you mark _stat32i64 with attribute((may_alias)). (In reply to comment #5) > Yep: > > extern __inline__ int __attribute__((__cdecl__)) stat(const char > *_Filename,struct stat *_Stat) { > return _stat64i32(_Filename,(struct _stat64i32 *)_Stat); > } > > that isn't going to fly. [...] > I suggest you mark _stat32i64 with attribute((may_alias)). Why does 4.4 not have any problem with this? Besides my question in comment #6, I wonder why is this actually considered an aliasing violation? The only difference between struct stat and struct _stat64i32 is the time fields: _stat64i32 has __time64_t and stat has time_t which, in this particular case, is typedef'ed as __time64_t already.. With 4.4 you are probably simply lucky ;) This is an aliasing violation because C even considers struct A { int i; }; struct B { int i; }; to be different. If the indirection isn't needed why provide two structs at all? Interesting that the problem occurs only with the inlined version: if the test object is linked with a non-inline version in a separate *.a file, the test behaves correctly.. BTW, neither 4.4 nor 4.5 complains even with -Wstrict-aliasing=2. Filed MingW bug here: (In reply to comment #10) > Filed MingW bug here: > > Wrong project tracker. Please go to for the mingw-w64 tracker. BTW, I worked around the issue in our svn by adding optimize("0") attributes to those one-line inlines for now. Use revision 1108 and it'll be fine. noinline attributes would be better I think. (In reply to comment #12) > noinline attributes would be better I think. > noinline for the inline functions?? Yes, and of course remove the inline qualifier ;) I have no idea what optimize(0) will do and why it should affect the bug you are seeing (I guess it disallows inlining, which is why I think noinline is a better choice here). Created attachment 18279 [details] [ _... (In reply to comment #15) > Created an attachment (id=18279) [edit] > [ _... > Nah.. the file is from using 4.4, but the disabling of inline shouldn't change from 4.4 to 4.5, I guess..
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40909
CC-MAIN-2016-40
refinedweb
1,367
61.16
Monitoring the Kinesis Producer Library with Amazon CloudWatch The Kinesis Producer Library (KPL) for Amazon Kinesis Data Streams publishes custom Amazon CloudWatch metrics on your behalf. You can view these metrics by navigating to the CloudWatch console There is a nominal charge for the metrics uploaded to CloudWatch by the KPL; specifically, Amazon CloudWatch Custom Metrics and Amazon CloudWatch API Requests charges apply. For more information, see Amazon CloudWatch Pricing Topics Metrics, Dimensions, and Namespaces You can specify an application name when launching the KPL, which is then used as part of the namespace when uploading metrics. This is optional; the KPL provides a default value if an application name is not set. You can also configure the KPL to add arbitrary additional dimensions to the metrics. This is useful if you want finer-grained data in your CloudWatch metrics. For example, you can add the hostname as a dimension, which then allows you to identify uneven load distributions across your fleet. All KPL configuration settings are immutable, so you can't change these additional dimensions after the KPL instance is initialized. Metric Level and Granularity There are two options to control the number of metrics uploaded to CloudWatch: - metric level This is a rough gauge of how important a metric is. Every metric is assigned a level. When you set a level, metrics with levels below that are not sent to CloudWatch. The levels are NONE, SUMMARY, and DETAILED. The default setting is DETAILED; that is, all metrics. NONEmeans no metrics at all, so no metrics are actually assigned to that level. - granularity This controls whether the same metric is emitted at additional levels of granularity. The levels are GLOBAL, STREAM, and SHARD. The default setting is SHARD, which contains the most granular metrics. When SHARDis chosen, metrics are emitted with the stream name and shard ID as dimensions. In addition, the same metric is also emitted with only the stream name dimension, and the metric without the stream name. This means that, for a particular metric, two streams with two shards each will produce seven CloudWatch metrics: one for each shard, one for each stream, and one overall; all describing the same statistics but at different levels of granularity. For an illustration, see the following diagram. The different granularity levels form a hierarchy, and all the metrics in the system form trees, rooted at the metric names: MetricName (GLOBAL): Metric X Metric Y | | ----------------- ------------ | | | | StreamName (STREAM): Stream A Stream B Stream A Stream B | | -------- --------- | | | | ShardID (SHARD): Shard 0 Shard 1 Shard 0 Shard 1 Not all metrics are available at the shard level; some are stream level or global by nature. These are not produced at the shard level, even if you have enabled shard-level metrics ( Metric Yin the preceding diagram). When you specify an additional dimension, you need to provide values for tuple:<DimensionName, DimensionValue, Granularity>. The granularity is used to determine where the custom dimension is inserted in the hierarchy: GLOBALmeans that the additional dimension is inserted after the metric name, STREAMmeans it's inserted after the stream name, and SHARDmeans it's inserted after the shard ID. If multiple additional dimensions are given per granularity level, they are inserted in the order given. Local Access and Amazon CloudWatch Upload Metrics for the current KPL instance are available locally in real time; you can query the KPL at any time to get them. The KPL locally computes the sum, average, minimum, maximum, and count of every metric, as in CloudWatch. You can get statistics that are cumulative from the start of the program to the present point in time, or using a rolling window over the past N seconds, where N is an integer between 1 and 60. All metrics are available for upload to CloudWatch. This is especially useful for aggregating data across multiple hosts, monitoring, and alarming. This functionality is not available locally. As described previously, you can select which metrics to upload with the metric level and granularity settings. Metrics that are not uploaded are available locally. Uploading data points individually is untenable because it could produce millions of uploads per second, if traffic is high. For this reason, the KPL aggregates metrics locally into 1-minute buckets and uploads a statistics object to CloudWatch one time per minute, per enabled metric.
https://docs.aws.amazon.com/streams/latest/dev/monitoring-with-kpl.html
CC-MAIN-2021-21
refinedweb
719
51.38
On 8/19/06, Lennart Regebro <[EMAIL PROTECTED]> wrote: Nothing. But when we have loads of empty top level packages that each have a couple of modules it gets confusing, since you need to keep track of what does which. This is a perception problem, which indicates a documentation problem. Each of the separate packages is in a top-level project in Subversion, so worry about the top-level namespace a red herring. The issue is that there are a lot of individual packages, and it's hard to keep track of so many in your head at once if you haven't really used them. An I'm guess no one has used *all* of the Zope 3 packages on svn.zope.org at this point. I would much more prefer if we could keep all small useful packages in some sort of kommon namespace, which we know holds loads of small useful packages. If this in unfeasible, then fine, I'll just have to live with it. I guess what I'm getting at is that it's not the top-level packages we need to worry about, but the packages themselves. Those are what offer interesting functionality that we want to consider for re-use in applications. Seems to me what we need is a way to easily find a list of what's available, with concise human-readable descriptions of what each does. There are a options to consider: - The Python Package Index (PyPI) has framework categories. I thought Jim had requested one for Zope 3, but I see only Paste and TurboGears in the currently published list. We can get the appropriate category added to PyPI and use that for browsing the available Zope 3 component offerings. This would also make Zope 3 activity visible to the rest of the Python community. - There's a mostly-ignored Zope 3 wiki on dev.zope.org that could be used more effectively. Adding a page to act as a catalog of what's available is straightforward and keeps the barrier to entry low. This doesn't offer the visibility or other features that PyPI offers, however. -Fred -- Fred L. Drake, Jr. <fdrake at gmail.com> "Every sin is the result of a collaboration." --Lucius Annaeus Seneca _______________________________________________ Zope3-dev mailing list [email protected] Unsub:
https://www.mail-archive.com/[email protected]/msg05937.html
CC-MAIN-2018-47
refinedweb
389
73.07
When I was developing my Personal Assistant application for Windows back in 2015, I needed to run a command in CMD and capture its output. I tried several methods and found the following code as the simplest one. First, we need to add references to System.Diagnostics namespace. This namespace provides classes that are helpful to interact with system processes, event logs. using System.Diagnostics; Now, we’re creating a new process with the Process class. Process p= new Process(); The next step is to execute a command and capture its output from the terminal. For that, we need to change some properties of the process as shown below. // Specifies not to use system shell to start the process p.StartInfo.UseShellExecute = false; // Instructs the process should not start in a separate window p.StartInfo.CreateNoWindow = true; // Indicates whether the output of the application is returned as an output stream p.StartInfo.RedirectStandardOutput = true; p.StartInfo.FileName = @"D:\mycommand.bat"; p.Start(); string res = p.StandardOutput.ReadToEnd(); Console.WriteLine(res); Note: Place the command which you need to execute in the mycommand.bat file. Subscribe Join the newsletter to get the latest updates.
https://www.geekinsta.com/how-to-run-a-command-in-command-prompt-and-capture-output-in-c/
CC-MAIN-2020-40
refinedweb
193
58.89
CGI::AppToolkit::Template - Perl module to manipulate text templates This module takes a raw complex data structure and a formatted text file and combines the two. This is useful for the generation of HTML, XML, or any other formatted text. The templating syntax is formatted for quick parsing (by human or machine) and to be usable in most GUI HTML editors without having to do a lot of backflips. CGI::AppToolkit::Template was developed to fulfill several goals. It is similar to HTML::Template in concept and in style, but goes about it by very different means. It's goals are: Shortcut to new(-set=>"template text") or new(-file=>"filename"). If the supplied string has line endings, it's assumed to be the template text, otherwise it's assumed to be a filename. This may be called as a method or as a subroutine. It will be imported into useing namespace when requested: use CGI::AppToolkit::Template qw/template/; NOTE: This module is loaded and this method is called by CGI::AppToolkit->template(), which should be used instead when using CGI::AppToolkit. Example: $t = template('template.html'); # OR $t = CGI::AppToolkit->template('template.html'); # or to read the file in from another source manually open FILE, 'template.html'; @lines = <FILE>; $t = template(\@lines); # must pass a ref # or $t = template(join('', @lines)); # or a single string Create a new CGI::AppToolkit::Template object. The template() method cals this method for you, or you can call it directly. NOTE: If you are using CGI::AppToolkit, then it is highly recommended that you use it's CGI::AppToolkit->template() method instead. OPTIONS include: load (or file), set (or text or string), and cache. load and set are shorthand for the corresponding methods. cache, if non-zero, will tell the module to cache the templates loaded from file in a package-global variable. This is very useful when running under mod_perl, for example. Example: $t = CGI::AppToolkit::Template->new(-load => 'template.html'); # or to read the file in from another source manually open FILE, 'template.html'; @lines = <FILE>; $t = CGI::AppToolkit::Template->new(-text => \@lines); # must pass a ref # or $t = CGI::AppToolkit::Template->new(-text => join('', @lines)); # or a single string Load a file into the template object. Called automatically be template() or CGI::AppToolkit->template(). Example: $t = CGI::AppToolkit::Template->new(); $t->load('template.html'); Sets the template to the supplied TEXT. Called automatically be template() or CGI::AppToolkit->template(). Example: $t = CGI::AppToolkit::Template->new(); open FILE, 'template.html'; @lines = <FILE>; $t->set(\@lines); # must pass a ref # or $t->set(join('', @lines)); # or a single string Makes the template. output and print are synonyms. Example: $t->make({token => 'some text', names => [{name => 'Rob'}, {name => 'David'}]}); Checks to see if the template file has been modified and reloads it if necessary. var loads a variable tagged with NAME from the template and returns it. vars returns a list of variable names that can be passed to var. Example: $star = $t->var('star'); @vars = $t->vars(); The template syntax is heirarchical and token based. Every tag has two forms: curly brace or HTML-like. All curly brace forms of tags begin with {? and end with ?}. Angle brackets <> may be used instead of curly braces {}. For example, the following are all the same: {? $name ?} <? $name ?> <token name="name"> <token name> Use of HTML-like tags or curly brace tags with angle brackets might make the template difficult to use in some GUI HTML editors. NOTE: Tokens may be escaped with a backslash '\' ... and becuase of this, backslashes will be lost. You must escape any backslashes you want to keep in your template. Tokens may be nested to virtually any level. The two styles, curly bace and html-like, may be mixed at will, but human readability may suffer. Line endings may be of any OS style: Mac, UN!X, or DOS. A simple token. Replaced with the string value of a token provided with the specified name key. If a filter() is specified, then the named CGI::AppToolkit::Template::Filter subclass will be loaded and it's filter() function will be called, with the token's value and any parameters specified passed to it. Please see CGI::AppToolkit::Template::Filter for a list of provided filters. NOTE: The template module's ability to parse the parameters are very rudimentary. It can only handle a comma delimited list of space-free words or single or double quoted strings. The string may have escaped quotes in them. The style of quote (single or double) makes no difference. A decision if..else block. Checks token to be true, or compares it to the string text, the subtemplate template, or the number number, respectively, and if the test passes then the template code inside this token is appended to the output text. If there is an 'else' ( {?-- $token --?} or <else>) and the test fails, the template code between the else and the end ( {?-- $token ?}) will be appended to the output text. The comparison operators <, <=, >, >=, or != may be used, and you may also place an exclamation point ( !) before the $token. {?if $token<='a string' --?}...{?-- $token?} {?if !$token --?}...{?-- $token?} {?if !$token!='a string' --?}...{?-- $token?} <-- if token equals 'a string' Comparison is done as a number if the value is not quoted, as a string if is single-quoted, and as a subtemplate if it is double-quoted. This is intended to be similar to the use of quotes in perl: <option {?if ${?-- $state?} {?if $count > 0 --?}{?$count?}{?-- $count --?}<font color="red">$count</font>{?-- $count?} An alternate syntax for the decision if..else block. Checks token to be true, or compares it to the value value, and if the test passes then the template code inside this token is appended to the output text. If there is an 'else' ({?-- $token --?} or <else>) and the test fails, the template code between the else and the end ( </iftoken>) will be appended to the output text. If there is no value="..." given, then the token is tested for perl 'trueness.' If the comparison="..." is given as not or ne then the 'trueness' of the token is reversed. The value, if given, is treated as described in the as="...", or as a number if not specified. Unlike the curly brace form, the style of quoting does not matter. Possible as values are string, template, or number. The token is compared to the value according to the value of comparison="...". Possible values are not (false), ne (not equal), eq (equal), lt (less than), le (less than or equal to), gt (greater than), or ge (greater than or equal to). <iftoken name="thanks">Thanks for visiting!<else>You're not welcome here! Go away.</iftoken> You can mix token stylas as you wish, to the dismay of anyone (or any GUI HTML app) trying to read the template: <iftoken name="id" as="number" value="10" comparison="gt">Your id is greated than 10!{?-- $id --?}Your id <= 10.{?-- $id?} {?if $name --?}Hello '<token name='name'>'.<else>I don't know who you are!</iftoken> {?if $address --?}I know where you live!<else>I don't know your address{?-- $address?} <iftoken id><token id>{?-- $id?} A repeat token. Repeats the contents of this token for each hashref contained in the arrayref provided with the name token and the results are appended to the output text. If the arrayref is empty and ther is an 'else' ({?-- $token --?} or <else>), then the template code between the else and the end ( </iftoken>) will be appended to the output text. A repeat token, as above, except it repeats the line that it is on. The token can appear anywhere in the line. <select name="tool"> <option value="{?$id?}" {?if $id="{?$selected-tool?}" --?}SELECTED{?-- $id?}>{?$name?}{?@tools?} </select> In the above example, the <option ...> line will be repeated for every tool of the 'tools' array. If the id is the same as {?$selected-tool?}, then SELECTED. So, in the code we call: print CGI::AppToolkit->template('tools')->make( 'tools' => [ {'id' => 1, 'name' => 'Hammer'}, {'id' => 2, 'name' => 'Name'}, {'id' => 3, 'name' => 'Drill'}, {'id' => 4, 'name' => 'Saw'}, ], 'selected-tool' => 3 ); And, assuming the file is called ' tools.tmpl,' then the result should look something like: <select name="tool"> <option value="1" >Hammer <option value="2" >Name <option value="3" SELECTED>Drill <option value="4" >Saw </select> A variable token. This will not appear in the output text, but the contents (value) can be retrieved with the var() and vars() methods. The data passed to the make method corresponds to the tags in the template. Each token is a named key-value pair of a hashref. For example, the following code: use CGI::AppToolkit; my $t = CGI::AppToolkit->template('example.tmpl'); print $t->make({'token' => 'This is my text!'}); Given the file example.tmpl contains: <html> <head><title>{?$token?}</title></head> <body> Some text: {?$token?} </body> </html> Will print: <html> <head><title>This is my text!</title></head> <body> Some text: This is my text! </body> </html> Complex data structures can be represented as well: use CGI::AppToolkit; my $t = CGI::AppToolkit->template('example2.tmpl'); print $t->make({ 'title' =>'All about tokens', 'tokens' => [ {'token' => 'This is my text!'}, {'token' => 'Text Too!'} ] }); Given the file example.tmpl contains: <html> <head><title>{?$title?}</title></head> <body> {?@tokens?}Some text: {?$token?} </body> </html> Will print: <html> <head><title>All about tokens</title></head> <body> Some text: This is my text! Some text: Text Too! </body> </html> In this example I combine the use of <?$token?> style syntax and {?$token?} style syntax. <html> <head> <title><?$title><title> <head> <body> <?$body?><br> Made by: <token name="who"> <table> <tr> <td> Name </td> <td> Options </td> </tr> {?@repeat --?} <tr> <td> <token name> </td> <td> <a href="index.cgi?edit-id={?$id?}">Edit<?a> </td> </tr> {?-- @repeat?} </table> </body> </head> </html> <?my $author--> <B><A HREF="mailto:[email protected]">Rob Giseburt</A></B> <?--$author> #!/bin/perl use CGI; # see the perldoc for the CGI module use CGI::AppToolkit; #-- Standard CGI/CGI::AppToolkit stuff --# $cgi = CGI->new(); $kit = CGI::AppToolkit->new(); $kit->connect( ... ) || die $db->errstr; # load the data from a DB # returns an arrayref of hashrefs $repeat = $kit->data('item')->fetch(-all => 1); # Place the loaded data in a page-wide data structure $data = { title => 'This is an example of CGI::AppToolkit::Template at work.', body => 'Select edit from one of the options below:', repeat => $repeat }; # print the CGI header print $cgi->header(); #-- CGI::AppToolkit::Template stuff --# $template = $kit->template('example.tmpl'); # load the 'author' HTML from the template $author = $template->var('author'); # place it into the data $data->{'who'} = $author; # output the results of the data inserted into the template print $template->output($data); This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. Please visit for complete documentation.
http://search.cpan.org/dist/CGI-AppToolkit/lib/CGI/AppToolkit/Template.pm
CC-MAIN-2017-04
refinedweb
1,810
67.65
The topic of combining a database system (usually a conventional relational db system) with a file system to add meta-data, a richer set of attributes to files, has been a recurring discussion item on this and other sites. The article published last week, Rethinking the OS, under the heading “Where Is It Stored?” talks about the ability to locate a file without knowing the exact name or location. The Basic Idea While there are many variations, the essential idea is to enhance the number and type of file attributes to enable location and access to files without knowing the exact path. BeOS added B-tree indices for additional attributes (such as author, title, etc.) thus enabling fast searching based on those attributes. WinFS seems to use Microsft’s SQL server (or a version thereof) to store and index file attributes. Using RDBMS technology to index file attributes enables fast search of files based on a number of attributes besides file name and path. In effect, we enhance access capability by providing many more new access paths that were not available before. At its core, this idea of adding a database to a file system is about adding many more new access mechanisms to files. The increased flexibility enables users to locate files using different terms, especially domain specific terms that have nothing to do with file name and path. The Arguments Against It Many developers and other technically-inclined computer users are very much against this idea. However, their opinion is often not based on sufficient information and clarity to be sound. It is easy to jump to conclusions based on headlines alone or a short summary provided by news. The common criticism is that addition of a database is simply not necessary. After all, the modern file systems do work very well, provide a very fast and efficient access to many files already. Why mess with them? This is the, “If it ain’t broke don’t fix it”, philosophy. This is a perfectly valid argument, as it is well known that the modern file systems are results of a long evolution and therefore have many excellent characteristics. It is important to note, though, that this idea simply adds database to an existing file system and doesn’t eliminate it! The standard file system continues to play the same role as it always has had. It is not being removed of replaced! Another argument is that the overhead introduced by a database system on top of (or next to) a file system is simply not necessary. There are several problems with this argument. First, the overhead is not very well understood today. This argument may be valid only if the overhead is significant. Naturally all implementations strive to keep the overhead minimal. In the past when computing resources were scarce this argument would make sense. Today with massive processing, memory and hard disk capacities available the overhead should be handled without any problems. In fact, future multicore processors can easily execute database operations on one of the cores in background with little to no impact on other tasks. Another argument is that database simply adds unnecessary complexity to the filesystem. There are many RDBMS of different sizes and complexities. In general, database systems being combined with file systems are of the smaller, simpler variety. We are not talking about a massive Oracle database using massive amounts of memory and disk space. In general, they would be some sort of smaller embedded database systems that are much lighter on machine resources. The Perspective The I think the major misconception to such paradigm shifts is not the ‘I forgot the filename’ argument so much as the linking, and referencing to other ideas. This is the power of metadata, and the generation of metadata dynamically is where the power will be realized in such systems. This is the focus of my own project, in which abstracted concepts can be related to abstracted concepts, and by which are able to complete metadata based upon statistical analysis, combined with domain narrowing, as well as human interaction where necessary. This is the real usage of metadata, thereby being able to find any concept based upon it’s metadata. ‘I know I got that report from Bob, via email’ thus these concepts could be related by the nature (domain) of the information one is looking for, and the filters (narrowing) based upon how that information has been acquired. Basically, what these filesystemdatabase systems should try to acchieve is that of replacing the ‘file’ paradigm with a ‘concept’ paradigm. Make all information available via a standardized api that all programs can access and manipulate via a templating system that is dynamically defined. Admittedly, not an easy task, but i’ve had pretty good luck with my implementation thus far. Any ideas, recommendations, caveats, (constructive) criticisms? links? It occurs to me that no discussion on the file system as a database is complete without mentioning versioning! It would be extremely handy to integrate CVS-like versioning capabilities into the operating system to allow for automatic backups of user files… Under Windows this might only affect the “My Documents” folder, or be a user settable property of a folder, such that whenever files are saved to that directory it actually treats it as a revision (if a file with the same name already exists). Then, by adding a couple of simple hooks to access revisions – apps could be easily extended to show differences between visual or textual (not sure how to handle audible) data. For example an image could be versioned and then a “difference” viewed in an image editor that displayed the two images either side-by-side or overlaid with transparency. Obviously this would have some serious storage, and performance impacts – so one might not want to allow the versioning of video files… The key problem I see with adding a “database” to the file system in order to store meta-data about the files is that users rarely enter such metadata. Sure it can be derived from some files – MP3s and ID3 tags. But, those are obvious, and any application that is aware of them already handles them. Perhaps a better approach would be to automatically generate meta-data about files based on user actions. Example: if the do a right-click -> send as attachment, you might want to backfill the name of the receipient(s). People do clean out their “sent” folder occassionally… But this way you’d still be able to query the filesystem to see if you ever sent a file to a person. Combined with built in versioning – you’d even know which revision of the file they had. Windows 2K and XP already let you add in tons of meta-data (it’s a lot like BeOS in that fashion) and with indexing on, it actually leverages it. But I know very few people who actually fill it out beyond what applications automate for them. So while I appreciate the convienience of an embedded database I question whether it would ever actually be used. That’s a really good point about the application of such concepts: how does this information get completed? The weakest point of metadata is admittedly the completion of such data. To date, the only reason that metadata has not been fully utilized is because as usual, the tools are not that complete, or fun, or ideal to use. The referencing of knowledge is not centralized to provide a clean way of doing this. Think about this: the best example I’ve seen of a music library sorting application is (sadly enough) windows media player, in which you can select numerous files, and drag them to the artist, and the file information is updated to reflect where it was dropped. Why is this not a fundamental approach for all information that is stored on a computer, with central repository of knowledge that can be used globally? Example two: Why is it not simple to use your address book in Outlook, Thunderbird, and quicken? Why do they all have their own repositories? The problem isn’t the filesytem, or the database; it’s the tools that are lacking, and a vision to incorporate such concepts at a root level. Heck, why the big philisophical talk? It works great. My company already has built a system like this inside a world class leading database and it really runs well. I am the developer of PhOS based on BeOS (R5.1d0 Dano/Exp), and I see no new ideas in these blurbs about file systems. Not to say they aren’t good ideas 🙂 And that includes versioning. I managed almost 60,000 source code files, on my own system. ow, with that many files, it was a big pain creating and organizing backups of the files.. to mark stable versions, etc… I thought that a system-side cvs-like versioning system would be awesome. So I made one. I add attributes to each and every file, which contain all changes since I last marked the file as stable. This is not done at the OS-level, though I may very well add this into Haiku’s oBFS when the time comes, as it is not terribly difficult. I currently have a development_server running which watches for the startup of an entire list of tools I use for development, and then launches NodeMonitors (Nodes are ‘dumb’ they represent the data in the file, but not the file, and hey represent the only way the file system really thinks of files on BeOS. That is, this whole idea of who cares where a file is idea :-). So, when any of the applicaions are loaded, I simply go into that app from the server and find out what file it is working on, then ensure that the file’s atributes are up-to-date with the changes since the stable version. If not, I update them, using cached data from the file, and will do so for every instance the applicaion either opens or closes that file. Sure, I could very easily do it for every save (I get a notification every time the file is changed), however that is far from needed, as the applications have Undo functions 🙂 I can right-click on a file, and select the Tracker add-on I created just for this purpose, entitled ‘Version Control’. Version Control gives me a couple of choices. I can force my current code to be the new base-code (mark as stable), I can remove any of the revisions saved, or I can force the creation of a revision at this point, without marking that revision as a stable version, and of course I can back-track (or even forward) to a different revision. I can search the revisions for a specific line, etc… It seems complex, but it really isn’t. It just goes hand-in-hand with how BeOS and databased file systems should act. And one step beyond that, I use the same mechanism to parse all of my sources to ensure there are no problems in others areas of code (i.e. compatibility problems). It also contains a list of optimizations for code, and can more or less figure out what I am doing is not the best way, and let me know. But that is the hard part 🙂 –The loon A package like Run Time Access lets you view the statistics, status, and configuration in your running programs as data in a database. This, in combination with a DBMS for a file system, can give the user a unified way to view all data on the computer. If developers want a database for a filesystem then what almost no one mentions is that you will not be able to use object oriented programming as it currently stands. For a filesystem to be truly useful, the entire structure of any file format most use the database top to bottom. If you want to search for that SVG file with the “blue circle next to the red square” then the XML structure of an SVG file most be carefully mapped to a universal object structure upon which the filesystem is based. This universal object structure will not look like OO when all is said and done. You’ll know that it is not OO when people can actually understand it and explain it to others. What will this universal object model be? There are some good choices already out there. One is the relational model. Visit for more information. I think this was a really nice and concise article. Thanks!! This is definately the way to go for end users. I think overhead and system complexity truly are meaningless to the end user – on an average desktop setup, a lot more time is wasted today on finding documents than waiting for some task to finish. Something like this was to be my thesis work. That was until Apple came up with Spotlight. My concern is that a filesystem and thus a user becomes dependant on one database system — ‘free’ or not. Filesystems are currently a choice-once-run-always unless a reinstall or other drastic changes are made. If the database is open and thus convertible or convertible whereas there are different backends then this is less of a problem but what if the FS only supports one database whereas that user doesn’t want that to depend on that one, wants to depend on another or wants to hae the freedom to switch to another easily? Consider you’re using e.g. Oracle or PostGres whereas you have to use MySQL for your filesystem. Thats bloat and limitting; could be very likely not wished for. OTOH, one might actually prefer different databases or at least 2 different services where one is not dependant on the stability of the other. I’m afraid WinFS will create dependence on Microsoft’s proprietary software, i hope others will not make the same compatibility and freedom related mistake. >> While there are many variations, the essential idea is to enhance the number and type of file attributes to enable location and access to files without knowing the exact path. << Isn’t this the part of the concept HFS + and Mac OS uses? it stores the data, but it also stores a unique identifier. You can use eitehr to locate the file, and if the location changes everything gets updated. That way you can move an Application but the Aliases still work. The system can compensate. It also stores a resource file, which if I am paying attention is what Spotlight is going to use as part of it’s search? yet there is no real database. Yep, exactly the right idea. There is a wealth of file meta-data that can be automatically collected and stored from file usage context. While working on my gizmo I found that many file formats have a set of attributes, not always properly filled but better than nothing. Please post a link to your site, where are your docs,… thanks The Tool Problem is really key to all of this, and we don’t have the tools now simply because we don’t have a dominant implementation of such a system, much less a portable standard to work against. The poster mentioned the things that he can do with BeOS, and I’m sure many of the Be programs can leverage the system the same way, but obiviously those programs won’t port (at least that part of their functionality) to something like Windows or Unix. Also, not having a standard API limits porting as well. Simple example are things like extended File Attributes that are supported on most filesystems in some form: NTFS, Linux, Mac, BSD, Solaris all have some form of extended file attributes, but they vary in their implementation and limitations, and no mainstream program that *I* know of leverages these capabilities (save system utilities). Neither HTTP nor FTP have been upgraded to support these new file systems, for example, so you can’t even (directly) transfer a file using extended attributes from one system to another using these protocols. So, those are simple examples of some of the barriers towards adoption of these extended file systems. An open API would help a lot so that developers can start to use these features. For example, I don’t think any of the mainstream scripting languages have support for extended file attributes. But the attributes exists, they’re here, now, and actually wide spread. There’s hope that over the next year or two we’ll see some stabilization among the feature sets so that developers can actually rely on them. Once They(tm) have come to grips on how best to interoperate files using these extended attributes, then we can look at leveraging those attributes using some kind of indexing scheme. Regarding: “Dependence on one database”, that’s not even an issue. If you use an extended file system feature that is unique to a particular filesystem, then you’re stuck on that filesystem regardless of how the developers implemented the filesystem. If not, then you have no dependence whatsoever on that filesystem and can move your files (and ideally, your application) from one filesystem to another. The actual implementaion and internal control structures will be largely irrelevant to your application, just like it is now. I’ve been using swish-e on FreeBSD and Copernic Desktop Search on MS Windows for a while now and I have some concerns and issues. First of all, I think that a lot of this idea that database file systems will radically solve all of the problems of file management are based on a bit of a fallacy. Databases are only as good as the keywords you put into the database. There seems to be an assumption that people unwilling to organize files into a heirarchal database will be willing to add metadata to files. As anyone who has tried to find something through google or through a periodical catalog should know, keyword searches unless constructed well tend to produce either too many misleading hits, or not enough hits. Secondly, well-constructed file trees are metadata. I know that a collection of documents in a folder called Dissertation/draft1 is likely to be the first draft of a dissertation. images/mushrooms/ohio_10_04/*.jpg are likely to be photographs of specimins sighted in ohio. images/mushrooms/ohio_10_04_edited/ should be obvious. Third, a part of me really worries about the possibility for vendor lock-in. Copernic Desktop Seach does not handle OOo files. swish-e will handle everything as long as you add a parser that will extract the text from a file. So one of my concerns is that applications will only have their data indexed in the database if it includes proprietary APIs. well, my site isn’t really a site, per se, but there will be content there soon… openknowledgebase.info nice domain name, huh? I actually do have large plans on this sort of app, so keep your eyes open… I hope that comment doesn’t come back to haunt me… One reason UFS is the best we have, right now, is that it is something different vendors can at least agree on for interoperability. Once we get into higher-level discussions about meta-data and databases, everyone has their own idea that is better than everyone else’s, and it will be a long time before enough ideas are tried that people come to a concensus on them. We’ve had UFS for a couple decades or so, so I expect it’ll be quite a few years before something better that everyone likes comes along. Is not it already realised, tested and be ready to include in mainline ? Article remember me 2 years old reiser4 concept on namesys.com the MAIN objective of such concepts is to be able to relate things together, and have the meaning relevant to the computer and the human. This I would suggest be carried out via all textual relationships: by this I mean that instead of being able to find something by name, it’s actually always referred to via a GUID. This GUID concept has the ability to have related concepts (attributes, metadata) These attributes are really just concepts that also have a GUID. peep this: A Concept(Bob Johnson) has Attribute(Concept(Father)=’Jim Johnson’) therefore a ‘folder’ (used loosely) of ConceptType(Person) would hold references (links, if you will) to those Concepts(‘Bob’ & ‘Jim’) Individually, Bob exists, as does Jim: but with no relation. Now we can add a relationship between these two abstracted types of information, with a commom framework that allows for this type of dynamically generated ‘Types’ to ALL types of information. Depending on how I’ve setup my system, this framework extends itself to many different concepts, and allows flexibilty to grow, and still be able to link to other concepts. whew, are you still with me? Now combine that dynamic, abstracted framework to a gui that would replace all of your current software. This would (via plugins) retrieve all your messages for example, and create the necessary relationships based on they applications setttings, which are also stored in the central db. Lets’ say that your messaging app gets an email from Robert Johnson, that email has properties that the computer could parse through, and create metadata based on what is found. Using GUIDs would also make this much easier. Once you had established that [email protected] was a known person, they’d be given a GUID, to which the emails attribute (From) would have that completed dynamically. i have to stop now, but I think you get the idea? post your thoughts, please! I know that this looks a lot of things in the face, and much of what exists fulfills this idea, but not with a CENTRALIZED repository of information. I read once, that in order to make a great leap, one has to ignore a lot of what has been done. Edison didn’t get the lightbulb right on the first time, and he said he had learned of thousands of ways NOT to build one. I think the same applies in such ideas of computing also. cheers! Have you investigated any SmallTalk implementations? It has many of the aspects which you are thinking about. It is basically like an OS which is image based instead of file based. Everything is stateful including the virtual machine. Check out Squeak or another installation. I have heard of it, but paying the bills fixing M$ code all the time has not left me enough time to fully investigate it… The more research I’ve been doing, the more I’ve been finding references to it, also. I haven’t heard it put the way you have in your post. That’s the last straw!!! i’m going to check it out right now. Thanks! Furthermore, the gui I’ve been writing is in .Net, but the further along I get, the more I’ve been seriously considering a platform change, the ‘vendor lockin’ people are always talking about. i’ve been considering Mono, on fedora core 3, or a linux from scratch alternative: i need the linux practice , (or I’ve outdorked myself). As always main problem with developing such a systems is formalization. Human being doesn’t think neither express itself in logical, formalized way; computer per se cannot understand illogical and contradictional commands. Well, users can be teached to perform some less or more simple operations, like give name to a document (do your letters at home desk have names?), save it (do your hadwritten essays need saving?) – but this is not natural. Mixing file system, database, metadata should in principe create something more natural. Unfortunately this involves two stages of formalization – first while creating metadata (this must be automated, otherwise it’s pointless), second while performing searces on this data (this requires understanding human language at least). All the rest is pure technical problem and many degrees simpler. Seems that this task is unsolvable. There can be many partial solutions (many files have extended attributes/tags, created automatically; XP can create indices, based on file content; MS has SQL application, able to understand natural language queries etc) but noone has developed complete solution yet (for many decades already). Microsoft tries this with WinFS – which is adjourned many times. Seems that they cannot solve these problems too. Versioning capabilities will be added eventually, either covering full file system or a portion of it. It is just a matter of time, as well as ability to create a simple interface as the current version constrol system interfaces are too complex for an average user. Certainly most applications are capable of processing attributes of the file types they deal with (music players can access music file attributes for instance). However, to be able to access attributes of many/most file types an application would require knowledge of many file types, which is very cumbursome, expensive and difficult to maintain. For instance, file manager search command is limited to simple text and similar ASCII based files because it is too difficult to embed bunch of filters for other binary types. Thus storing these attributes in a database that is not file format dependant would enable any application to access attributes of any file format so long as it can access the API. It is precisely the fact that we are unable to access these attributes that are file format specific from any context (such as generic applications, file manager, …) that discourages users to enter them. Their scope is too limited. They become truly useful only on a global level, when searching not just from a specific application a specific set of files, but a full file system. Therefore, standardized API is likely to encourage users to start using the attributes as they will be accessible from any application. It is precisely the fact that we are unable to access these attributes that are file format specific from any context… I agree 100%, this is vital to the completion of metadata. Also, by creating tools that enable one to create this metadata programatically, ie. a ‘import Plugin’ such as BeOS had with it’s translation system would be a major boon to such a system. Damn I miss BeOS, my new computer doesn’t work with it. There was so much potential there, so long ago. Very good points. There are certainly some difficult issues to be resolved. I believe that the RDBMS technology is sufficinelty standardized today that it should be easy to switch. Most systems have some version of unload/load or dump/load functionality. There is also XML format that can be used to transfer metadata. Database is hidden behind an API such as WinFS and apps should never have direct access to it. Thus it should be easy to replace one type with another. While MS will probably prohibit anything else than MSQL, open source systems will likely offer some choice. That shouldn’t be a problem. That being said, there is no standard API for accessing file metadata. Therefore, there is the danger of being locked into a certain API. For instance, to use Windows platform you have no choice but to use WinFS, hence vendor lockin. I am hoping that eventually a standard multi-vendor API will emerge as WinFS becomes popular and useful. Another problem is what happends to metadata when file travels between 2 file systems. Some sort of accompanying XML content with metadata may be created, or … Remains to be solved as well. There are certainly some implementation and interop issues, no doubt. It is not going to be trivial. Quality of metadata or lack thereof is based on users. The same issue exists today with hierarchies. The difference is that user is not forced to enter information he/she doesn’t have. You can enter title and author if applicable and not bother with a path. Today, you have no choice, hence a lot of people dump most files into a single directory. Having a larger number of available attributes increases chances of entering higher quality values. Google is a huge database. File system database is much smaller, so number of false hits ought to be proportionally smaller. >>> Secondly, well-constructed file trees are metadata Indeed they are. In fact, as I stated in the article they can remain in use, no need to kill hierarchies. In fact, you could define multiple hierarchies, based on different criteria. The main point is that we need more flexibility to be able to use multiple attributes, not just a single hierarchy. Well I think some kind of meta data FS is inevitable, but I really don’t see the directory based system going anywhere. It’s just too easy to reference stuff when you know what it is. The article makes a good point when it says ” Full file path is simply a unique key to a file, necessary to be able to reference files without any ambiguity. They serve the same role as a primary key in database systems. ” Any db file system will have to use this for exactness and just for people’s transitional period. The biggest battle will come when we need to standardize on meta-data names. Then there’s the issue of storing all data as an object that the ‘system’ understands. That’s a faroff goal especially if the data structures are not constant…I htink you’dhave to do something like Java’s XMLEncoder…but man would your harddrive space perish. Try getting the world to standardize on that.. Database-driven OSs have been done in the past (no, not BeOS) decades ago, I dont remember the name but it was one of those weird non-unix operative systems, perhaps it was before of unix. Back in the 70’s and 80’s there was this little OS that was a Database primarily, it was called PICK (Multi-valued database), which lived long before SQL, and even had its own natural query language. It bestowed the virtues of everything being stored as data, including the OS, which was stored in the database. Thinks have changed now, and D3 (latest version of PICK), has followed UNIVERSE and other PICK/Multivalued databases and now live ontop of other OS’s. While not as advanced as you are suggesting, the fundamentals are pretty close. Several posts above advocate versioning in the filesystem. Versioning at the filesystem level is useful only in the context of WORM-type storage, because users always become their own worst enemy with regard to keeping data intact. Combine versioning with Solaris ZFS-style expandable partitions might allow this to happen, where disks can be added or replaced as storage needs grow. Except for hard-core video editors or MP3 pack-rats, 100+GB is almost enough for a true WORM storage, given that most files are only a few KB. I’m still well under 30GB, myself. Individual apps should be handling this sort of thing. People lose their files, fine. Load up the app you used to make it and check through your recent files you worked with (it should store everyone you make and view, and let you sort through and search them). People are going to lose files no matter what you do, and I don’t see adding mass overhead as helping. If you restrict it to home directories it will help (eliminate literally tens of thousands of files that really don’t need to be searched) however you still often end up with massive directories of stuff. It depends a lot on the user, most will only have large numbers of files in their IE cache. I honestly think people will simply not fill out most of the extra information on files and it will become useless for those who need it. Those who don’t need it may continue to simply organize their directories and pick good filenames. Most people make, save, write, and think later. And typically, non-techie, users will see files as something associated with an application and not associated with the computer as a whole. So do find that word doc they will often open word to find it (and this is logical, and if apps were written with this in mind it would work too). The mega-finder app is a dream that’s just not gonna solve all our problems in 4.2msec. And I got news for ya, computing resources are still scarce. It’s just that we seem to have infinite text processing capabilities; but you throw in video and graphics and you find the modern PC slowing down and begging for well written code. Also, using a standard SQL server for this is MASS overkill; but I imagine developers realize this. I hope so anyway; I for one do not run mySQL on my desktop for a reason. If somebody gets a great implementation of this integrated with a filesystem I will support it; I probably won’t use it for years but I’ll support it! Good luck. Your concepts and attributes appear to be elements of a semantic network. AI (Artificial Intelligence) research has been developing this sort of ideas for a long time now. Unfortunatelly most of the academic papers are very dense reading. Still, you should examine some summary texts regarding semantic nets in AI. Long term AI will play a major role, but it is too early today. Only a few brave companies are playing with some semantic processing mostly of documents to extract metadata. I’m a programmer, have been for almost 20 years now. I still get tickled when I can’t find something on my BeOS machine and do an attribute search and come back with what I am looking for instantaneously. I cannot imagine how cool Apple’s Spotlight will be. Why should I write a tool to do anything like that when the OS can do it (and SHOULD do it) for me? Afterall it is the OS’s job to manage my devices and allow me easy access to the data contained on those devices. Mike DonQ, Excellent post! I was hoping to advance the discussion There is certainly very large chasm between human languages which are full of ambiguities and precise, discrete computer languages and data. At the moment the translation is mostly performed by us, humans. For example, when entering a query in Google, we often revise queries multiple times to add/replace synonyms. Especially skilled users can reformulate queries in many ways to convey the same semantics. This is both tedious and difficult (time consuming). In the short term there are a few tricks, heuristics that can be programmed to ease the burden. Google already performs word processing such as removing s for plural, removing punctuation, etc. – normalizing the search terms. Word comparison can be relaxed to tolerate 1 or 2 letter difference to account for minor spelling mistakes. More ambitous programmer could add a thesaurus and perform synonym replacement automatically, attempting a series of queries until results are obtained. And so on. Long term solid NLP (natural language processing) technology will be essential to translate between the 2 worlds. I believe that Microsoft already has staff working on NLP for use in their Office products (remember the clip?). I believe that it will be a gradual evolution starting with simple heuristics (some of which are already in use today) to full NLP. It will take time though. Unfortunately few companies and OS projects are putting any effort into it. I believe that Seth Nickell () is trying to use a parser in his project. Having applications automatically extract and store metadata from file content itself can alleviate both problems of using consistent terms as well as metadata collection effort. By copying the same terms as used in content into the database the terms should be consistent. In addition, user is not burdened with metadata entry. As Bob Johnson mentioned eralier, there is also contextual information that can be interpreted and stored as metadata. Thus ideally user would not even notice that metadata was stored. While full NLP may not be available today, simple keyword based search are very much possible as Google has already demonstrated. WinFS delays have multiple causes. Keep in mind that they have a huge and mostly conservative user base. Therefore, it takes them much longer to develop stuff, especially brand new technology such as this because they don’t get 2nd chances. The trouble is that WinFS is at such a low level (file system) that they have to be careful not to adversely impact existing applications and users. For instance, most games may have no need for the new WinFS features, but need top performance. Microsoft has to ensure that games still get great performance. It is a very complex balancing act for them. Any tinkering at such a low level in an OS is inherently risky. Microsoft doesn’t like risk, so they probably want to take the extra time to make sure that it works well. rarely comes up, and that’s lazyness. If people are being too lazy to maintain an orderly file tree and put their files where they need to go to be found – Do you REALLY think these people are going to take the time to fill out all these extra attributes at save time? Sure some of these can be plugged in by the program that created them, but if you end up with 100 files all with identical sub-info, what are you accomplishing besides chewing up disk space for what amounts in functionality to little more than what a three letter extension like .doc or .jpg does?. In my experience you are LUCKY if your user takes the time to enter more than ten letters as a filename, and on the occasion you find names longer than that it’s usually because M$ office auto-named the file after the first sentence of it’s content. You really think they are going to spend the time to fill out MORE? People are WAY lazier than that. On top of which, the hard drive is STILL the biggest bottleneck in a modern computer. Adding more data to parse through on writes, reads, and searches looks really stupid on paper. The article hits on the most important point – all this should be OPTIONAL. Leave the underlying system intact, and allow files to be created without the overhead… The biggest fear anyone has is being saddled with options they don’t want, don’t use yet still slow everything down – which is why you see so many vehement attacks against it. Like “task orientation” – Big fancy word for wizards, and a lot of users HATE wizards. Great for nubes, screws everyone else. Switching to ‘task oriented’ only pisses people off and wastes time, for god sakes leave the underlying option boxes in place. >>> Any db file system will have to use this for exactness and just for people’s transitional period. The biggest battle will come when we need to standardize on meta-data names. Agreed. I have seen some WinFS presentations that showed an elaborate meta-schema. They are developing a system of types, metadata names. I am not sure there will be a battle. WinFS stands alone in scope and may have the 1st mover advantage. Hard to tell. >>>. Exactly. >>> laziness This used to be my opinion. Those lazy users, why don’t they do the proper job of creating directories and organizing files?!? I was annoyed with the graphics designer Jason who kept placing all his files onto desktop, forget directories. Then I realized that he works and thinks in graphical terms, not text based hierarchies. He works with images all day long and directories simply don’t exist in his world, don’t interest him. Indeed, there are many users who either don’t understand directory hierarchies and/or lack the skills to create good hierarchies. It has nothing to do with laziness! We, programmers, need to understand and accept this fact. We need to accomodate these users even if we don’t like it. >>> 100 files all with identical sub-info The current apps make a feeble attempt to automatically enter obvious and useless information, such as Adobe attaching, Created by app: xxxxxxx Of course such information is not of much use. However, should metadata be stored and managed at the OS level and become much more accessible, both application and users would make a greater effort to enter useful information. Today there is very little incentive. For example, note how many music files have ID3 tags with useful info (artist, title, ….) This has happened only because of P2P apps followed by the players that make excellent use of the information. Users followed and made the extra effort to enter the information. They in turn were followed by more apps, such as music file meta databases, etc. The virtuous cycle was created between programmers building apps and users trying to attach and make the most use of metadata. In fact, the entire Napster/P2P phenomenon would not be possible without metadata. >>>. Most ID3 tags have at least 3 attributes that are important to users: title, author, album Many applications have options to create file names from these. The result are very large file names, so most users end up with title alone. I still think that ID3 is a huge success, demonstrates the potential of attaching metadata to files and making it widely accessible. >>> In my experience you are LUCKY if your user takes the time to enter more than … Certainly users should not have to enter all metadata. Aplications will have to become more intelligent and fill some of the data automatically. Many apps already do this to a limited extent. >>> Adding more data to parse through on writes, reads, and searches looks really stupid on paper. Storing metadata in database will avoid having to parse/process file content all the time to access the values. The main reason for using a database is precisely to avoid this overhead. Instead, parsing may take place only on file create and write. Even then it is likely to be handled by a separate lower priority background thread. One of the reasons WinFS is late is that Microsoft is making sure that the overhead is not significant, to minimize the need for parsing. >>> all this should be OPTIONAL I see no reason why the flexibility to enable/disable should not be provided. It could be enabled/disabled based on location (only for home directories) and/or file extensions (data file types), etc. There are certainly files that need to be excluded (browser cache, systems files, etc.) OK, what is the need? The need is to that your father is able to find easily a file. There are two ways to find easily a file: – by its name which is done correctly with current implementation. – by its content, which is a problem. Note: I didn’t say by its metadata, but by its content, as currently MP3 tags are inside the files. If you want to find a file by its content, either a) the ‘file search’ tool must understand the format of the file searched, or b) the application which creates the files must extract the content of the file to put it in the metadata for example. For me the advantage of a) is that 1) it doesn’t rely on the capability of the FS, the DB is as-it is now managed by the search tool. 2) the search tool is able to read the whole content of the file not just the metadata. The big downside of a) is that the search tool must be very intelligent to understand each format, but on practive very few formats are needed to cover 90% of the need: doc format for document, MP3 tags for songs, ASCII files is enough (add images/video tags extration for 99%).. You advocate modifying the OS for adding capabilities to search tool, I wonder if just modifying the search tool wouldn’t be enough? You advocate modifying the OS for adding capabilities to search tool, I wonder if just modifying the search tool wouldn’t be enough? To cut into the conversation… Well my first thought is that it would make incremental indexing easier and faster. How to notify the search tool of every modification, making sure nothing is missed, without a costly re-index? What happens when the program is not active when files are added and modified? Storing this in the FS structures is much more efficient and reliable. How, also, would other applications be able to take advantage of this searching and present data in convenient ways for the application domain? It’s really more than just a program for searching. The point of a database is for the data to be pulled apart and tagged in a consistent way instead of thousands of incompatible file formats which are very hard to process automatically using general tools. My thoughts on network interchange… Invent a file format that can represent whatever model you have (please, not XML, we need non-text too). Convert to-and-from. And then wait for HTTP/2.0 😉 While I agree that if every application is made ‘meta-data’ aware, this is probably less costly than having an external tool doing the indexation (even though the indexation is of time aware and only scan modified directories/files), I’m quite wary of having to be sure of the behaviour of each application to make something work correctly.. What happen if an application do not update its metadata? You can’t search correctly those files: so bye-bye porting easily applications from other OS. Of course if the ‘search tool’ doesn’t know how to handle a file type, there is the same problem, but a bit smaller as if several applications use the same file format you need only one plugin for parsing the file type.. Also you’re writing about using non-text in the metadata, I’m not sure what you plan to put inside.. Images? Sure there are some AI applications which are able to search inside images, but noone use them.. Also you’re writing about using non-text in the metadata, I’m not sure what you plan to put inside.. Images? Sure there are some AI applications which are able to search inside images, but noone use them.. BeOS stored file-icons as metadata. Thumbnails for hi-res images and perhaps also for video-files makes perfect sense to store as metadata (Haiku will most probably do this for image files). While this is not meant for searching, I still see it as the “right” way to store icons and thumbnails. Also, at some point in the future it will probably be quite normal to do image searches on your home computer to find that photo of your dog eating the neighbour’s ice cream. As for what Luke was thinking of when mentioning non-textual data, he will have to answer for himself. (Representation of objects from an OO language, maybe?) >>> but on practive very few formats are needed to cover 90% of the need: doc format for document, MP3 tags for songs, ASCII files is enough (add images/video tags extration for 99%).. Indeed, there are few formats used by the majority that would cover most needs. However, there are many specialized formats as well, such as CAD/CAM, software design and many other industries have their own formats. Worse yet, while Word defines specific title, author and other meta-attributes, others don’t. XML and HTML can encode attributes in many ways. There are many XML DTDs/schemas, each would need a specific filter. The industry is dynamic and formats are not static. They keep changing, new ones are created all the time. Therefore, there is a large number of old obsolete formats that still need to be supported as files can be stored for a long time. Thus we have the burden of both maintaining filters for old formats and developing new ones to keep up with the latest popular formats. Finaly, some formats are not public, are private property of companies. They can be very difficult or impossible to process. You get into all kinds of legal issues. While on the surface it seems simple to handle the few most popular formats, in practice it is not good enough. >>> You advocate modifying the OS for adding capabilities to search tool, I wonder if just modifying the search tool wouldn’t be enough? Make it plural, tools and applications. Some search from a file manager, others from an application such as a word processor, or mail client. All the applications would need this capability, all the filers replicated. That is why placing the parsing/processing of file formats at the OS level would remove redundancy and make it much easier to implement. I hope you’re right, especially on the API / database independence part. After reading the posts here, there’s another thing i’m wondering about: the current metadata. What are you going to do with that? I’m gonna use MP3 ID3 tags as example even though there are multiple versions (ID3v1 and ID3v2). I’ll keep numbers low for the sake of simplicity, but they could just as well be higher in another example so please don’t apply the solution that migration is easy in this example. Lets say a user has a number of these MP3s files. * The user has 10 correct ID3v1 tagged MP3s in the directory “artist/cdname” which he/she ripped from a CD him/herself in 1999 (these defined as ‘X’). * The user has 6 correct ID3v2 tagged MP3s in the directory “artist/cdname” which he/she ripped from a CD in 2004 him/herself in 2004 (these defined as ‘Y’). * The user has 4 MP3s which he/she downloaded from the Internet. ** 1 of them has the filename ‘a.mp3’ and has no ID3 tag (this defined as ‘Z1’). ** The other one has the filename ‘artist – songname’ and has no ID3 tag (this defined as ‘Z2’). ** The thirth one has the filename ‘b.mp3’ and has a correct ID3v1 tag (defined as ‘Z3’). ** The last one has the filename ‘c’ and has a correct IDV2 tag (defined as ‘Z4’). Now, my questions in this example are: 1) How would you know how (or actually do it) to to index this correctly. What would you index? Would you ‘use’ the ID3v1 tags and/or the ID3v2 tags? What about the filenames then? How do you know which delimiter to use (often used in ‘cut’ but thats not ‘AI’ of any kind). 2) Where would you store it for use? Would the ‘system’ (DBFS system) take the ID3vX tags and put that into its own database? Would it try to be in sync or would it remove the ID3vX tags and put that metadata into its system? Would it use it, and then simply ignore it? 3) I’m really wondering if this example also does apply on other examples or if the MP3 example is simply just a popular example but not representative.). PS: versioning is nothing new (VMS had it). Afaict, this doesn’t have to use a DBFS or architecture at all, but i’ll look a bit more into OSF now so take it with a grain of salt. Expandable partitions is nothing new either (XFS has it, LVM allows it easily). is that this is a paradigm shift that WILL happen, it’s the natural progression. I think what most people don’t think about is the underlying concepts intend to replace the traditional file system IN TIME, not right now. This discussion has brought up some major issues, which are valid: 1. Backwards compatibility Personally, the software I’m working on will import all of the current information, and pretty much do away with my current software. 2. This is not intended to be a file finder: the concepts involved (at least in my project, and many others) are intended to create a new way of storing data, so that the current idea of dealing with a file is obsolete. One would deal with concepts that can be related to concepts. In the webcentric world that we are quickly evolving, you mostly just deal with (html, css, etc) formatted database entries. The ideal with systems such as this include application servers more than anything. 3. Versioning:…. Luke put this very well: The point of a database is for the data to be pulled apart and tagged in a consistent way instead of thousands of incompatible file formats which are very hard to process automatically using general tools. The way I see it, the metadata should be more accessible to applications and the user. One should be able to dynamically create metadata that is relevant to the object, and there should be a somewhat decentralized definition of the base of an object: a person has a first name, last name, etc…. Basically, the aim of my software is to stop reinventing the wheel, and make my life easier with not using bad, buggy, and non-datacentric software. in the end, it really doesn’t matter what people believe will happen with these concepts of the database as a filesystem. Basically there is no right answer when it comes down to it. Just because there are files, directories and what not and it works doesn’t mean there aren’t any other good paradigms to find. It just happens that files and directories have done the job for a long, long time. (Longer than i’ve been around, that’s for sure), but i think for truly innovative things to appear, one must disregard common thought. it reminds me of a creative thinking class I had at engineering schoool, we had to come up with an invention: I came up with automatically dimming headlights. The teacher gave me an ‘D’, and told me that GM tried it in the 50’s, and it worked for crap. I replied that perhaps there’ve been some advances in technology since then, and it might work a little bit better these days. Since then, I’ve seen it in numerous models of European cars. This is the point: just because it didn’t work in the past, or seems like it won’t, I won’t let someone like that discourage me from proving them wrong;) cheers!. A few points to this one. By default, you show the latest version. VMS had this in the shell, in the form of filename.extension;version. If you did not specify the version, it took the latest. I couldn’t find so fast how the VMS FS exactly functions. The user manual doesn’t seem to explain that. What you say applies to the editting of files; as in, creative work from humans. That could be e.g. text, audio, video, code or a combination of that. What you say sounds very much like a Wiki on FS layer, which i read about at Shoulexist.org the other day: One really awesome example of usage could be that you’re able to ‘diff’ 2 versions against each other. Besides being able to see what the difference of content is, it would also be interesting to see other differences between these 2 versions: when was it editted (‘modified’), who editted it (‘username’ and/or ‘realname’), eventually statistics or allowing a feature such as moderation (‘how useful does each $user find this edit?’). As i said, what you say applies to ‘editting of files’. Files which use static, such as libraries and binaries, don’t necessarily need this. This makes one think it should be able to turn versioning off, or to turn it on at specific directories and/or files. Logs also don’t need this because they already have this function! Actually compressing old logs, such as with logrotate, is very useful against too much storaged data which ain’t used hence this would apply eventually to ‘versioning’ in FS as well. This is not directly related to this article but I think it might be of some interest: There was an interesting article about the possibility of replacing directories with permanent database queries on the OpenBeOS (now HaikuOS) Newsletter. I don’t know if it’s somewhere on their new site, so here is a link to the old one: I think that, meta-data will be used where it is either relatively easy or very desirable for the users to create the meta-data. This applies particularly particularly to digital archives of music, films, pictures and the like. Meta-Data entry could be made fairly easy for users in all kinds of ways: Cameras with built-in GPS could generate tags which indicate where pictures were taken (useful for identifying holiday snaps). Where meta-data is harder to generate, or is less useful, other methods of cataloguing files are required. eg. full text searh of textual data. Kramii. Users could select a set of files and apply the same meta-data to them all, eg. selecting a whole lot of pictures that have dogs in them and entering “Pictures of dogs” in the meta-data. >>> Now, my questions in this example are: … Indeed, there are certainly a lot of details to be defined. It is not going to be a simple 1-1 mapping from file format attributes to database columns. Some sort of heuristic will have to be employed to filter out junk. For instance, if the attribute Author=a is found then filter ought to realize that this is junk, should be ignored. Using a dictionary of words and common names could improve accuracy. It won’t be trivial. Maybe this is one of the reasons for WinFS delyas? >>>). Conventional approach would be to define a plugin mechanism of some sort so that filters for formats can be added and removed as necessary, either by the original creator (Microsoft for WinFS) or developers using the API (apps using WinFS). Alternative is for apps themselves to supply the values via the API. Without going into too much details, let me just point you at two links: Both products are widely used in the legal industry (document centric) and do an amazing job at abstracting the filesystem from the user. All a person has to do is say “I want all briefs from my xxx case with the name John Smith somewhere in the text” and presto! It seems like the various projects discussed recently are trying to make this type of functionality available to the masses. A great idea if implemented properly, but a terrible idea unless they can get all vendors to change their file open/save dialog boxes… > Why is it not simple to use your address book in Outlook BeMail runs a query on People files (each contact is a 0 length file in BeOS), to populate the To: popup menu, and does typeahead on that list. > What about versionning ? As others mentioned, VMS has that since ages. You might as well have a look at cvsfs-fuse and WayBack: > That was until Apple came up with Spotlight. Apple didn’t come up with that, they hired an ex-Be engineer and sto^H^H^Hreused what BeOS has for years. > Neither HTTP nor FTP have been upgraded to support these new file systems Good point, I often have to zip up files to ftp them between beos boxen. Time to write an RFC I guess > An open API would help a lot so that developers can start to use these features. I’ve been thinking around about making a compatibility layer for BeOS attributes and Linux ones (POSIX attr), but they aren’t even compatible semantically. POSIX attrs are just a couple (char name[],byte value[]), while BeOS attributes are (char name[], uint32 type, char value[]) (type is not part of the key though; i.e. you can’t have 2 attrs of the same name even of different types). Simplest would be storing the type as the first 4bytes of value in linux, but I’m sure ppl will object it makes strings less readable and blah blah (hough updated tools could parse them and show that correctly). > Damn I miss BeOS, my new computer doesn’t work with it. You might want to have a look at bebits, chances are you’ll find patches for more than 1GB of RAM, athlonXPs and fast CPUs… and new drivers for new video cards > There was so much potential there, so long ago. <advertising> There *is* potential: – > Another problem is what happends to metadata when file travels between 2 file systems. Well, BeOS handles that to some extend, that is old MacOS resources are published as attributes, so if you copy a file to BFS, then back to HFS they are preserved at least. > In fact, you could define multiple hierarchies, based on different criteria. > The main point is that we need more flexibility to be able to use multiple > attributes, not just a single hierarchy. That’s something BeOS doesn’t provide off-hand but could be possibly implemented in Tracker, by building special type objects which would contain namespace description using attribute names, and filter-outs, like {MAIL:thread}/{MAIL:when}/{name}. Seriously thinking about prototyping something with Tracker. > Toss in some google style search technology so we don’t all need to know regular expressions and SQL syntax, Or just use BeOS, you don’t need to know the query language to use it, you can build queries graphically: Btw, nice article there describing what one can already do with bfs. I think I’ll keep a bookmark on that news, I’ll read all the messages later, I’m sure there are nifty idea I could prototype in BeOS/Zeta Talking about bookmarks: That’s no more a prototype The idea is to integrate Google’s search withing BeOS. You can run a query on that volume, and it will publish google’s answers as bookmarks (0 length files with url, title, … attributes). So you don’t need a static bookmark anymore wtf! now thats an insane idea. but can those files/bookmarks be saved so that i can pass them on to someone else as a collection via some kind of storage media? haveing the folders in fact be preset database querys are one of the more interesting parts of useing a database as a filesystem (or part of one). if it allso allows for metadata setting by drag and drop into one of these preset quesrys/folders (i belive this was a feature presented for winfs) then over time the setting of metadata dont have to be as troublesome as it can be at times right now…
https://www.osnews.com/story/9228/file-systems-and-databases/
CC-MAIN-2019-51
refinedweb
10,559
60.95
Writing scripts to interact with Web sites is possible with the basic Python modules, but you don't want to if you don't have to. The modules urllib and urllib2 in Python 2.x, along with the unified urllib.* subpackages in Python 3.0, do a passable job of fetching resources at the ends of URLs. However, when you want to do any sort of moderately sophisticated interaction with the contents you find at a Web page, you really need the mechanize library (see Resources for a download link). One of the big difficulties with automating Web scraping or other simulations of user interaction with Web sites is server use of cookies to track session progress. Obviously, cookies are part of HTTP headers and are inherently visible when urllib opens resources. Moreover, the standard modules Cookie ( http.cookie in Python 3) and cookielib ( http.cookiejar in Python 3) help in handling those headers at a higher level than raw text processing. Even so, doing this handling at this level is more cumbersome than necessary. The mechanize library takes this handling to a higher level of abstraction and lets your script—or your interactive Python shell—act very much like an actual Web browser. Python's mechanize is inspired by Perl's WWW:Mechanize, which has a similar range of capabilities. Of course, as a long-time Pythonista, I find mechanize more robust, which seems to follow the general pattern of the two languages. A close friend of mechanize is the equally excellent library Beautiful Soup (see Resources for a download link). This is a wonderful "sloppy parser" for the approximately valid HTML you often find in actual Web pages. You do not need to use Beautiful Soup with mechanize, nor vice versa, but more often than not you will want to use the two tools together as you interact with the "actually existing Web." A real-life example I have used mechanize in several programming projects. The most recent was a project to gather a list of names matching some criteria from a popular Web site. This site comes with some search facilities, but not with any official API for performing such searches. While readers might be able to guess more specifically what I was doing, I will change specifics of the code I present to avoid giving too much information on either the scraped site or my client. In general form, code very much like what I present will be common for similar tasks. Tools to start with In the process of actually developing Web scraping/analysis code, I find it invaluable to be able to peek at, poke, and prod the content of Web pages in an interactive way in order to figure out what actually occurs on related Web pages. Usually, these are sets of pages within a site that are either dynamically generated from queries (but thereby having consistent patterns) or are pre-generated following fairly rigid templates. One valuable way of doing this interactive experimentation is to use mechanize itself within a Python shell, particularly within an enhanced shell like IPython (see Resources for a link). Doing exploration this way, you can request various linked resources, submit forms, maintain or manipulate site cookies, and so on, prior to writing your final script that performs the interaction you want in production. However, I find that much of my experimental interaction with Web sites is better performed within an actual modern Web browser. Seeing a page conveniently rendered gives a much quicker gestalt of what is going on with a given page or form. The problem is that rendering a page alone only gives half the story, maybe less than half. Having "page source" gets you slightly further. To really understand what is behind a given Web page or a sequence of interactions with a Web server, I find more is needed. To get at these guts, I usually use the Firebug (see Resources for a link) or Web Developer plug-ins for Firefox (or the built-in optional Develop menu in recent Safari versions, but that's for a different audience). All of these tools let you do things like reveal form fields, show passwords, examine the DOM of a page, peek at or run Javascript, watch Ajax traffic, and more. Comparing the benefits and quirks of these tools is a whole other article, but do familiarize yourself with them if you do any Web-oriented programming. Whatever specific tool you use to experiment with a Web site you intend to automate interaction with, you will probably spend many more hours figuring out what a site is actually doing than you will writing the amazingly compact mechanize code needed to perform your task. The search result scraper For the purposes of the project I mentioned above, I split my hundred-line script into two functions: - Retrieve all the results of interest to me - Pull out the information that interests me from those retrieved pages I organized the script this way as a development convenience; when I started the task, I knew I needed to figure out how to do each of those two things. I had a sense that the information I wanted was on a general collection of pages, but I had not yet examined the specific layout of those pages. By first retrieving a batch of pages and just saving them to disk, I could come back to the task of pulling out the information I cared about from those saved files. Of course, if your task involves using that retrieved information to formulate new interactions within the same session, you will need to use a slightly different sequence of development steps. So, first, let's look at my fetch() function: Listing 1. Fetching page contents import sys, time, os from mechanize import Browser LOGIN_URL = '' USERNAME = 'DavidMertz' PASSWORD = 'TheSpanishInquisition' SEARCH_URL = '?' FIXED_QUERY = 'food=spam&' 'utensil=spork&' 'date=the_future&' VARIABLE_QUERY = ['actor=%s' % actor for actor in ('Graham Chapman', 'John Cleese', 'Terry Gilliam', 'Eric Idle', 'Terry Jones', 'Michael Palin')] def fetch(): result_no = 0 # Number the output files br = Browser() # Create a browser br.open(LOGIN_URL) # Open the login page br.select_form(name="login") # Find the login form br['username'] = USERNAME # Set the form values br['password'] = PASSWORD resp = br.submit() # Submit the form # Automatic redirect sometimes fails, follow manually when needed if 'Redirecting' in br.title(): resp = br.follow_link(text_regex='click here') # Loop through the searches, keeping fixed query parameters for actor in in VARIABLE_QUERY: # I like to watch what's happening in the console print >> sys.stderr, '***', actor # Lets do the actual query now br.open(SEARCH_URL + FIXED_QUERY + actor) # The query actually gives us links to the content pages we like, # but there are some other links on the page that we ignore nice_links = [l for l in br.links() if 'good_path' in l.url and 'credential' in l.url] if not nice_links: # Maybe the relevant results are empty break for link in nice_links: try: response = br.follow_link(link) # More console reporting on title of followed link page print >> sys.stderr, br.title() # Increment output filenames, open and write the file result_no += 1 out = open(result_%04d' % result_no, 'w') print >> out, response.read() out.close() # Nothing ever goes perfectly, ignore if we do not get page except mechanize._response.httperror_seek_wrapper: print >> sys.stderr, "Response error (probably 404)" # Let's not hammer the site too much between fetches time.sleep(1) Having done my interactive exploration of the site of interest, I find that queries I wish to perform have some fixed elements and some variable elements. I just concatenate those together into a big GET request and take a look at the "results" page. In turn, that list of results contains links to the resources I actually want. So, I follow those (with a couple of try/ except blocks thrown in, in case something does not work along the way) and save whatever I find on those content pages. Pretty simple, huh? Mechanize can do more than this, but this short example shows you a broad brush of its capabilities. Processing the results At this point, we are done with mechanize; all that is left is to make some sense of that big bunch of HTML files we saved during the fetch() loop. The batch nature of the process lets me separate these cleanly, but obviously in a different program, fetch() and process() might interact more closely. Beautiful Soup makes the post-processing even easier than the initial fetch. For this batch task, we want to produce tabular comma-separated value (CSV) data from some bits and pieces we find on those various Web pages we fetched. Listing 2. Making orderly data from odds and ends with Beautiful Soup from glob import glob from BeautifulSoup import BeautifulSoup def process(): print "!MOVIE,DIRECTOR,KEY_GRIP,THE_MOOSE" for fname in glob('result_*'): # Put that sloppy HTML into the soup soup = BeautifulSoup(open(fname)) # Try to find the fields we want, but default to unknown values try: movie = soup.findAll('span', {'class':'movie_title'})[1].contents[0] except IndexError: fname = "UNKNOWN" try: director = soup.findAll('div', {'class':'director'})[1].contents[0] except IndexError: lname = "UNKNOWN" try: # Maybe multiple grips listed, key one should be in there grips = soup.findAll('p', {'id':'grip'})[0] grips = " ".join(grips.split()) # Normalize extra spaces except IndexError: title = "UNKNOWN" try: # Hide some stuff in the HTML <meta> tags moose = soup.findAll('meta', {'name':'shibboleth'})[0]['content'] except IndexError: moose = "UNKNOWN" print '"%s","%s","%s","%s"' % (movie, director, grips, moose) The code here in process() is an impressionistic first look at Beautiful Soup. Readers should read its documentation to find more on the module details, but the general feel is well represented in this snippet. Most soup code consists of some .findAll() calls into a page that might be only approximately well-formed HTML. Thrown in here are some DOM-like .parent, nextSibling, and previousSibling attributes. These are akin to the "quirks" mode of Web browsers. What we find in the soup is not quite a parse tree; it is more like a sack full of the vegetables that might go in the soup (to strain a metaphor). Conclusion Old fogies like me, and even some younger readers, will remember the great delight of scripting with TCL Expect (or with its workalikes written in Python and many other languages). Automating interaction with shells, including remote ones such as telnet, ftp, ssh, and the like, is relatively straightforward since everything is displayed in the session. Web interaction is slightly more subtle in that information is divided between headers and bodies, and various dependent resources are often bundled together with href links, frames, Ajax, and so on. In principle, however, you could just use a tool like wget to retrieve every byte a Web server might provide, and then run the very same style of Expect scripts as with other connection protocols. In practice, few programmers are quite so committed to old-timey approaches as my suggested wget + Expect approach. Mechanize still has much of the same familiar and comforting feel as those nice Expect scripts, and is just as easy to write, if not easier. The Browser() object commands such as .select_form(), .submit(), and .follow_link() are really just the simplest and most obvious way of saying "look for this and send that" while bundling in all the niceness of sophisticated state and session handling that we would want in a Web automation framework. Resources Learn - "Build a Web spider on Linux" (developerWorks, November 2006) discusses Web spiders and scrapers and shows how to build several simple scrapers using Ruby. - "Debug and tune applications on the fly with Firebug" (developerWorks, May 2008) shows how to use Firebug to go far beyond viewing the page source for Web and Ajax applications. - "Using Net-SNMP and IPython" (developerWorks, December 2007) details how IPython and Net-SNMP can combine to provide interactive, Python-based network management. - In the developerWorks Linux zone, find more resources for Linux developers, and scan our most popular articles and tutorials. - See all Linux tips and Linux tutorials on developerWorks. - Stay current with developerWorks technical events and Webcasts. Get products and technologies - Download mechanize and its documentation. - Download Beautiful Soup and its documentation. - IPython is a wonderfully enhanced version of Python's native interactive shell that can do some rather fancy things such as aiding parallelizing computations; I mostly use it simply for its interactivity aids such as colorization of code, improved command-line recall, tab completion, macro capabilities, and improved interactive help. - You can install Firebug, which delivers a wealth of editing, debugging, and monitoring Web development tools at your fingertips while you browse, right from the Tools/Add-ons menu of Firefox 3.0+. You can add the Web Developer extension, which adds a menu and a toolbar to the browser with various Web developer tools, the same way. -.
http://www.ibm.com/developerworks/linux/library/l-python-mechanize-beautiful-soup/index.html
CC-MAIN-2014-42
refinedweb
2,144
50.97
A simple application for printing file contents as hexadecimal. Wednesday, 30 April 2008 C#: file hex dump application Posted by McDowell at Wednesday, April 30, 2008 2 comments Labels: C#, hexadecimal Tuesday, 29 April 2008 C#: Hello, World! using System; public class HelloWorld { public static void Main(String[] args) { Console.WriteLine("Hello, World!"); } } Posted by McDowell at Tuesday, April 29, 2008 0 comments Labels: C#, Hello World. Posted by McDowell at Tuesday, April 22, 2008 5 comments Labels: EL, Expression Language, Java, JUEL, Tomcat, Unified Expression Language Monday, 14 April 2008 Java: finding binary class dependencies with BCEL Sometimes you need to find all the dependencies for a binary class. You might have a project that depends on a large product and want to figure out the minimum set of libraries to copy to create a build environment. You might want to check for missing dependencies during the kitting process. Posted by McDowell at Monday, April 14, 2008 2 comments Labels: BCEL, byte code engineering library, dependencies, Java Wednesday, 9 April 2008 Java: finding the application directory EDIT 2009/05/28: It has been pointed out to me that a far easier way to do all this is using this method: ...which makes everything below here pointless. You live and learn! Posted by McDowell at Wednesday, April 09, 2008 5 comments Tuesday, 8 April 2008 Java: synchronizing on an ID If you are a For example, there is nothing in the Servlet 2.5 MR6 specification that says a. Posted by McDowell at Tuesday, April 08, 2008 6 comments Labels: concurrency, Java, mutex, synchronization Java: Hello, World! Posted by McDowell at Tuesday, April 08, 2008 0 comments Labels: Hello World, Java
http://illegalargumentexception.blogspot.co.uk/2008/04/
CC-MAIN-2018-22
refinedweb
281
51.58
This article describes how to read in image scenes from the EarthAI Catalog into a RasterFrame. In a previous article, we discussed how to query imagery data using the EarthAI Catalog API. Now we will show how to read that catalog of imagery scenes into a RasterFrame using spark.read.raster. Import Library We import the EarthAI library as a first step. from earthai.init import * a RasterFrame cat contains a catalog of Landsat 8 imagery scenes. It contains references to the imagery files, but not actual imagery. To read in the imagery, you can pass cat to spark.read.raster and it will read these imagery scenes into a RasterFrame. When passing a catalog to spark.read.raster, you must also provide a list of bands you wish to read, in the catalog_col_names parameter. To view a list of available Landsat 8 bands, you can run earth_ondemand.item_assets('landsat8_l1tp'). This function provides information on the available bands, spatial resolution, and band details for each collection in the EarthAI catalog. The band_name field is what should be passed to spark.read.raster. earth_ondemand.item_assets('landsat8_l1tp').sort_values('asset_name') In this example, we will use spark.read.raster to read in B4, B3, and B2, which correspond to the red, green, and blue bands. rf = spark.read.raster(cat, catalog_col_names = ['B4', 'B3', 'B2']) You can select the band names to view tile samples. rf.filter(rf_tile_max('B4') > 0).select('B4', 'B3', 'B2') spark.read.raster breaks imagery scenes up into a gridded set of tiles. By default, the dimensions of each tile will be 256 by 256 pixels, but if you want a different size, then you can pass a Tuple of dimensions, e.g. (512, 512), to the tile_dimensions parameter. You can run ?spark.read.raster in a cell to get more information about the parameters that spark.read.raster can take. Please sign in to leave a comment.
https://docs.astraea.earth/hc/en-us/articles/360051407332-Read-in-Scenes-from-a-Raster-Catalog-into-a-RasterFrame
CC-MAIN-2021-31
refinedweb
318
67.86
zones. For starters, the right answer for production code is almost always to use a proper library rather than rolling your own. The potential difficulties with DateTime calculation discussed in this article are only the tip of the iceberg, but they’re still helpful to know about, with or without a library. a large extent, to meet me at a nearby cafe at 3:00 PM, Uppsala, Sweden that I want to talk to him at 5 PM.+05:45. This means that when it is 5 PM where I live, it is 5 PM - 05:45 = 11:15 AM UTC, which translates to 11:15 AM UTC + 01:00 = 12:15 PM in Uppsala, perfect for both of us. Also, be aware of the difference between time zone (Central European Time) and time zone offset (UTC+05:45). Countries can decide to change their time zone offsets for Daylight Savings Time for political reasons as well. Almost every year there’s a change to the rules in at least one country, meaning any code with these rules baked in must be kept up-to-date—it’s worth considering what your codebase depends on for this for each tier of your app. That’s another good reason we’ll recommend that only the front end deals with time zones in most cases. When it doesn’t, what happens when the rules your database engine uses don’t match those of your front or back store DateTime in UTC. Standardizing the Format Standardizing the time is wonderful because I only need to store the UTC time and as long as I know the time zone of the user, I can always convert to their time. Conversely, if I know a user’s local time and know their time zone, programmers really like because it’s short and precise. We like to call it “ISO date format,” which is a simplified version of the ISO-8601 extended format and it looks like this: Luxon, date-fns, or dayjs. (Whatever you use, avoid the once-popular Moment.js—often simply called moment, as it appears in code—since it’s now deprecated.) But for educational purposes, we will use the methods that the Date() object provides to learn how JavaScript handles DateTime. Getting Current Date const currentDate = new Date(); If you don’t pass anything to the Date constructor, the date object returned contains the current date and time. You can then format it to extract only the date part as follows: const currentDate = new Date(); const currentDayOfMonth = currentDate.getDate(); const currentMonth = currentDate.getMonth(); // Be careful! January is 0, not 1 const currentYear = currentDate.getFullYear(); const dateString = currentDayOfMonth + "-" + (currentMonth + 1) + "-" + currentYear; // "27-11-2020" Note: The “January is 0” pitfall is common but not universal. It’s worth double-checking the documentation of any language (or configuration format: e.g., cron is notably 1-based) before you start using it. Getting the Current Time Stamp If you instead want to get the current time stamp, you can create a new Date object and use the getTime() method. const currentDate = new Date(); const timestamp = currentDate.getTime(); In JavaScript, a time stamp is the number of milliseconds that have passed since January 1, 1970. If you don’t intend to support <IE8, you can use Date.now() to directly get the time stamp without having to create a new Date object. Parsing a Date Converting a string to a JavaScript date object is done in different ways. The Date object’s constructor accepts a wide variety of date formats: const date1 = new Date("Wed, 27 July 2016 13:30:00"); const date2 = new Date("Wed, 27 July 2016 07:45:00 UTC"); const date3 = new Date("27 July 2016 13:30:00 UTC+05:45"); Note that you do not need to include the day of week because JS can determine the day of the week for any date. You can also pass in the year, month, day, hours, minutes, and seconds as separate arguments: const date = new Date(2016, 6, 27, 13, 30, 0); Of course, you can always use ISO date format: const date = new Date("2016-07-27T07:45:00Z"); However, you can run into trouble when you do not provide the time zone explicitly! const date1 = new Date("25 July 2016"); const date2 = new Date("July 25, 2016"); Either of these will give you 25 July 2016 00:00:00 local time. If you use the ISO format, even if you give only the date and not the time and time zone, it will automatically accept the time zone as UTC. This means that: new Date("25 July 2016").getTime() !== new Date("2016-07-25").getTime() new Date("2016-07-25").getTime() === new Date("2016-07-25T00:00:00Z").getTime() Formatting a Date Fortunately, modern JavaScript has some convenient internationalization functions built into the standard Intl namespace that make date formatting a straightforward operation. For this we’ll need two objects: a Date and an Intl.DateTimeFormat, initialized with our output preferences. Supposing we’d like to use the American (M/D/YYYY) format, this would look like: const firstValentineOfTheDecade = new Date(2020, 1, 14); // 1 for February const enUSFormatter = new Intl.DateTimeFormat('en-US'); console.log(enUSFormatter.format(firstValentineOfTheDecade)); // 2/14/2020 If instead we wanted the Dutch (D/M/YYYY) format, we would just pass a different culture code to the DateTimeFormat constructor: const nlBEFormatter = new Intl.DateTimeFormat('nl-BE'); console.log(nlBEFormatter.format(firstValentineOfTheDecade)); // 14/2/2020 Or a longer form of the American format, with the month name spelled out: const longEnUSFormatter = new Intl.DateTimeFormat('en-US', { year: 'numeric', month: 'long', day: 'numeric', }); console.log(longEnUSFormatter.format(firstValentineOfTheDecade)); // February 14, 2020 Now, if we wanted a proper ordinal format on the day of the month—that is, “14th” instead of just “14”—this unfortunately needs a bit of a workaround, because day’s only valid values as of this writing are "numeric" or "2-digit". Borrowing Flavio Copes’ version of Mathias Bynens’ code to leverage another part of Intl for this, we can customize the day of the month output via formatToParts(): const pluralRules = new Intl.PluralRules('en-US', { type: 'ordinal' }) const suffixes = { 'one': 'st', 'two': 'nd', 'few': 'rd', 'other': 'th' } const convertToOrdinal = (number) => `${number}${suffixes[pluralRules.select(number)]}` // At this point: // convertToOrdinal("1") === "1st" // convertToOrdinal("2") === "2nd" // etc. const extractValueAndCustomizeDayOfMonth = (part) => { if (part.type === "day") { return convertToOrdinal(part.value); } return part.value; }; console.log( longEnUSFormatter.formatToParts(firstValentineOfTheDecade) .map(extractValueAndCustomizeDayOfMonth) .join("") ); // February 14th, 2020 Unfortunately, formatToParts isn’t supported by Internet Explorer (IE) at all as of this writing, but all other desktop, mobile, and back-end (i.e. Node.js) technologies do have support. For those who need to support IE and absolutely need ordinals, the sidenote below (or better, a proper date library) provides an answer. If you need to support older browsers like IE before version 11, date formatting in JavaScript is tougher because there were no standard date-formatting functions like strftime in Python or PHP. In PHP for example, the function strftime("Today is %b %d %Y %X", mktime(5,10,0,12,30,99)) gives you Today is Dec 30 1999 05:10:00. You can use a different combination of letters preceded by % to get the date in different formats. (Careful, not every language assigns the same meaning to each letter—particularly, 'M' and 'm' may be swapped for minutes and months.) If you are sure of the format you want to use, it is best to extract individual bits using the JavaScript functions we covered above and create a string yourself. var currentDate = new Date(); var date = currentDate.getDate(); var month = currentDate.getMonth(); var year = currentDate.getFullYear(); We can get the date in MM/DD/YYYY format as var monthDateYear = (month+1) + "/" + date + "/" + year; The problem with this solution is that it can give an inconsistent length to the dates because some months and days of the month’s: const myDate = new Date("Jul 21, 2013"); const dayOfMonth = myDate.getDate(); const month = myDate.getMonth(); const year = myDate.getFullYear(); function pad(n) { return n<10 ? '0'+n : n } const ddmmyyyy = pad(dayOfMonth) + "-" + pad(month + 1) + "-" + year; // "21-07-2013" Using JavaScript Date Object’s Localization Functions The date formatting methods we discussed above should work in most applications, but if you really want to localize the formatting of the date, I suggest you use the Date object’s toLocaleDateString() method:—a very useful feature. As shown in the previous section, the newer Intl.DateTimeFormat-based technique works very similarly to this, but lets you reuse a formatter object so that you only need to set options once. With toLocaleDateString(),. Note: If you want the browser to automatically use the user’s locale, you can pass “undefined” as the first parameter. If you want to show the numeric version of the date and don’t want to fuss with MM/DD/YYYY vs. DD/MM/YYYY for different locales, I suggest the following simple solution: const today = new Date().toLocaleDateString(undefined, { day: 'numeric', month: 'numeric', year: 'numeric', }); On my computer, this outputs 7/26/2016. If you want to make sure that month and date have two digits, just change the options: const today = new Date().toLocaleDateString(undefined, { day: '2-digit', month: '2-digit', year: 'numeric', }); This outputs 07/26/2016. Just what we wanted! You can also use some other related functions to localize the way both time and date are displayed: Calculating Relative Dates and Times Here’s an example of adding 20 days to a JavaScript Date (i.e., figuring out the date 20 days after a known date): const myDate = new Date("July 20, 2016 15:00:00"); const nextDayOfMonth = myDate.getDate() + 20; myDate.setDate(nextDayOfMonth); const newDate = myDate.toLocaleString(); The original date object now represents a date 20 days after July 20 and newDate contains a localized string representing that date. On my browser, newDate contains “8/9/2016, 3:00:00 PM”. To calculate relative time stamps with a more precise difference than whole days, you can use Date.getTime() and Date.setTime() to work with integers representing the number of milliseconds since a certain epoch—namely, January 1, 1970. For example, if you want to know when it’s 17 hours after right now: const msSinceEpoch = (new Date()).getTime(); const seventeenHoursLater = new Date(msSinceEpoch + 17 * 60 * 60 * 1000); Comparing Dates As with everything else related to date, comparing dates has its own gotchas. First, we need to create date objects. Fortunately, <, >, <=, and >= all work. So comparing July 19, 2014 and July 18, 2014 is as easy as: const date1 = new Date("July 19, 2014");: const date1 = new Date("June 10, 2003"); const date2 = new Date(date1); const equalOrNot = date1 == date2 ? "equal" : "not equal"; console.log(equalOrNot); This will output not equal. This particular case can be fixed by comparing the integer equivalents of the dates (their time stamps). const userEnteredString = "12/20/1989"; // MM/DD/YYYY format const dateStringFromAPI = "1989-12-20T00:00:00Z"; const dateFromUserEnteredString = new Date(userEnteredString) const dateFromAPIString = new Date(dateStringFromAPI); if (dateFromUserEnteredString.getTime() == dateFromAPIString.getTime()) { transferOneMillionDollarsToUserAccount(); } else { doNothing(); } Both represented the same date but unfortunately your user will not get the million dollars. Here’s the problem: JavaScript always assumes the time zone time stamp. It’s not possible to change just the time zone of an existing date object, so our target is now to create a new date object but with UTC instead of local time zone. We will ignore the user’s time zone. const userEnteredDate = "12/20/1989"; const parts = userEnteredDate.split("/"); const userEnteredDateISO = parts[2] + "-" + parts[0] + "-" + parts[1]; const userEnteredDateObj = new Date(userEnteredDateISO + "T00:00:00Z"); const dateFromAPI = new Date("1989-12-20T00:00:00Z"); const result = userEnteredDateObj.getTime() == dateFromAPI.getTime(); // true This also works if you don’t specify the time since that will default to midnight (i.e., 00:00:00Z): const userEnteredDate = new Date("1989-12-20"); const dateFromAPI = new Date("1989-12-20T00:00:00Z"); const result = userEnteredDate.getTime() == dateFromAPI.getTime(); // true Remember: If the date constructor is passed a string in correct ISO date format of YYYY-MM-DD, it assumes UTC automatically. - JavaScript provides a neat Date.UTC() function that you can use to get the UTC time stamp of a date. We extract the components from the date and pass them to the function. const userEnteredDate = new Date("12/20/1989"); const userEnteredDateTimeStamp = Date.UTC(userEnteredDate.getFullYear(), userEnteredDate.getMonth(), userEnteredDate.getDate(), 0, 0, 0); const dateFromAPI = new Date("1989-12-20T00:00:00Z"); const result = userEnteredDateTimeStamp == dateFromAPI.getTime(); // true ... Finding the Difference Between Two Dates A common scenario you will come across is to find the difference between two dates. We discuss two use cases: Finding the Number of Days Between Two Dates Convert both dates to UTC time stamp, find the difference in milliseconds and find the equivalent days. const dateFromAPI = "2016-02-10T00:00:00Z"; const now = new Date(); const datefromAPITimeStamp = (new Date(dateFromAPI)).getTime(); const nowTimeStamp = now.getTime(); const microSecondsDiff = Math.abs(datefromAPITimeStamp - nowTimeStamp); // Math.round is used instead of Math.floor to account for certain DST cases // Number of milliseconds per day = // 24 hrs/day * 60 minutes/hour * 60 seconds/minute * 1000 ms/second const daysDiff = Math.round(microSecondsDiff / (1000 * 60 * 60 * 24)); console.log(daysDiff); Finding User’s Age from Their Date of Birth const birthDateFromAPI = "12/10/1989"; Note: We have a non-standard format. Read the API doc to determine if this means 12 Oct or 10 Dec. Change to ISO format accordingly. const parts = birthDateFromAPI.split("/"); const birthDateISO = parts[2] + "-" + parts[0] + "-" + parts[1]; const birthDate = new Date(birthDateISO); const today = new Date();: const dateFromPicker = "2012-10-12"; const timeFromPicker = "12:30"; const dateParts = dateFromPicker.split("-"); const timeParts = timeFromPicker.split(":"); const localDate = new Date(dateParts[0], dateParts[1]-1, dateParts[2], timeParts[0], timeParts[1]); Try to avoid creating a date from a string unless it is in ISO date format. Use the Date(year, month, date, hours, minutes, seconds, microseconds) method instead. Getting Only the Date If you are getting only the date, a user’s birthdate for instance, it is best to convert the format to valid ISO date format to eliminate any time zone information that can cause the date to shift forward or backward when converted to UTC. For example: const dateFromPicker = "12/20/2012"; const dateParts = dateFromPicker.split("/"); const ISODate = dateParts[2] + "-" + dateParts[0] + "-" + dateParts[1]; const birthDate = new Date(ISODate).toISOString(); In case you forgot, if you create a Date object with the input in valid ISO date format (YYYY-MM-DD), it will default to UTC instead of defaulting to the browser’s time zone. Storing the Date Always store the DateTime in UTC. Always send an ISO date string or a time stamp to the back end. Generations of computer programmers have realized this simple truth after bitter experiences trying to show the correct local time to the user. Storing the local time in the back end is a bad idea, it’s better to let the browser handle the conversion to local time in the front end. Also, it should be apparent that you should never send a DateTime string like “July 20, 1989 12:10 PM” to the back end. Even if you send the time zone as well, you are increasing the effort for other programmers to understand your intentions and parse and store the date correctly. Use the toISOString() or toJSON() methods of the Date object to convert the local DateTime to UTC. const dateFromUI = "12-13-2012"; const timeFromUI = "10:20"; const dateParts = dateFromUI.split("-"); const timeParts = timeFromUI.split(":"); const date = new Date(dateParts[2], dateParts[0]-1, dateParts[1], timeParts[0], timeParts[1]); const dateISO = date.toISOString(); $.post("", {date: dateISO}, ...) Displaying the Date and Time - Get the time stamp or the ISO formatted date from a REST API. - Create a Dateobject. - Use the toLocaleString()or toLocaleDateString()and toLocaleTimeString()methods or a date library to display the local time. const dateFromAPI = "2016-01-02T12:30:00Z"; const localDate = new Date(dateFromAPI); const localDateString = localDate.toLocaleDateString(undefined, { day: 'numeric', month: 'short', year: 'numeric', }); situation, it would be wiser to save the local time as well. As usual, we would like to create the date in ISO format, but we have to find the time zone zone offset. const now = new Date(); const tz = now.gettime zoneOffset(); For my time zone +05:45, I get -345, this is not only the opposite sign, but a number like -345 might be completely perplexing to a back-end developer. So we convert this to +05:45. const sign = tz > 0 ? "-" : "+"; const hours = pad(Math.floor(Math.abs(tz)/60)); const minutes = pad(Math.abs(tz)%60); const tzOffset = sign + hours + ":" + minutes; Now we get the rest of the values and create a valid ISO string that represents the local DateTime. const localDateTime = now.getFullYear() + "-" + pad(now.getMonth()+1) + "-" + pad(now.getDate()) + "T" + pad(now.getHours()) + ":" + pad(now.getMinutes()) + ":" + pad(now.getSeconds()); If you want, you can wrap the UTC and local dates in an object. const eventDate = { utc: now.toISOString(), local: localDateTime, tzOffset: tzOffset, } Now, in the back end, if you wanted to find out if the event occurred before noon local time, you can parse the date and simply use the getHours() function. const localDateString = eventDate.local; zone. Sometimes, even with the local time zone stored, you’ll want to display dates in a particular time zone. For example, times for events might make more sense in the current user’s time zone if they’re virtual, or in the time zone where they will physically take place, if they’re not. In any case, it’s worth looking beforehand at established solutions for formatting with explicit time zone names. Server and Database Configuration Always configure your servers and databases to use UTC time zone. (Note that UTC and GMT are not the same thing—GMT, for example, might imply a switch to BST during the summer, whereas UTC never will.) We have already seen how much of a pain time zone conversions can be, especially when they are unintended. Always sending UTC DateTime and configuring your servers to be in UTC time zone can make your life easier. Your back-end code will be much simpler and cleaner as it doesn’t have to do any time zone conversions. DateTime data coming in from servers across the world can be compared and sorted effortlessly. Code in the back end should be able to assume the time zone of the server to be UTC (but should still have a check in place to be sure). A simple configuration check saves having to think about and code for conversions every time new DateTime code is written. It’s Time for Better Date Handling Date manipulation is a hard problem. The concepts behind the practical examples in this article apply beyond JavaScript, and are just the beginning when it comes to properly handling DateTime data and calculations. Plus, every helper library will come with its own set of nuances—which is even true of the eventual official standard support{target=”_blank”} for these types of operations. The bottom line is: Use ISO on the back end, and leave the front end to format things properly for the user. Professional programmers will be aware of some of the nuances, and will (all the more decidedly) use well-supported DateTime libraries on both the back end and the front end. Built-in functions on the database side are another story, but hopefully this article gives enough background to make wiser decisions in that context, too.
https://www.toptal.com/software/definitive-guide-to-datetime-manipulation
CC-MAIN-2021-31
refinedweb
3,283
54.73
If I run the following program, it never prints "done". If I uncomment the commented line, it does. import Prelude hiding (catch) import Control.Exception import System.Process import System.IO demo = do putStrLn "starting" (inp,out,err,pid) <- runInteractiveCommand "nonesuchcommand" putStrLn "writing to in on bad command" hPutStr inp "whatever" -- putStr "flushing" hFlush inp `catch` \e -> do print e; return () putStrLn "done" main = demo `catch` \e -> do print e; return () On my machine the output is: $ runhaskell test.hs starting writing to in on bad command $ It appears to exit at the hFlush call. (hClose has the same behavior.) I find this surprising -- I'd expect, even if I'm using an API incorrectly, to get an exception.
http://www.haskell.org/pipermail/haskell-cafe/2008-March/040129.html
CC-MAIN-2014-35
refinedweb
119
59.09
recently my mom has become obsessed with "word jumbles" in our local paper, which are scrambled letters that you have to unscramble to form a word. for example, the correct answer to "btior" is "orbit." sometimes she gets to one she can't solve though, (and annoys the family with requests for help) and so I decided to write a program to just do them automatically. I'm planning to compare the permutations that it generates to a word list, but for now, I'm just printing out all the possible combinations and letting the user sift through them. someone suggested setting up a hash table with the word list stored in it. I don't know anything about hash tables, however, so this would be a good learning opportunity. from what I have looked up online, I should make an array of linked lists to prevent collisions. here is where I get confused though. how could I dynamically create linked lists? what I mean is, you obviously can't do something like I think I know (knew) some method on how to do something like this, but it's hazy now.I think I know (knew) some method on how to do something like this, but it's hazy now.Code:for (int X = 1; X < 20; X++) { list<string> listX; } another question: will I have to create a linked list class from scratch? I've previously used linked lists, but they were the STL kind. I've never actually created my own linked list structure. also, if anyone has any good sites for introductions to hash tables, or any personal advice, I'd be appriciative. finally, here's my current code: Code:#include <iostream> #include <string> #include <algorithm> using namespace std; int main() { string jumbled; int i=0; int loop=0; int garbage = 1; cout << "enter your string (no capitals)"<<endl; cin >> jumbled; sort(jumbled.begin(), jumbled.end()); cout<<endl; cout<< "all possible permutations of your string:"<<endl; do { for (i = 0; i < jumbled.size(); i++) cout << jumbled[i]; loop++; cout << "\n"; if (loop%20 == 0) { cout << "press 0 to quit, any other number to continue"<<endl; cin >> garbage; } if (garbage == 0) return 0; } while(next_permutation(jumbled.begin(), jumbled.end())); cout <<endl << "to exit, enter any number."<<endl; cin >> garbage; return 0; }
http://cboard.cprogramming.com/cplusplus-programming/28202-help-beginning-hash-tables-please.html
CC-MAIN-2016-07
refinedweb
384
70.53
Opened 6 months ago Last modified 6 months ago #15598 new bug RebindableSyntax with RankNTypes and type class method call yields panic. Description (last modified by ) The following program in a file ghc-panic.hs {-# LANGUAGE GADTSyntax , RankNTypes , RebindableSyntax #-} import Prelude hiding ((>>=)) data InfDo where InfDo :: String -> (forall a. a -> InfDo) -> InfDo prog :: InfDo prog = do _ <- show (42 :: Int) prog where (>>=) = InfDo main :: IO () main = let x = prog in x `seq` return () when loaded into GHCi yields λ> main ghc: panic! (the 'impossible' happened) (GHC version 8.4.3 for x86_64-unknown-linux): nameModule system $dShow_abfY Call stack: CallStack (from HasCallStack): callStackDoc, called at compiler/utils/Outputable.hs:1150:37 in ghc:Outputable pprPanic, called at compiler/basicTypes/Name.hs:241:3 in ghc:Name and separate compilation yields $ ghc ghc-panic.hs [1 of 1] Compiling Main ( ghc-panic.hs, ghc-panic.o ) ghc: panic! (the 'impossible' happened) (GHC version 8.4.3 for x86_64-unknown-linux): StgCmmEnv: variable not found $dShow_a1qI local binds for: $tcInfDo $trModule $tcInfDo1_r1th $tcInfDo2_r1tH $trModule1_r1tI $trModule2_r1tJ $trModule3_r1tK $trModule4_r1tL sat_s1E4 Call stack: CallStack (from HasCallStack): callStackDoc, called at compiler/utils/Outputable.hs:1150:37 in ghc:Outputable pprPanic, called at compiler/codeGen/StgCmmEnv.hs:149:9 in ghc:StgCmmEnv The problem disappears when either the rank-2 type is removed from InfDo or when the call to show is replaced by a static string. Besides 8.4.3, also reproduced with 8.6.0.20180714. I believe it is somewhat related to Change History (4) comment:1 Changed 6 months ago by comment:2 Changed 6 months ago by comment:3 Changed 6 months ago by comment:4 Changed 6 months ago by Note: See TracTickets for help on using tickets. I think you are right in connecting it with #14963. But your example here is nothing to do with GHCi, nor with -fdefer-type-errors, so it's helpful. Lint complains immediately This is pretty bad. It's all in tcSyntaxOp, I believe.
https://ghc.haskell.org/trac/ghc/ticket/15598
CC-MAIN-2019-09
refinedweb
330
57.47
Details - Type: Improvement - Status: Closed - Priority: Minor - Resolution: Fixed - Affects Version/s: None - - Component/s: modules/analysis - Labels:None Description Activity - All - Work Log - History - Activity - Transitions I also added, in SinkTokenizer, to override the close() method on Tokenizer so that it doesn't try to close the Reader, which throws an NullPointerEx. Why not? Seems more flexible, and this is an expert level API. Then we should document that they must call reset before calling next(), right? Same could go for the add() method. And, of course, if we are calling add() outside of the Tee process, we probably don't need to clone the token, either. >. In looking again, I also wonder whether the getTokens() shouldn't return an immutable list? Or should we allow adding tokens outside of the Tee process? I see. Then we should set iter=null in add() in case after reset() more tokens are added to the list, right? > In SinkTokenizer you could initalize the iterator in the constructor Iterators are generally fail-fast, hence it would throw an exception when you tried to use it later after adding some elements. Sorry, this review is a bit late. Only a simple remark: In SinkTokenizer you could initalize the iterator in the constructor, then you can avoid the check if (iter==null) in next()? Committed revision 599478. Tee it is. And here I just thought you liked golf! I guess I have never used the tee command in UNIX. The SinkTokenizer name could make sense, but I think TeeTokenFilter makes more sense than SourceTokenFilter (it is a tee, it splits a single token stream into two, just like the UNIX tee command). Whew. I think we are there and I like it! I renamed Yonik's suggestions to be SinkTokenizer and SourceTokenFilter to model the whole source/sink notion. Hopefully people won't think the SourceTokenFilter is for processing code. I will commit tomorrow if there are no objections. Will do. Patch to follow shortly I'm quite busy currently with other stuff. Feel free to go ahead OK, looks good to me and is much simpler. Only thing that gets complicated is the constructors, but that should be manageable. Thanks for bearing w/ me One of you want to whip up a patch w/ tests or do you want me to do it? I like the TeeTokenFilter! +1 I think having the "tee" solves the many-to-many case... you can have many fields contribute tokens to a new field. ListTokenizer sink1 = new ListTokenizer(null); ListTokenizer sink2 = new ListTokenizer(null); TokenStream source1 = new TeeTokenFilter(new TeeTokenFilter(new WhitespaceTokenizer(reader1), sink1), sink2); TokenStream source2 = new TeeTokenFilter(new TeeTokenFilter(new WhitespaceTokenizer(reader2), sink1), sink2); // now sink1 and sink2 will both get tokens from both reader1 and reader2 after whitespace tokenizer // now we can further wrap any of these in extra analysis, and more "tees" can be inserted if desired.)); Very similar to what I came up with I think... (all untested, etc) class ListTokenizer extends Tokenizer { protected List<Token> lst = new ArrayList<Token>(); protected Iterator<Token> iter; public ListTokenizer(List<Token> input) { this.lst = input; if (this.lst==null) this.lst = new ArrayList<Token>(); } /** only valid if tokens have not been consumed, * i.e. if this tokenizer is not part of another tokenstream */ public List<Token> getTokens() { return lst; } public Token next(Token result) throws IOException { if (iter==null) iter = lst.iterator(); return iter.next(); } /** Override this method to cache only certain tokens, or new tokens based * on the old tokens. */ public void add(Token t) { if (t==null) return; lst.add((Token)t.clone()); } public void reset() throws IOException { iter = lst.iterator(); } } class TeeTokenFilter extends TokenFilter { ListTokenizer sink; protected TeeTokenFilter(TokenStream input, ListTokenizer sink) { super(input); this.sink = sink; } public Token next(Token result) throws IOException { Token t = input.next(result); sink.add(t); return t; } } We need to change the CachingTokenFilter a bit (untested code): public class CachingTokenFilter extends TokenFilter { private List cache; private Iterator iterator; public CachingTokenFilter(TokenStream input) { super(input); this.cache = new LinkedList(); } public Token next() throws IOException { if (iterator != null) { if (!iterator.hasNext()) { // the cache is exhausted, return null return null; } return (Token) iterator.next(); } else { Token token = input.next(); addTokenToCache(token); return token; } } public void reset() throws IOException { if(cache != null) { iterator = cache.iterator(); } } protected void addTokenToCache(Token token) { if (token != null) { cache.add(token); } } } Then you can implement the ProperNounTF: class ProperNounTF extends CachingTokenFilter { protected void addTokenToCache(Token token) { if (token != null && isProperNoun(token)) { cache.add(token); } } private boolean isProperNoun() {...} } And then you add everything to Document: Document d = new Document(); TokenStream properNounTf = new ProperNounTF(new StandardTokenizer(reader)); TokenStream stdTf = new CachingTokenFilter(new StopTokenFilter(properNounTf)); TokenStrean lowerCaseTf = new LowerCaseTokenFilter(stdTf); d.add(new Field("std", stdTf)); d.add(new Field("nouns", properNounTf)); d.add(new Field("lowerCase", lowerCaseTf)); Again, this is untested, but I believe should work? OK, I am trying not be fixated on the Analyzer. I guess I haven't fully synthesized the new TokenStream use in DocsWriter I agree, I don't like the no-value Field, and am open to suggestions. So, I guess I am going to push back and ask, how would you solve the case of where you have two fields and the Analysis given by: source field: StandardTokenizer Proper Noun TF LowerCaseTF StopTF buffered1 Field: Proper Noun Cache TF (cache of all terms found to be proper nouns by the Proper Noun TF) buffered2 Field: All terms lower cased And the requirement is that you only do the Analysis phase once (i.e. for the source field) and the other two fields are from memory. I am just not seeing it yet, so I appreciate the explanation as it will better cement my understanding of the new Token Stream stuff and DocsWriter I think the ideas here make sense, e. g. to have a buffering TokenFilter that doesn't buffer all tokens but enables the user to control which tokens to buffer. What is still not clear to me is why we have to introduce a new API for this and a new kind of analyzer? To allow creating an no-value field seems strange. Can't we achieve all this by using the Field(String, TokenStream) API without the analyzer indirection? The javadocs should make clear that the IndexWriter processes fields in the same order the user added them. So if a user adds TokenStream ts1 and thereafter ts2, they can be sure that ts1 is processed first. With that knowledge ts1 can buffer certain tokens that ts2 uses then. Adding even more fields that use the same tokens is straightforward. I dunno... it feels like we should have the right generic solution (many-to-many) before committing anything in this case, simply because this is all user-level code (the absence of this patch doesn't prohibit the user from doing anything... no package protected access rights are needed, etc). Any objection to me committing the CachedAnalyzer and CachedTokenizer pieces of this patch, as I don't think they are effected by the other parts of this and they solve the pre-analysis portion of this discussion. In the meantime, I will think some more about the generic field case, as I do think it is useful. I am also trying out some basic benchmarking on this. Things like entity extraction are normally not done by lucene analyzers AFAIK Consider yourself in the "know" now, as I have done this on a few occasions, but, yes, I do agree a one to many approach is probably better if it can be done in a generic way. To some extent, I was thinking that this could help optimize Solr's copyField mechanism. Maybe... it would take quite a bit of work to automate it though I think. As far as pre-analysis costs. iteration is pretty much free in comparison to everything else. Memory is the big factor. Things like entity extraction are normally not done by lucene analyzers AFAIK... but if one wanted a framework to do that, the problem is more generic. Your really want to be able to add to multiple fields from multiple other fields. What if they wanted 3 fields instead of two? True. I'll have to think about a more generic approach. In some sense, I think 2 is often sufficient, but you are right it isn't totally generic in the spirit of Lucene. To some extent, I was thinking that this could help optimize Solr's copyField mechanism. In Solr's case, I think you often have copy fields that have marginal differences in the filters that are applied. It would be useful for Solr to be able to optimize these so that it doesn't have to go through the whole analysis chain again. Isn't this what your current code does? No, in my main use case (# of buffered tokens is << # of source tokens) the only tokens kept around is the (much) smaller subset of buffered tokens. In the pre-analysis approach you have to keep the source field tokens and the buffered tokens. Not to mention that you are increasing the work by having to iterate over the cached tokens in the list in Lucene. Thus, you have the cost of the analysis in your application plus the storage of both token lists (one large, one small, likely) then in Lucene you have the cost of iterating over two lists. In my approach, I think, you have the cost of analysis plus the cost of storage of one list of tokens (small) and the cost of iterating that list. As for the convoluted cross-field logic, I don't think it is all that convoluted. But it's baked into CollaboratingAnalyzer... it seems like this is better left to the user. What if they wanted 3 fields instead of two? I do agree somewhat about the pre-analysis approach, except for the case where there may be a large number of tokens in the source field, in which case, you are holding them around in memory Isn't this what your current code does? Maybe I'm missing something? No, I don't think you are missing anything in that use case, it's just an example of its use. And I am not totally sold on this approach, but mostly am I had originally considered your option, but didn't feel it was satisfactory for the case where you are extracting things like proper nouns or maybe it is generating a category value. The more general case is where not all the tokens are needed (in fact, very few are). In those cases, you have to go back through the whole list of cached tokens in order to extract the ones you want. In fact, thinking some more of on it, I am not sure my patch goes far enough in the sense that what if you want it to buffer in mid stream. For example, if you had: StandardTokenizer Proper Noun TF LowerCaseTF StopTF and Proper Noun TF is solely responsible for setting aside proper nouns as it comes across them in the stream. As for the convoluted cross-field logic, I don't think it is all that convoluted. There are only two fields and the implementing Analyzer takes care of all of it. Only real requirement the application has is that the fields be ordered correctly. I do agree somewhat about the pre-analysis approach, except for the case where there may be a large number of tokens in the source field, in which case, you are holding them around in memory (maxFieldLength mitigates to some extent.) Also, it puts the onus on the app. writer to do it, when it could be pretty straight forward for Lucene to do it w/o it's usual analysis pipeline. At any rate, separate of the CollaboratingAnalyzer, I do think the CachedTokenFilter is useful, especially in supporting the pre-analysis approach. Maybe I'm not looking at it the right way yet, but I'm not sure this feels "right"... Since Field has a tokenStreamValue(), wouldn't it be easiest to just use that? If the tokens of two fields are related, one could just pre-analyze those fields and set the token streams appropriately. Seems more flexible and keeps any convoluted cross-field logic in the application domain. Grant, I'm not sure why we need this patch. For the testcase that you're describing: For example, if you want to have two fields, one lowercased and one not, but all the other analysis is the same, then you could save off the tokens to be output for a different field. can't you simply do something like this: Document d = new Document(); TokenStream t1 = new CachingTokenFilter(new WhitespaceTokenizer(reader)); TokenStream t2 = new LowerCaseFilter(t1); d.add(new Field("f1", t1)); d.add(new Field("f2", t2)); Maybe I'm missing something? fixed a failing test Added some more documentation, plus a test showing it is bad to use the no value Field constructor w/o support from the Analyzer to produce tokens. If no objections, I will commit on Thursday or Friday of this week. A new version of this with the following changes/additions: DocumentsWriter no longer requires that a Field have a value (i.e. stringValue, etc.) Added a new Field constructor that allows for the construction of a Field without a value. This would allow for Analyzer implementations that produce their own tokens (whatever that means) Moved CollaboratingAnalyzer, et. al to the core under analysis.buffered as I thought these items should be in core given the changes to Field and DocsWriter. Note, I think this is a subtle, but important change in DocumentsWriter/Field behavior. Here's a patch that modifies the DocumentsWriter to not throw an IllegalArgumentException if no Reader is specified. Thus, an Analyzer needs to be able to handle a null Reader (this still needs to be documented). Basically, the semantics of it are that the Analyzer is producing Tokens from some other means. I probably should spell this out in a new Field constructor as well, but this should suffice for now, and I will revisit it after the break. I also added in a TestCollaboratingAnalyzer. All tests pass. Some javadoc comments for the modifyToken method in BufferingTokenFilter should be sufficient, right? Something to the effect that if this TokenFilter is not the last in the chain that it should make a full copy. As for the CachedTokenizer and CachedAnalyzer, those should be implied, since the user is passing them in to begin with. The other thing of interest, is that calling Analyzer.tokenStream(String, Reader) is not needed. In fact, this somewhat suggests having a new Fieldable property akin to tokenStreamValue(), etc. that says don't even ask the Fieldable for a value. Let me take a crack at what that means and post a patch. It will mean some changes to invertField() in DocumentsWriter and possibly changing it to not require that one of tokenStreamValue, readerValue() or stringValue() be defined. Not sure if that is a good idea or not. I think the discussion in LUCENE-1063 is relevant to this issue: if you store (& re-use) Tokens you may need to return a copy of the Token from the next() method to ensure that nay filters that alter the Token don't mess up your private copy. First draft at a patch, provides two different approaches: 1. CachedAnalyzer and CachedTokenizer take in a list of Tokens and output them as appropriate. Similar to CachingTokenFilter, but assumes you already have the Tokens 2. In contrib/analyzers/buffered, add CollaboratingAnalyzer and related classes for creating a Analyzer, etc. that work in the stream. Still not sure if and how this plays with the Token reuse (I think it doesn't) Changed to override next() instead of next(Token)
https://issues.apache.org/jira/browse/LUCENE-1058?focusedCommentId=12546271&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2015-27
refinedweb
2,655
63.19
Manage exceptions with the debugger in Visual Studio An exception is an indication of an error state that occurs while a program is being executed. You can tell the debugger which exceptions or sets of exceptions to break on, and at which point you want the debugger to break (that is, pause in the debugger). When the debugger breaks, it shows you where the exception was thrown. You can also add or delete exceptions. With a solution open in Visual Studio, use Debug > Windows > Exception Settings to open the Exception Settings window. Provide handlers that respond to the most important exceptions. If you need to know how to add handlers for exceptions, see Fix bugs by writing better C# code. Also, learn how to configure the debugger to always break execution for some exceptions. When an exception occurs, the debugger writes an exception message to the Output window. It may break execution in the following cases when: - An exception is thrown that isn't handled. - The debugger is configured to break execution before any handler is invoked. - You have set Just My Code, and the debugger is configured to break on any exception that isn't handled in user code. Note ASP.NET has a top-level exception handler that shows error pages in a browser. It doesn't break execution unless Just My Code is turned on. For an example, see Tell, so you may examine the exception before a handler is invoked. In the Exception Settings window (Debug > Windows > Exception Settings), expand the node for a category of exceptions, such as Common Language Runtime Exceptions. Then select the check box for a specific exception within that category, such as System.AccessViolationException. You can also select an entire category of exceptions. Tip You can find specific exceptions by using the Search window in the Exception Settings toolbar, or use search to filter for specific namespaces (such as System.IO). If you select an exception in the Exception Settings window, debugger execution will break wherever the exception is thrown, no matter whether it's handled., execution will break on the throwline when you run this code in the debugger. You can then continue execution. The console should display both lines: caught exception goodbye but it doesn't display the hereline. A C# console application references a class library with a class that has two methods. One method throws an exception and handles it, while a second method throws the same exception, execution will break on the throwline in both ThrowHandledException() and ThrowUnhandledException() when you run this code in the debugger. To restore the exception settings to the defaults, choose the Restore the list to the default settings button: Tell the debugger to continue on user-unhandled exceptions If you are debugging .NET or JavaScript code with Just My Code, you can tell the debugger to prevent breaking on exceptions that aren't handled in user code but are handled elsewhere. In the Exception Settings window, open the shortcut menu by right-clicking a column label, and then select Show Columns > Additional Actions. (If you've turned off Just My Code, you won't see this command.) A third column named Additional Actions appears. For an exception that shows Continue when unhandled in user code in this column, the debugger continues if that exception isn't handled in user code but is handled externally. To change this setting for a particular exception, select the exception, right-click to show the shortcut menu, and select Continue When Unhandled in User Code. You may also change the setting for an entire category of exceptions, such as the entire Common Language Runtime exceptions). For example, ASP.NET web applications handle exceptions by converting them to an HTTP 500 status code (Exception handling in ASP.NET Web API), which may not help you determine the source of the exception. In the example below, the user code makes a call to String.Format() that throws a FormatException. Execution breaks as follows: Add and delete exceptions You can add and delete exceptions. To delete an exception type from a category, select the exception, and choose the Delete the selected exception from the list button (the minus sign) on the Exception Settings toolbar. Or you may right-click the exception and select Delete from the shortcut menu. Deleting an exception has the same effect as having the exception unchecked, which is that the debugger won't break when it's thrown. To add an exception: In the Exception Settings window, select one of the exception categories (for example, Common Language Runtime). Choose the Add an exception to the selected category button (the plus sign). Type the name of the exception (for example, System.UriTemplateMatchException). The exception is added to the list (in alphabetical order) and automatically checked. To add an exception to the GPU Memory Access Exceptions, JavaScript Runtime Exceptions, or Win32 Exceptions categories, include the error code. Now only added exceptions are persisted; deleted exceptions aren't. You may've created an exception like this code: public class GenericException<T> : Exception { public GenericException() : base("This is a generic exception.") { } } You can add the exception to Exception Settings using the previous procedure: Add conditions to an exception Use the Exception Settings window to set conditions on exceptions. Currently supported conditions include the module name(s) to include or exclude for the exception. By setting module names as conditions, you can choose to break for the exception only on certain code modules. You may also choose to avoid breaking on particular modules. Note Adding conditions to an exception is supported starting in Visual Studio 2017. To add conditional exceptions: Choose the Edit conditions button in the Exception Settings window, or right-click the exception and choose Edit Conditions. To add extra required conditions to the exception, select Add Condition for each new condition. Additional condition lines appear. For each condition line, type the name of the module, and change the comparison operator list to Equals or Not Equals. You may specify wildcards (\*) in the name to specify more than one module. If you need to delete a condition, choose the X at the end of the condition line.
https://docs.microsoft.com/en-us/visualstudio/debugger/managing-exceptions-with-the-debugger?view=vs-2019
CC-MAIN-2020-45
refinedweb
1,029
54.52
Often. <several sections omitted>. For this demonstration, we use the same debug code snippet as shown in Listing 14-4. (Parts of this is reproduced in Listing 14-9, below.) However, for this example, we have compiled the kernel with the compiler optimization flag -O2. This is the default for the Linux kernel. Listing 14-7 shows the results of this debugging session. Listing 14-7 Optimized Architecture-Setup Code $ ppc_44x-gdb --silent vmlinux (gdb) target remote /dev/ttyS0 Remote debugging using /dev/ttyS0 breakinst () at arch/ppc/kernel/ppc-stub.c:825 825 } (gdb) b panic Breakpoint 1 at 0xc0016b18: file kernel/panic.c, line 74. (gdb) b sys_sync Breakpoint 2 at 0xc005a8c8: file fs/buffer.c, line 296. (gdb) b yosemite_setup_arch Breakpoint 3 at 0xc020f438: file arch/ppc/platforms/4xx/yosemite.c, line 116. (gdb) c Continuing. Breakpoint 3, yosemite_setup_arch () at arch/ppc/platforms/4xx/yosemite.c:116 116 def = ocp_get_one_device(OCP_VENDOR_IBM, OCP_FUNC_EMAC, 0); (gdb) l (gdb) p yosemite_setup_arch $1 = {void (void)} 0xc020f41c <yosemite_setup_arch> Referring back to Listing 14-4, (this can also be seen in Listing 14-9 below) notice that the function yosemite_setup_arch() actually falls on line 306 of the file yosemite.c. Compare that with Listing 14-7. We hit the breakpoint, but gdb reports the breakpoint at file yosemite.c line 116. It appears at first glance to be a mismatch of line numbers between the debugger and the corresponding source code. Is this a bug? First let's confirm what the compiler produced for debug information. Using the readelf5 tool described in Chapter 13, "Development Tools", we can examine the debug information for this function produced by the compiler. $ ppc_44x-readelf --debug-dump=info vmlinux | grep -u6 \ yosemite_setup_arch | tail -n 7 DW_AT_name : (indirect string, offset: 0x9c04): yosemite_setup_arch DW_AT_decl_file : 1 DW_AT_decl_line : 307 DW_AT_prototyped : 1 DW_AT_low_pc : 0xc020f41c DW_AT_high_pc : 0xc020f794 DW_AT_frame_base : 1 byte block: 51 (DW_OP_reg1) We don't have to be experts at reading DWARF2 debug records6 to recognize that the function in question is reported at line 307 in our source file. We can confirm this using the addr2line utility, also introduced in Chapter 13. Using the address derived from gdb in Listing 14-7: $ ppc_44x-addr2line -e vmlinux 0xc020f41c arch/ppc/platforms/4xx/yosemite.c:307 At this point, gdb is reporting our breakpoint at line 116 of the yosemite.c file. To understand what is happening, we need to look at the assembler output of the function as reported by gdb. Listing 14-8 is the output from gdb after issuing the disassemble command on the yosemite_setup_arch() function. Listing 14-8 Disassemble Function yosemite_setup_arch (gdb) disassemble yosemite_setup_arch 0xc020f41c <yosemite_setup_arch+0>: mflr r0 0xc020f420 <yosemite_setup_arch+4>: stwu r1,-48(r1) 0xc020f424 <yosemite_setup_arch+8>: li r4,512 0xc020f428 <yosemite_setup_arch+12>: li r5,0 0xc020f42c <yosemite_setup_arch+16>: li r3,4116 0xc020f430 <yosemite_setup_arch+20>: stmw r25,20(r1) 0xc020f434 <yosemite_setup_arch+24>: stw r0,52(r1) 0xc020f438 <yosemite_setup_arch+28>: bl 0xc000d344 <ocp_get_one_device> 0xc020f43c <yosemite_setup_arch+32>: lwz r31,32(r3) 0xc020f440 <yosemite_setup_arch+36>: lis r4,-16350 0xc020f444 <yosemite_setup_arch+40>: li r28,2 0xc020f448 <yosemite_setup_arch+44>: addi r4,r4,21460 0xc020f44c <yosemite_setup_arch+48>: li r5,6 0xc020f450 <yosemite_setup_arch+52>: lis r29,-16350 0xc020f454 <yosemite_setup_arch+56>: addi r3,r31,48 0xc020f458 <yosemite_setup_arch+60>: lis r25,-16350 0xc020f45c <yosemite_setup_arch+64>: bl 0xc000c708 <memcpy> 0xc020f460 <yosemite_setup_arch+68>: stw r28,44(r31) 0xc020f464 <yosemite_setup_arch+72>: li r4,512 0xc020f468 <yosemite_setup_arch+76>: li r5,1 0xc020f46c <yosemite_setup_arch+80>: li r3,4116 0xc020f470 <yosemite_setup_arch+84>: addi r26,r25,15104 0xc020f474 <yosemite_setup_arch+88>: bl 0xc000d344 <ocp_get_one_device> 0xc020f478 <yosemite_setup_arch+92>: lis r4,-16350 0xc020f47c <yosemite_setup_arch+96>: lwz r31,32(r3) 0xc020f480 <yosemite_setup_arch+100>: addi r4,r4,21534 0xc020f484 <yosemite_setup_arch+104>: li r5,6 0xc020f488 <yosemite_setup_arch+108>: addi r3,r31,48 0xc020f48c <yosemite_setup_arch+112>: bl 0xc000c708 <memcpy> 0xc020f490 <yosemite_setup_arch+116>: lis r4,1017 0xc020f494 <yosemite_setup_arch+120>: lis r5,168 0xc020f498 <yosemite_setup_arch+124>: stw r28,44(r31) 0xc020f49c <yosemite_setup_arch+128>: ori r4,r4,16554 0xc020f4a0 <yosemite_setup_arch+132>: ori r5,r5,49152 0xc020f4a4 <yosemite_setup_arch+136>: addi r3,r29,-15380 0xc020f4a8 <yosemite_setup_arch+140>: addi r29,r29,-15380 0xc020f4ac <yosemite_setup_arch+144>: bl 0xc020e338 <ibm440gx_get_clocks> 0xc020f4b0 <yosemite_setup_arch+148>: li r0,0 0xc020f4b4 <yosemite_setup_arch+152>: lis r11,-16352 0xc020f4b8 <yosemite_setup_arch+156>: ori r0,r0,50000 0xc020f4bc <yosemite_setup_arch+160>: lwz r10,12(r29) 0xc020f4c0 <yosemite_setup_arch+164>: lis r9,-16352 0xc020f4c4 <yosemite_setup_arch+168>: stw r0,8068(r11) 0xc020f4c8 <yosemite_setup_arch+172>: lwz r0,84(r26) 0xc020f4cc <yosemite_setup_arch+176>: stw r10,8136(r9) 0xc020f4d0 <yosemite_setup_arch+180>: mtctr r0 0xc020f4d4 <yosemite_setup_arch+184>: bctrl 0xc020f4d8 <yosemite_setup_arch+188>: li r5,64 0xc020f4dc <yosemite_setup_arch+192>: mr r31,r3 0xc020f4e0 <yosemite_setup_arch+196>: lis r4,-4288 0xc020f4e4 <yosemite_setup_arch+200>: li r3,0 0xc020f4e8 <yosemite_setup_arch+204>: bl 0xc000c0f8 <ioremap64> End of assembler dump. (gdb) Once again, we need not be PowerPC assembly language experts to understand what is happening here. Notice the labels associated with the PowerPC bl instruction. This is a function call in PowerPC mnemonics. The symbolic function labels are the important data points. After a cursory analysis, we see several function calls near the start of this assembler listing: Address Function 0xc020f438 ocp_get_one_device() 0xc020f45c memcpy() 0xc020f474 ocp_get_one_device() 0xc020f48c memcpy() 0xc020f4ac ibm440gx_get_clocks() Listing 14-9 reproduces portions of the source file yosemite.c. Correlating the functions we found in the gdb disassemble output, we see those labels occurring in the function yosemite_set_emacdata(), around the line numbers reported by gdb when the breakpoint at yosemite_setup_arch() was encountered. The key to understanding the anomaly is to notice the subroutine call at the very start of yosemite_setup_arch(). The compiler has inlined the call to yosemite_set_emacdata()instead of generating a function call, as would be expected by simple inspection of the source code. This inlining produced the mismatch in the line numbers when gdb hit the breakpoint. Even though the yosemite_set_emacdata() function was not declared using the inline keyword, GCC inlined the function as a performance optimization. Listing 14-9 Portions of Source File yosemite.c 109 static void __init yosemite_set_emacdata(void) 110 { 121 def = ocp_get_one_device(OCP_VENDOR_IBM, OCP_FUNC_EMAC, 1); 122 emacdata = def->additions; 123 memcpy(emacdata->mac_addr, __res.bi_enet1addr, 6); 124 emacdata->phy_mode = PHY_MODE_RMII; 125 } 126 ... 304 305 static void __init 306 yosemite_setup_arch(void) 307 { 308 yosemite_set_emacdata(); 309 310 ibm440gx_get_clocks(&clocks, YOSEMITE_SYSCLK, 6 * 1843200); 311 ocp_sys_info.opb_bus_freq = clocks.opb; 312 313 /* init to some ~sane value until calibrate_delay() runs */ 314 loops_per_jiffy = 50000000/HZ; 315 316 /* Setup PCI host bridge */ 317 yosemite_setup_hose(); 318 319 #ifdef CONFIG_BLK_DEV_INITRD 320 if (initrd_start) 321 ROOT_DEV = Root_RAM0; 322 else 323 #endif 324 #ifdef CONFIG_ROOT_NFS 325 ROOT_DEV = Root_NFS; 326 #else 327 ROOT_DEV = Root_HDA1; 328 #endif 368 Chapter 14 34 min ago 11 hours 36 min ago 16 hours 22 min ago 19 hours 47 min ago 1 day 4 hours ago 1 day 8 hours ago 1 day 8 hours ago 1 day 11 hours ago 1 day 14 hours ago
http://www.linuxjournal.com/article/9252
CC-MAIN-2013-48
refinedweb
1,117
50.46
How to store and consume environment variables for local development APIs and third-party integrations require developers to use configuration data called environment or config variables. These variables are usually stored in password-protected places like CI tools or deployment pipelines, but how can we use them when we’re developing applications locally? TL;DR: - Don’t store environment variables in source control - Use dotenv to read data from your .env file - create-react-app forces a namespace on environment variables This short tutorial will explain one way of loading environment variables into your code when developing locally. The main benefit is that secrets such as API keys are not committed to source control to keep your application safer. Requirements: - A Javascript application - A package manager (yarn and npm are great) - Node 7+ Set up the variables Create a file called “.env” in the root of your repository. This file is called a “dot file” and is different from regular files in that it is usually hidden in file browsers. Most IDEs will allow users to create files without a name, but if that’s not the case, head over to your terminal and cd into your application’s root folder. touch .env Next, set up your variables with the format key=value, delimited by line breaks: API_KEY=abcde API_URL= Finally, make sure the .env file is not committed to your repository. This can be achieved by opening (or creating) a .gitignore file and adding this line: .env # This contains secrets, don't store in source control Consume the variables Head to your terminal to install dotenv with your preferred package manager: # Using npm: npm i dotenv # Using yarn: yarn add dotenv You’re now ready to read from your .env file. Add this line of code as early as possible in your application. With React apps, that’s usually index.js or App.js, but it’s entirely up to your setup: require('dotenv').config(); And that’s it! Your application should have access to environment variables via the process.env object. You can double-check by calling: console.log(process.env); If all is well, you should see something like: { NODE_ENV: "development", API_KEY: "abcde", API_URL: " } 🎉 You’re now ready to use environment variables in your application! Now, for those of us that use create-react-app, there’s a catch, and I wish it was documented a little better. Using create-react-app Facebook’s create-react-app works a little differently. If you followed the steps above and haven’t ejected the application, all you should see is the NODE_ENV variable. That’s because create-react-app only allows the application to read variables with the REACT_APP_ prefix. So in order to make our variables work, we’ll need to update our .env file like so: REACT_APP_API_KEY=abcde REACT_APP_API_URL= Once again, verify your setup by logging process.env to the console: { NODE_ENV: "development", REACT_APP_API_KEY: "abcde", REACT_APP_API_URL: " } And you’re done 😎 Tips Variables in .env files don’t require quotation marks unless there are spaces in the value. NO_QUOTES=thisisokay QUOTES="this contains spaces" It’s good practice to create a .env.sample file to keep track of the variables the app should expect. Here’s what my own sample file looks like in my current project. Note that it explains where someone might be able to find those keys and URLs. CONTENTFUL_SPACE_TOKEN="see Contentful dashboard" CONTENTFUL_API_KEY="see Contentful dashboard" S3_BUCKET_URL="check AWS" SHOW_DEBUG_SIDEBAR="if true, show debug sidebar" Further reading: Thank you for reading! Do you prefer another way of loading environment variables locally? I’d love to hear about it in the comments below! Discussion (18) Hi there, nice article. Just have a quick question. Does the dotenv script load the entire .env file into the client side? If that's the case then wouldn't that expose sensitive data such as DB password etc? This might be silly but I was wondering exactly the same thing. If you can do console.log(process.env);I wonder if the values are automatically replaced by environment variables perhaps? --EDIT-- I went ahead and read the link to the 12-factor app and this is exactly what happens. The values are replaced by environment variables with each deploy. Twelve-factor App Very cool. Thanks for taking the time to answer. Hi Muhammad! The entire .env file is indeed loaded, so all the secrets (including database passwords, in your case) will be exposed on the client, if that's where your app is running. This would obviously be a huge problem in a production environment, but my use case was centered around local development. Security depends heavily on your deployment pipeline and the kind of system you're building, and I don't want to go too deep on that topic in a comment, but I'll leave you with two things: Hope this answers your question! I see. I was thinking of using this in production in my current client's app. Thanks for pointing this out. Dodged a bullet there. Additional tip So, the most interesting part is missing: How do I actually shove some .envfiles into the deployment environment? Are we talking about functions? Container images? VM images? Persistent VMs? Hi Mihail! This article is focused on local development, in part because there are countless ways to execute deployments. For instance, tools like Travis, Heroku, and Netlify provide a UI that lets you set up environment variables. If you're using a VM-like environment like EC2 or Digital Ocean, you can actually upload a .env file directly. If you're using a container system like Docker, you can use Compose or config arguments to set environment vars. Hope this helps! So, using the dotenv module is essentially the local version of those managed environments' ENV configs? Then, perhaps, the best way to use it is node -r dotenv/config your_script.js, and only include it in devDependenciesso it's not present at all in production? ...but in that case it's not very different from just putting a cross-envat the start of your development script. I guess I still don't entirely understand what unique niche dotenvis the best fit for. Hi Mihail, have you written anything about how you go about protecting while using your secret keys? I'm interested to learn more and if you have then please post a link. These kinds of nuts and bolts articles are in such dire need from my point of view. Well, so far as protecting secrets, at the moment I believe that these are indeed best set as environmental variables of the deployment environment. I know some people use git hooks that test that they aren't committing any secrets, but I believe these are brittle and only give a false sense of security. A rule that seems to work for me is - if you want to make sure something is never committed, don't put it in the project directory. Don't test with it. But then there's still the app's responsibility of not sending the secrets to any users. Corollary: Don't rely on this as a way to protect the secrets from malicious developers or even accidental disclosure. If they can get code into production, they can compromise any data available in the production environment. Even if all deployment goes through CI from a protected branch, all you get is blame a long time later. Hence, all secrets must have the minimum permissions possible. For example, every service should have its own database login/connection string. Not for permissions alone, but so that it can be easily replaced when compromised. Another example (although not usually provided through ENV since they are reissued at runtime) could be asymmetric JWT algorithms, where most services can only verify the token but not issue it. Thank you! I can't read input like this often enough. It really helps me. I wish experienced devs talked more about it, since it's so key to delivering a basic professional experience Thank you! I've been trying to figure this out for a while and couldn't find a resource on how to do it. Glad you found this useful! good read before going to sleep :) Great tutorial. Thx for sharing 👍 I don't think there's any harm in keeping development secrets under version control in many, many cases. Especially when you are using something like compose. It speeds up the onboarding of new developers. Production secrets are another matter. Further read: dev.to/ismailpe/handling-environme...
https://dev.to/deammer/loading-environment-variables-in-js-apps-1p7p
CC-MAIN-2022-21
refinedweb
1,433
65.62
Tutorial: Get started with .NET for Apache Spark This tutorial teaches you how to run a .NET for Apache Spark app using .NET Core on Windows, macOS, and Ubuntu. In this tutorial, you learn how to: - Prepare your environment for .NET for Apache Spark - Write your first .NET for Apache Spark application - Build and run your .NET for Apache Spark application Prepare your environment Before you begin writing your app, you need to set up some prerequisite dependencies. If you can run dotnet, java, spark-shell from your command line environment, then your environment is already prepared and you can skip to the next section. If you cannot run any or all of the commands, do the following steps. 1. Install .NET To start building .NET apps, you need to download and install the .NET SDK (Software Development Kit). Download and install the .NET Core SDK. Installing the SDK adds the dotnet toolchain to your PATH. Once you've installed the .NET Core SDK, open a new command prompt or terminal and run dotnet. If the command runs and prints out information about how to use dotnet, can move to the next step. If you receive a 'dotnet' is not recognized as an internal or external command error, make sure you opened a new command prompt or terminal before running the command. 2. Install Java Install Java 8.1 for Windows and macOS, or OpenJDK 8 for Ubuntu. Select the appropriate version for your operating system. For example, select jdk-8u201-windows-x64.exe for a Windows x64 machine (as shown below) or jdk-8u231-macosx-x64.dmg for macOS. Then, use the command java to verify the installation. 3. Install compression software Apache Spark is downloaded as a compressed .tgz file. Use an extraction program, like 7-Zip or WinZip, to extract the file. 4. Install Apache Spark Download and install Apache Spark. You'll need to select from version 2.3.* or 2.4.0, 2.4.1, 2.4.3, 2.4.4, 2.4.5, 2.4.6, 2.4.7, 3.0.0, 3.0.1, 3.0.2, 3.1.1, 3.1.2, 3.2.0, or 3.2.1 (.NET for Apache Spark is not compatible with other versions of Apache Spark). See the .NET Spark Release Notes for more information on compatible versions. The commands used in the following steps assume you have downloaded and installed Apache Spark 3.0.1. If you wish to use a different version, replace 3.0.1 with the appropriate version number. Then, extract the .tar file and the Apache Spark files. To extract the nested .tar file: - Locate the spark-3.0.1-bin-hadoop2.7.tgz file that you downloaded. - Right click on the file and select 7-Zip -> Extract here. - spark-3.0.1-bin-hadoop2.7.tar is created alongside the .tgz file you downloaded. To extract the Apache Spark files: - Right-click on spark-3.0.1-bin-hadoop2.7.tar and select 7-Zip -> Extract files... - Enter C:\bin in the Extract to field. - Uncheck the checkbox below the Extract to field. - Select OK. - The Apache Spark files are extracted to C:\bin\spark-3.0.1-bin-hadoop2.7\ Run the following commands to set the environment variables used to locate Apache Spark. On Windows, make sure to run the command prompt in administrator mode. setx /M HADOOP_HOME C:\bin\spark-3.0.1-bin-hadoop2.7\ setx /M SPARK_HOME C:\bin\spark-3.0.1-bin-hadoop2.7\ setx /M PATH "%PATH%;%HADOOP_HOME%;%SPARK_HOME%bin" # Warning: Don't run this if your path is already long as it will truncate your path to 1024 characters and potentially remove entries!. 5. Install .NET for Apache Spark Download the Microsoft.Spark.Worker release from the .NET for Apache Spark GitHub. For example if you're on a Windows machine and plan to use .NET Core, download the Windows x64 netcoreapp3.1 release. To extract the Microsoft.Spark.Worker: - Locate the Microsoft.Spark.Worker.netcoreapp3.1.win-x64-1.0.0.zip file that you downloaded. - Right-click and select 7-Zip -> Extract files.... - Enter C:\bin in the Extract to field. - Uncheck the checkbox below the Extract to field. - Select OK. 6. Install WinUtils (Windows only) .NET for Apache Spark requires WinUtils to be installed alongside Apache Spark. Download winutils.exe. Then, copy WinUtils into C:\bin\spark-3.0.1-bin-hadoop2.7\bin. Note If you are using a different version of Hadoop, which is annotated at the end of your Spark install folder name, select the version of WinUtils that's compatible with your version of Hadoop. 7. Set DOTNET_WORKER_DIR and check dependencies Run one of the following commands to set the DOTNET_WORKER_DIR environment variable, which is used by .NET apps to locate .NET for Apache Spark worker binaries. Make sure to replace <PATH-DOTNET_WORKER_DIR> with the directory where you downloaded and extracted the Microsoft.Spark.Worker. On Windows, make sure to run the command prompt in administrator mode. Finally, double-check that you can run dotnet, java, spark-shell from your command line before you move to the next section. Write a .NET for Apache Spark app 1. Create a console app In your command prompt or terminal, run the following commands to create a new console application: dotnet new console -o MySparkApp cd MySparkApp The dotnet command creates a new application of type console for you. The -o parameter creates a directory named MySparkApp where your app is stored and populates it with the required files. The cd MySparkApp command changes the directory to the app directory you created. 2. Install NuGet package To use .NET for Apache Spark in an app, install the Microsoft.Spark package. In your command prompt or terminal, run the following command: dotnet add package Microsoft.Spark Note This tutorial uses the latest version of the Microsoft.Spark NuGet package unless otherwise specified. 3. Write your app Open Program.cs in Visual Studio Code, or any text editor, and replace all of the code with the following: using Microsoft.Spark.Sql; using static Microsoft.Spark.Sql.Functions; namespace MySparkApp { class Program { static void Main(string[] args) { // Create Spark session SparkSession spark = SparkSession .Builder() .AppName("word_count_sample") .GetOrCreate(); // Create initial DataFrame string filePath = args[0]; DataFrame dataFrame = spark.Read().Text(filePath); //Count words DataFrame words = dataFrame .Select(Split(Col("value")," ").Alias("words")) .Select(Explode(Col("words")).Alias("word")) .GroupBy("word") .Count() .OrderBy(Col("count").Desc()); // Display results words.Show(); // Stop Spark session spark.Stop(); } } } SparkSession is the entrypoint of Apache Spark applications, which manages the context and information of your application. Using the Text method, the text data from the file specified by the filePath is read into a DataFrame. A DataFrame is a way of organizing data into a set of named columns. Then, a series of transformations is applied to split the sentences in the file, group each of the words, count them and order them in descending order. The result of these operations is stored in another DataFrame. Note that at this point, no operations have taken place because .NET for Apache Spark lazily evaluates the data. It's not until the Show method is called to display the contents of the words transformed DataFrame to the console that the operations defined in the lines above execute. Once you no longer need the Spark session, use the Stop method to stop your session. 4. Create data file Your app processes a file containing lines of text. Create a file called input.txt file in your MySparkApp directory, containing the following text: Hello World This .NET app uses .NET for Apache Spark This .NET app counts words with Apache Spark Save the changes and close the file. Run your .NET for Apache Spark app Run the following command to build your application: dotnet build Navigate to your build output directory and use the spark-submit command to submit your application to run on Apache Spark. Make sure to replace <version> with the version of your .NET worker and <path-of-input.txt> with the path of your input.txt file is stored. spark-submit ^ --class org.apache.spark.deploy.dotnet.DotnetRunner ^ --master local ^ microsoft-spark-3-0_2.12-<version>.jar ^ dotnet MySparkApp.dll <path-of-input.txt> Note This command assumes you have downloaded Apache Spark and added it to your PATH environment variable to be able to use spark-submit. Otherwise, you'd have to use the full path (for example, C:\bin\apache-spark\bin\spark-submit or ~/spark/bin/spark-submit). When your app runs, the word count data of the input.txt file is written to the console. +------+-----+ | word|count| +------+-----+ | .NET| 3| |Apache| 2| | app| 2| | This| 2| | Spark| 2| | World| 1| |counts| 1| | for| 1| | words| 1| | with| 1| | Hello| 1| | uses| 1| +------+-----+ Congratulations! You successfully authored and ran a .NET for Apache Spark app. Next steps In this tutorial, you learned how to: - Prepare your environment for .NET for Apache Spark - Write your first .NET for Apache Spark application - Build and run your .NET for Apache Spark application To see a video explaining the steps above, check out the .NET for Apache Spark 101 video series. Feedback Submit and view feedback for
https://learn.microsoft.com/en-us/dotnet/spark/tutorials/get-started
CC-MAIN-2022-40
refinedweb
1,549
69.28