A module is a set of QML content files that can be imported as a unit into a QML application. Modules can be used to organize QML content into independent units, and they can use a versioning mechanism that allows for independent upgradability of the modules. While QML component files within the same directory are automatically accessible within the global namespace, components defined elsewhere must be imported explicitly using the import statement to import them as modules. For example, an import statement is required to use: An import statement includes the module name, and possibly a version number. This can be seen in the snippet commonly found at the top of QML files: import QtQuick 1.0 This imports version 1.0 of the "QtQuick" module into the global namespace. (The QML library itself must be imported to use any of the QML Elements, as they are not included in the global namespace by default.) The Qt module is an installed module; it is found in the import path. There are two types of QML modules: located modules (defined by a URL) and installed modules (defined by a URI). Located modules can reside on the local filesystem or a network resource, and are referred to by a quoted location URL that specifies the filesystem or network URL. They allow any directory with QML content to be imported as a module, whether the directory is on the local filesystem or a remote server. For example, a QML project may have a separate directory for a set of custom UI components. These components can be accessed by importing the directory using a relative or absolute path, like this: Similarly, if the directory resided on a network source, it could be imported like this: import "" import "" 1.0 A located module can also be imported as a network resource if it has a qmldir file in the directory that specifies the QML files to be made available by the module. For example, if the MyComponents directory contained a qmldir file defined like this: Slider 1.0 Slider.qml CheckBox 1.0 CheckBox.qml Window 1.0 Window.qml If the MyComponents directory was then hosted as a network resource, it could be imported as a module, like this: import "" Window { Slider { // ... } CheckBox { // ... } } with an optional "1.0" version specification. Notice the import would fail if a later version was used, as the qmldir file specifies that these elements are only available in the 1.0 version. Note that modules imported as a network resource allow only access to components defined in QML files; components defined by C++ QML extension plugins are not available. Installed modules are modules that are made available through the QML import path, as defined by QDeclarativeEngine::importPathList(), or modules defined within C++ application code. An installed module is referred to by a URI, which allows the module to be imported from QML code without specifying a complete filesystem path or network resource URL. When importing an installed module, an un-quoted URI is used, with a mandatory version number: import QtQuick 1.0 import com.nokia.qml.mymodule 1.0 When a module is imported, the QML engine searches the QML import path for a matching module. The root directory of the module must contain a qmldir file that defines the QML files and/or C++ QML extension plugins that are made available to the module. Modules that are installed into the import path translate the URI into directory names. 
For example, the qmldir file of the module com.nokia.qml.mymodule must be located in the subpath com/nokia/qml/mymodule/qmldir somewhere in the QML import path. In addition, it is possible to store different versions of the module in subdirectories of their own. For example, version 2.1 of the module could be located under com/nokia/qml/mymodule.2/qmldir or com/nokia/qml/mymodule.2.1/qmldir. The engine will automatically load the module which matches best. The import path, as returned by QDeclarativeEngine::importPathList(), defines the default locations to be searched by the QML engine for a matching module. Additional import paths can be added through QDeclarativeEngine::addImportPath() or the QML_IMPORT_PATH environment variable. When running the QML Viewer, you can also use the -I option to add an import path. As an example, suppose the MyQMLProject directory in the previous example was located on the local filesystem at C:\qml\projects\MyQMLProject. The MyComponents subdirectory could be made available as an installed module by adding a qmldir file to the MyComponents directory that looked like this: Slider 1.0 Slider.qml CheckBox 1.0 CheckBox.qml Window 1.0 Window.qml Provided the path C:\qml is added to the QML import path using any of the methods listed previously, a QML file located anywhere on the local filesystem can then import the module as shown below, without referring to the module's absolute filesystem location: import projects.MyQMLProject.MyComponents 1.0 Window { Slider { // ... } CheckBox { // ... } } Installed modules are also accessible as a network resource. If the C:\qml directory was hosted at some URL and that URL was added to the QML import path, the above QML code would work just the same. Note that modules imported as a network resource allow only access to components defined in QML files; components defined by C++ QML extension plugins are not available. C++ applications can define installed modules directly within the application using qmlRegisterType(). For example, the Writing QML extensions with C++ tutorial defines a C++ class named PieChart and makes this type available to QML by calling qmlRegisterType(): qmlRegisterType<PieChart>("Charts", 1, 0, "PieChart"); This allows the application's QML files to use the PieChart type by importing the declared Charts module: import Charts 1.0 For QML plugins, the module URI is automatically passed to QDeclarativeExtensionPlugin::registerTypes(). This method can be reimplemented by the developer to register the necessary types for the module, as the QML plugins example does in its registerTypes() implementation (a sketch of such a plugin class appears at the end of this section). Once the plugin is built and installed, and includes a qmldir file, the module can be imported from QML, like this: import com.nokia.TimeExample 1.0 Unlike QML types defined by QML files, a QML type defined in a C++ extension plugin cannot be loaded by a module that is imported as a network resource. By default, when a module is imported, its contents are imported into the global namespace. You may choose to import the module into another namespace, either to allow identically-named types to be referenced, or purely for readability. To import a module into a specific namespace, use the as keyword: import QtQuick 1.0 as QtLibrary import "../MyComponents" as MyComponents import com.nokia.qml.mymodule 1.0 as MyModule Types from these modules can then only be used when qualified by the namespace: QtLibrary.Rectangle { // ... } MyComponents.Slider { // ...
} MyModule.SomeComponent { // ... } Multiple modules can be imported into the same namespace in the same way that multiple modules can be imported into the global namespace: import QtQuick 1.0 as Nokia import Ovi 1.0 as Nokia JavaScript files must always be imported with a named import: import "somescript.js" as MyScript Item { //... Component.onCompleted: MyScript.doSomething() } The qualifier ("MyScript" in the above example) must be unique within the QML document. Unlike ordinary modules, multiple scripts cannot be imported into the same namespace. A qmldir file is a metadata file for a module that maps all type names in the module to versioned QML files. It is required for installed modules, and located modules that are loaded from a network source. It is defined by a plain text file named "qmldir" that contains one or more lines of the form: # <Comment> <TypeName> [<InitialVersion>] <File> internal <TypeName> <File> plugin <Name> [<Path>] # <Comment> lines are used for comments. They are ignored by the QML engine. <TypeName> [<InitialVersion>] <File> lines are used to add QML files as types. <TypeName> is the type being made available, the optional <InitialVersion> is a version number, and <File> is the (relative) file name of the QML file defining the type. Installed files do not need to import the module of which they are a part, as they can refer to the other QML files in the module as relative (local) files, but if the module is imported from a remote location, those files must nevertheless be listed in the qmldir file. Types which you do not wish to export to users of your module may be marked with the internal keyword: internal <TypeName> <File>. The same type can be provided by different files in different versions, in which case later versions (e.g. 1.2) must precede earlier versions (e.g. 1.0), since the first name-version match is used and a request for a version of a type can be fulfilled by one defined in an earlier version of the module. If a user attempts to import a version earlier than the earliest provided or later than the latest provided, the import produces a runtime error, but if the user imports a version within the range of versions provided, even if no type is specific to that version, no error will occur. A single module, in all versions, may only be provided in a single directory (and a single qmldir file). If multiple are provided, only the first in the search path will be used (regardless of whether other versions are provided by directories later in the search path). The versioning system ensures that a given QML file will work regardless of the version of installed software, since a versioned import only imports types for that version, leaving other identifiers available, even if the actual installed version might otherwise provide those identifiers. plugin <Name> [<Path>] lines are used to add QML C++ plugins to the module. <Name> is the name of the library. It is usually not the same as the file name of the plugin binary, which is platform dependent; e.g. the library MyAppTypes would produce libMyAppTypes.so on Linux and MyAppTypes.dll on Windows. <Path> is an optional argument specifying either an absolute path to the directory containing the plugin file, or a relative path from the directory containing the qmldir file to the directory containing the plugin file. By default the engine searches for the plugin library in the directory that contains the qmldir file. 
The plugin search path can be queried with QDeclarativeEngine::pluginPathList() and modified using QDeclarativeEngine::addPluginPath(). When running the QML Viewer, use the -P option to add paths to the plugin search path. The QML_IMPORT_TRACE environment variable can be useful for debugging when there are problems with finding and loading modules. See Debugging module imports for more information.
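The sketch below illustrates what such a C++ extension plugin can look like in the Qt 4.7 / QtDeclarative era discussed above. It is an illustration only: the class names, the registered element name and the plugin target are invented here, and the real Writing QML extensions with C++ tutorial should be consulted for the full build setup (.pro file, qmldir entry and moc handling).

```cpp
// Illustrative only: TimeModel, MyTypesPlugin and the target name are invented.
#include <QtDeclarative/QDeclarativeExtensionPlugin>
#include <QtDeclarative/qdeclarative.h>
#include <QtCore/QObject>

class TimeModel : public QObject
{
    Q_OBJECT
    Q_PROPERTY(int hour READ hour)
public:
    int hour() const { return 12; }   // placeholder value
};

class MyTypesPlugin : public QDeclarativeExtensionPlugin
{
    Q_OBJECT
public:
    // uri is the module URI from the import statement, e.g. "com.nokia.TimeExample"
    void registerTypes(const char *uri)
    {
        // Expose TimeModel to QML as the element "Time" in version 1.0 of the module
        qmlRegisterType<TimeModel>(uri, 1, 0, "Time");
    }
};

// Qt 4 plugin export macro: the first argument is the plugin's target name
Q_EXPORT_PLUGIN2(mytypesplugin, MyTypesPlugin)
```

With a corresponding plugin line in the module's qmldir file, an import of that URI then resolves both the QML-defined and the C++-defined types.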
http://doc.qt.nokia.com/4.7-snapshot/qdeclarativemodules.html
crawl-003
refinedweb
1,788
53.1
Quickstart: Install and use a package using the dotnet CLI NuGet packages contain reusable code that other developers make available to you for use in your projects. See What is NuGet? for background. Packages are installed into a .NET Core project using the dotnet add package command as described in this article for the popular Newtonsoft.Json package. Once installed, refer to the package in code with using <namespace> where <namespace> is specific to the package you're using. You can then use the package's API. Tip Start with nuget.org: Browsing nuget.org is how .NET developers typically find components they can reuse in their own applications. You can search nuget.org directly or find and install packages within Visual Studio as shown in this article. Prerequisites - The .NET Core SDK, which provides the dotnet command-line tool. Create a project NuGet packages can be installed into a .NET project of some kind. For this walkthrough, create a simple .NET Core console project as follows: Create a folder for the project. Create the project using the following command: dotnet new console Use dotnet run to test that the app has been created properly. Add the Newtonsoft.Json NuGet package Use the following command to install the Newtonsoft.Json package: dotnet add package Newtonsoft.Json After the command completes, open the .csproj file to see the added reference: <ItemGroup> <PackageReference Include="Newtonsoft.Json" Version="10.0.3" /> </ItemGroup> Use the Newtonsoft.Json API in the app Open the Program.cs file and add the following line at the top of the file: using Newtonsoft.Json; Add the following code before the class Program line: public class Account { public string Name { get; set; } public string Email { get; set; } public DateTime DOB { get; set; } } Replace the Main function with the following: static void Main(string[] args) { Account account = new Account { Name = "John Doe", Email = "[email protected]", DOB = new DateTime(1980, 2, 20, 0, 0, 0, DateTimeKind.Utc), }; string json = JsonConvert.SerializeObject(account, Formatting.Indented); Console.WriteLine(json); } Build and run the app by using the dotnet run command. The output should be the JSON representation of the Account object in the code: { "Name": "John Doe", "Email": "[email protected]", "DOB": "1980-02-20T00:00:00Z" }
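Put together, the finished Program.cs from the steps above would look roughly like the sketch below. The namespace name is an assumption (the dotnet new console template derives one from the folder name), and the redacted e-mail placeholder is kept exactly as it appears in the article.

```csharp
using System;
using Newtonsoft.Json;

namespace NuGetDemo   // assumed name; the template uses your project folder's name
{
    public class Account
    {
        public string Name { get; set; }
        public string Email { get; set; }
        public DateTime DOB { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Account account = new Account
            {
                Name = "John Doe",
                Email = "[email protected]",
                DOB = new DateTime(1980, 2, 20, 0, 0, 0, DateTimeKind.Utc),
            };

            // Serialize the object to indented JSON and print it
            string json = JsonConvert.SerializeObject(account, Formatting.Indented);
            Console.WriteLine(json);
        }
    }
}
```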
https://docs.microsoft.com/en-us/nuget/quickstart/install-and-use-a-package-using-the-dotnet-cli
CC-MAIN-2018-34
refinedweb
366
60.92
If I include the following it is corrected: #include <WProgram.h> #include <Arduino.h> If they don't exist in ANY version then it is certainly strange that after adding them the sketch compiles. Just downloaded 1.03 and got the same basic problem in "Blink" and other basic sketches as I had in 1.02 and 1.01 (but not 0023) where "OUTPUT" seems not declared. There is a workaround solution by adding some pointers to libraries, but the install is straightforward and automatic and I get it on an iMac as well as on two Windows 7 laptops??? Anyone found a solution or seeing the same problem? I tried an uninstall and reinstall on the Windows PCs but same thing. I even tried renaming the library so it would only install the new library, same thing.
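The thread is about sketches that build on IDE 0023 but not on the 1.0x releases unless a core header is pulled in by hand. A common workaround from that transition period, shown below as a sketch rather than anything taken from this thread, is to guard the include on the IDE's version macro:

```cpp
// The ARDUINO macro is defined by the IDE; values >= 100 mean the 1.0 series.
#if ARDUINO >= 100
  #include <Arduino.h>    // 1.0 and later: declares OUTPUT, pinMode(), etc.
#else
  #include <WProgram.h>   // 0023 and earlier core header
#endif

void setup() {
  pinMode(13, OUTPUT);    // the constant the poster found "not declared"
}

void loop() {
  digitalWrite(13, HIGH);
  delay(500);
  digitalWrite(13, LOW);
  delay(500);
}
```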
http://forum.arduino.cc/index.php?topic=137261.msg1031807
CC-MAIN-2016-30
refinedweb
180
75.1
The debugger has a new feature this time around called ‘Just my code’. This is a new-for-Whidbey managed debugging feature. Philosophy of Just my code: The basic idea is that when you debug an application, you want to debug the code in the application, and not all the code that is in the Framework. Specifics: - If you step into some non-user code (say the Framework), and that non-user code invokes your callback, you want to step into the callback, not step into the Framework - If you step out of your code, you don’t want to step into all that glue code, you want to break on your next event handler. - If an exception is thrown and caught in someone else’s code, you don’t care. - If your code failed to catch an exception, then you don’t want to let it get swallowed by some big exception filter that a bad person has installed. - Sometimes you want to break when an exception is thrown, but not every exception. You only care when your code throws the exception. What does the debugger consider ‘your code’? - If the code is compiled optimized, it is not your code - If you don’t have symbols, it is not your code When we were first coming up with this feature, I was thinking that this feature would be helpful to everyone. After all, everyone at least occasionally has the experience where code written by someone else gets in the way of debugging a problem in your code. However, after playing with the feature for a while, I have changed my mind. It’s a great feature if you want a simplified debugging experience. However, that isn’t me. If you are a hard core dev, do yourself a favor and turn the feature off (Tools->Options->Debugging->General->Just My Code). Also, one technical note for those who like the feature – if exceptions pass out of user code and are swallowed by certain parts of the Runtime (not the framework), the debugger will not be given a chance to break on the exception. The CLR is working to address this problem, but it is a difficult problem to solve. In Beta 1, one such location is the reflection engine. This problem happens the other way too – occasionally you cannot debug your own code! Try this: * Create a solution with two projects * Create a custom control in one project * Compile the control project * Add the new control to the toolbox using browse (to the release or debug build dll of the user control project). There is no other way to use the control! (I’m talking about inherited controls, not user controls. User controls *occasionally* show up in the toolbox by themselves) * Open the second project, and drop the control onto a form. * Place a breakpoint somewhere in the control project, i.e. the constructor * Run the application (the one with the form). The breakpoint never gets hit, even though the DLL for the control is in the solution. A tricky problem, but a very frustrating one for developers who program "private" custom controls for their applications. I don’t know why that would be. What does the modules window say for your control? When I run the entire solution (including the controls) I get the following modules: mscorlib.dll HealthSpace.EhsManager.DesktopClient.exe system.dll healthspace.ehsmanager.desktopclient.core.dll system.xml.dll system.windows.forms.dll healthspace.data.dll healthspace.ehsmanager.businesslogic.dll purecomponents.navigator.dll system.drawing.dll dotnetwidgets.dll documentmanager.dll microsoft.visualbasic.dll (we use C#! probably a 3rd party control..
😉 activereports.dll activereports.viewer.dll dundaswinchart.dll healthspace.ehsmanager.desktopclient.plugins.foodui.dll healthspace.ehsmanager.desktopclient.plugins.mainui.dll healthspace.errorreporting.dll healthspace.visualcontrols.dll (the one that won’t debug) janus.windows.common.dll janus.windows.explorerbar.dll janus.windows.gridex.dll system.data.dll system.enterpriseservices.dll system.enterpriseservices.thunk.dll Bear in mind that it won’t debug in 2003 – I don’t have a full beta 1 install of 2005 in Virtual PC yet – just C# Express at home. I always assumed that it never could debug because I manually browsed to the .dll when adding the controls to the toolbox. If you need any more information you can contact me at ebickle (at) healthspace.ca Thanks! The problem is that VC# Express 2005 does not have the option to turn this feature off. So every time I start the debugging session I get a warning about "Just My Code", complete with instructions to turn it off by navigating to a non-existent configuration dialog. Kind of annoying. How about the ability to specify what namespaces you step into while debugging instead? Something like: System.Xml.* [NO] System.Windows.Forms.* [YES] MyNamespace.Objects.* [YES] ThirdParty.Object.* [NO] I've seen a similar feature to this in some Java IDEs. Greg: I will try and find out why C# Express doesn't have an option for this. You can always use the ultimate options editor — regedit. HKEY_CURRENT_USER\Software\Microsoft\VCSExpress\8.0\Debugger\JustMyCode = 0 BTW: What are you doing that you have a project that is optimized or doesn't have a PDB? Ryan: Good feedback. I decided to post a separate blog topic about customizations. Greg, I talked to some coworkers about your problem. I found out two things: 1) There is apparently a bug where sometimes the warning comes up when it shouldn't. We are still tracking down why. 2) There is a button in Tools Options to show all options; if you click this, there should be an option to turn off Just My Code. I would recommend something like this for profiling as well. It's difficult to profile a Windows program when it's taking up most of its time in a Windows loop, but the choke point is somewhere in my code.
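A related detail the post itself does not mention: library code can mark itself as non-user code so that Just My Code steps over it and stays out of the way. The sketch below is illustrative only; the class and method names are invented, and only the DebuggerNonUserCode attribute comes from the real System.Diagnostics namespace.

```csharp
using System;
using System.Diagnostics;

public static class GlueHelpers
{
    // With Just My Code enabled, the debugger treats this method as
    // "not my code": stepping skips over it rather than into it.
    [DebuggerNonUserCode]
    public static void InvokeSafely(Action callback)
    {
        // The caller's callback is still "their" code, so stepping into
        // InvokeSafely from user code lands directly in the callback.
        callback();
    }
}
```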
https://blogs.msdn.microsoft.com/greggm/2004/07/29/is-just-my-code-for-you/
CC-MAIN-2016-36
refinedweb
984
62.88
You can read this and other amazing tutorials on ElectroPeak's official website Overview In this tutorial, you’ll learn how to use the Arduino LCD keypad shield with 3 practical projects. What You Will Learn: - How to set up the shield and identify the keys - How to scroll text - How to display special characters Step 1: 1602 Arduino LCD Keypad Shield Features Displaying information in electronic projects has always been a compelling issue. There are various ways to display data. These displays can be as simple as 7-segment displays or LEDs, or more elaborate, such as LCDs. Using LCDs has always been one of the most popular ways to display information. LCDs are divided into two generic types: character and graphic. One of the most common, cheapest and simplest LCDs available is the character LCD. This LCD consists of several rows and columns. Letters and numbers are written in places created by the rows and columns. For example, a 16*2 character LCD has 2 rows and 16 columns, so it can display 32 characters. Working with these LCDs is very simple and they have full compatibility with all microcontrollers and processor boards. For easier use of these LCDs, the 16x2 model, including four keys for making a menu, is made as a shield which is also compatible with Arduino boards. Step 2: How to Use Arduino LCD Keypad Shield The Arduino LCD keypad shield is a user-friendly and simple shield. To use it you need to know its pinout and its connection to the Arduino first. Step 3: Required Materials Step 4: How to Read the Keys? In this shield, all 4 keys are connected to analog pin 0 to save on digital pins, so we should use the ADC to read them. When you press a key, it produces a value on the A0 pin according to the internal resistor-divider circuit, which identifies which key was pressed (a sketch of the reading code appears at the end of this page). Let’s take a deeper look at the code: #include <LiquidCrystal.h> The library you need for a character LCD. LiquidCrystal LCD( pin_RS, pin_EN, pin_d4, pin_d5, pin_d6, pin_d7); Defining the LCD object according to the pins that are connected to the Arduino. lcd.begin(16, 2); Initial configuration of the LCD by specifying the number of columns and rows. The first argument is the number of columns, and the second is the number of rows. There are several other important functions for working with the LCD; you can check the Arduino website for the full list. Step 5: How to Scroll a Text? We can do it easily using the above functions. Step 6: How to Display a Specific Character? You can create a character in each block of your LCD. To do this, you should convert your desired character to an array of codes, then display it on the LCD. To convert your character to codes you can use online websites like this. Design your character, then copy the generated array to your code. lcd.createChar stores your array in a memory location and you can display it with lcd.write Step 7: What's Next? - Try to create a menu with the ability to select options. Discussions 20 days ago That shield looks like an interesting bit of kit. Thank you for sharing your intro :-)
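Here is the key-reading sketch promised in Step 4. The pin mapping and the analogRead thresholds are typical values for this family of shields, not something given in the article, and they vary between shield revisions (many revisions expose a fifth SELECT button on the same divider), so treat the exact numbers as assumptions and calibrate them by printing analogRead(A0).

```cpp
#include <LiquidCrystal.h>

// Usual wiring for the 1602 LCD keypad shield (assumed, not from the article)
LiquidCrystal lcd(8, 9, 4, 5, 6, 7);

// Map the voltage on A0 to a button name; thresholds are approximate.
const char* readKey() {
  int v = analogRead(A0);
  if (v < 60)   return "RIGHT";
  if (v < 200)  return "UP";
  if (v < 400)  return "DOWN";
  if (v < 600)  return "LEFT";
  if (v < 800)  return "SELECT";
  return "NONE";          // no button pressed (input pulled near 5 V)
}

void setup() {
  lcd.begin(16, 2);       // 16 columns, 2 rows, as described above
}

void loop() {
  lcd.setCursor(0, 0);
  lcd.print("Key: ");
  lcd.print(readKey());
  lcd.print("      ");    // pad with spaces to erase the previous label
  delay(100);
}
```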
https://www.instructables.com/id/Using-1602-LCD-Keypad-Shield-W-Arduino-Practical-P/
CC-MAIN-2019-22
refinedweb
544
74.19
In some of my past columns, I’ve mentioned that my template system of choice is the aptly named Template Toolkit, a marvelous work by Andy Wardley. Although I’ve demonstrated how I’ve used the Template Toolkit (TT), I haven’t really talked enough about what makes it so wonderfully useful. So, this month, let’s take a more in-depth look at the wonders of TT. Why Template Toolkit? At a minimum, a templating system provides a way of replacing placeholders with values. The simplest templating system is a Perl “one-liner” such as this: my %v = (first => ‘Randal’, last => ‘Schwartz’); my $text = ‘My name is <first> <last>.’; ## here it comes: $text =~ s/<(\w+)>/$v{$1}/g; Here, each word in angle brackets is replaced with a value found in the %v hash. Simple enough, and some might say deceptively simple. Because it’s this easy to code a trivial templating system, many people have started here and grown their own templating systems independently. But the tricky parts are indeed tricky. Eventually, templating systems grow to include features that make them approach full-blown languages. But here’s where I think Andy did the right thing: TT’s control structures are handled by a mini-language. This mini-language carefully hides the differences between data structure access, method calls, and function calls, by sweeping them all under the same dot notation (just like Perl version 6). Thus, the TT mini-language is much more accessible to web designers who don’t care to learn the full-blown intricacies of Perl element access and method calls. Additionally, the TT mini-language has just enough features to handle templating, but not quite enough to do serious heavy lifting. This helps me to do the right thing in the right places, because I sense a steadily increasing difficulty when I’m using TT code where full Perl code is really needed. For example, in a web application, the TT code is used for the View code of the Model-View-Controller triad, whereas Perl works better for the Model and Controller parts. Another TT advantage: it was driven early in its development by two key projects: the (sadly now-defunct) etoys.com site and slashcode.com, the code behind Slashdot and hundreds of other web-based communities. Key developers from both projects provided valuable feedback to Andy about real world issues and concerns, guiding Andy in further design and features. The TT community is also quite active, with the mailing list getting one or two dozen emails a day. Novice questions are welcome, particularly because such questions occasionally point out shortcomings in the language or documentation that we “experts” overlook automatically. The TT mini-language is compiled into Perl code, which is then compiled and loaded into memory for execution. The Perl source code can be cached to disk, while the compiled subroutines can be cached in memory. Additional heavy-lifting code can be written directly in Perl, and loaded as modules with nice TT interfaces. For prototyping, you can also embed Perl directly in your templates, if you prefer. This all works together to ensure fairly speedy execution, even for traffic-intensive web sites. Almost every part of TT is configurable, sometimes excessively so. If you want a slightly different grammar for your TT mini-language, you can plug that in. If you want to load your templates from a database instead of a file, you can have that too. If you want your scalars to know how to rot13 themselves, you can get that added. 
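All of that configuration is handed to a Template object on the Perl side. As a minimal, hedged sketch of what driving TT from a Perl program looks like (the directory and template names here are invented, and the Perl-side API gets a fuller treatment in a later installment):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Template;

# INCLUDE_PATH is where INSERT/INCLUDE/PROCESS look for templates;
# POST_CHOMP turns on the "post chomp" whitespace mode described below.
my $tt = Template->new({
    INCLUDE_PATH => 'templates',   # invented directory name
    POST_CHOMP   => 1,
}) or die Template->error();

my $vars = {
    name     => 'Randal',
    debt     => '$42.00',
    deadline => 'next Tuesday',
};

# Expand dunning.tt (invented name) with those variables, writing to STDOUT
$tt->process('dunning.tt', $vars) or die $tt->error();
```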
Perhaps this configurability is the source of the “Toolkit” part of the name: what you’re really getting for your application is a templating system built to your specifications from the large class of templating systems that TT supports. The TT Language The TT language consists of directives and variables. Directives provide the control instructions, like IF and WHILE. Variables map directly to Perl scalar, array, hash, and object variables. The TT code can be structured by having templates include other templates in various ways, similar to subroutines in a traditional language. Let’s look at a sample template: Dear [% name %], It has come to our attention that your account is in arrears to the sum of [% debt %]. Please settle your account before [% deadline %]. This template contains three directives that interpolate the TT variables as indicated. (We’ll see later how the variables get their values.) The construct [% variable %] or [% GET variable %] (The GET is optional and is frequently omitted) interpolates the value of variable. Meanwhile… [% SET variable = some + calculation %] … sets a value into a variable. (The SET is also optional.) If the [% and %] tags conflict with desired data, they can be changed to nearly arbitrary start and end sequences, such as HTML comment markers. Whitespace between the tags is almost entirely ignored. Comments start with # and extend to the end of the line. By default, newlines are kept, so… Hello [% a = 3 %] World [% a %] … results in Hello \nWorld 3\n. You can absorb the whitespace by placing - next to either %, as in… Hello [% a = 3 -%] World [% a -%] … which results in Hello World 3. A configuration mode called post chomp treats all directives as if they have this trailing minus, which I find very useful and consistent. Many control directives like FOREACH, WHILE, and IF nest as a block, terminating at a corresponding END. The output of a block directive may be captured into a variable: [% result = FOREACH user = userlist %] one user is [% user.name %] [% END %] Here, the output from the FOREACH loop ends up in result for later processing. Directives can be semicolon-separated within a tag pair, which saves an adjacent end-tag/start-tag combination: [% result = FOREACH user = userlist %] one user is [% user.name; END %] Some directives can trail other directives, similar to Perl: [% b = b * a FOREACH a = [1, 2, 3] %] Expressions are fairly Perl-like, supporting literal strings with ‘single quotes’ and “double quotes with $variable interpolation”. Like Perl 6, the concatenation operator is _ (“underscore”), not the dot. The INSERT directive brings in another chunk of text, often from another file: INSERT myfile For INSERT, the contents of myfile are not processed for directives. The file is found along the INCLUDE_PATH, selected by a configuration parameter. The INCLUDE directive is like INSERT as it brings in other data… INCLUDE myfile … but the contents of myfile are further examined for TT language constructs, thus making INCLUDE more like a subroutine call. Additionally, the argument might also be a named block in the same file, providing for local, common code for re-use: [% BLOCK myfile %] … some text here, with [% first %] [% last %] names. [% END %] Currently defined variables are visible to the included block or file, but variables set in the included section are not visible back to the rest of the file.
For convenience, extra local variables can be defined during the invocation… [% INCLUDE myfile this = "that" %] … which provides a nice way to pass parameters down to the sub-template. A PROCESS directive acts like INCLUDE, but the local variables remain in effect, thus sharing the same namespace as the invoker. Typically, these directives are used for speed (localization costs time) or for common variable initialization. The WRAPPER directive… [% WRAPPER foo %] …some stuff… [% END %] … acts as if you’d said: [% INCLUDE foo content = "some stuff" %] This is great for writing text that wants to wrap some other text with enclosing materials: [% INCLUDE comment_label WRAPPER my_button color='blue' %] Here, the contents of include file comment_label can be enclosed in text provided by my_button, possibly modified by the current value of color. The output of a block can be captured: [% disclaimer = BLOCK %] Portions of tonight’s show not affecting the outcome were edited. [% IF sorry %] We’re sorry. [% END %] [% END %] This is a good way to define a large text string for boilerplate to use later. The TT language provides a Perl-like IF structure: [% IF age < 10 %] Hello [% name %], does your mother know you’re using her AOL account? [% ELSIF age < 18 %] Sorry, you’re not old enough to enter (and too dumb to lie about your age)! [% ELSE %] Welcome [% name %]. [% END %] And not to be outpaced by Perl 6, TT also provides a SWITCH structure: [% SWITCH myvar %] [% CASE value1 %] that value [% CASE [value2 value3] %] either of those [% CASE myhash.keys %] any of those keys [% CASE %] default [% END %] The FOREACH loop acts like the Perl equivalent: [% FOREACH thing = [foo 'Bar' "$foo Baz" ] %] * [% thing %] [% END %] When iterating over a hash, omitting the iteration variable causes the keys to be assigned directly as variables: [% userlist = [ { id => 'merlyn', name => 'Randal' } { id => 'fred', name => 'Fred Flintstone'} ] %] [% FOREACH userlist %] [% id %] is [% name %] [% END %] This loop acts like… [% FOREACH u = userlist %] [% u.id %] is [% u.name %] [% END %] … but with less typing. Nested loops are also supported, as are Perl-like NEXT and LAST operations. Unlike Perl’s equivalent, TT’s FOREACH directive understands where it is with regard to the loop, and provides meta-information via the loop variable. For example, loop.size gives the total iterations, and loop.last is true if this is the last item. Using these controls, we can make the loops act nicely. For example, this template… [% FOREACH i = [ 'foo', 'bar', 'baz' ] %] [% IF loop.first %]<ul>[% END %] <li>[% loop.count %] of [% loop.size %]: [% i %] [% IF loop.last %]</ul> [% END %] [% END %] … generates: <ul> <li>1 of 3: foo <li>2 of 3: bar <li>3 of 3: baz </ul> Without the loop controls, we’d typically move that start and end tag outside the loop, but then we’d get the tags even if the list was empty. Here, we’ll get start and end tags only if we’ve entered the loop at least once. Very cool. The WHILE directive provides the expected “loop as long as an expression is true.” NEXT and LAST work as they do in Perl: [% WHILE total < 100 %] [% total %] [% total += 1 %] [% END %] But beware: to prevent runaway programs, all loops are limited to 1000 iterations (an arbitrary value selected by Andy). Well, I’ve run out of room already, and I still have a bit more to say.
Next time, I’ll finish up the descriptions of the directives, talk about exception handling, data structures, and using TT from Perl. I’ll also cover configuration directives, the command-line tools that come with TT’s distribution, and using TT with mod_perl. Until next time, enjoy!
http://www.linux-mag.com/id/1695/
CC-MAIN-2018-43
refinedweb
1,765
61.87
I am getting the following error:

ipython notebook cmd.py

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/me/.virtualenvs/demo/lib/python3.4/site-packages/IPython/__init__.py", line 48, in <module>
    from .core.application import Application
  File "/home/me/.virtualenvs/demo/lib/python3.4/site-packages/IPython/core/application.py", line 40, in <module>
    from IPython.core import release, crashhandler
  File "/home/me/.virtualenvs/demo/lib/python3.4/site-packages/IPython/core/crashhandler.py", line 28, in <module>
    from IPython.core import ultratb
  File "/home/me/.virtualenvs/demo/lib/python3.4/site-packages/IPython/core/ultratb.py", line 110, in <module>
    from IPython.core import debugger
  File "/home/me/.virtualenvs/demo/lib/python3.4/site-packages/IPython/core/debugger.py", line 59, in <module>
    from pdb import Pdb as OldPdb
  File "/usr/local/lib/python3.4/pdb.py", line 135, in <module>
    class Pdb(bdb.Bdb, cmd.Cmd):
AttributeError: 'module' object has no attribute 'Cmd'
2015-01-14 22:23:36.895 [NotebookApp] WARNING | KernelRestarter: restart failed
2015-01-14 22:23:36.896 [NotebookApp] WARNING | Kernel 1005e1cf-b1b4-4f9d-af22-e65c310cfa51 died, removing from map.
ERROR:root:kernel 1005e1cf-b1b4-4f9d-af22-e65c310cfa51 restarted failed!

The last lines of your traceback show that CPython's module pdb is being imported while IPython starts up. Line 72 of the pdb.py source shows that another module called cmd is imported by "pdb.py". The Python docs show the following order when searching for imports, which I believe is the same for IPython: first the built-in modules, then the directories listed in sys.path, which begins with the directory containing the input script (or the current directory), followed by PYTHONPATH and the installation-dependent default directories. Since the startup process involves an import of a module called cmd and there is a file called "cmd.py" in the first place the interpreter looks, it attempts to import that file, which of course doesn't have the things it's looking for. Specifically, your file "cmd.py" doesn't have the Cmd class, so the AttributeError is raised. Removing or renaming "cmd.py" in your current working directory will resolve the issue.
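A quick way to confirm the shadowing described above, run from the same directory as the stray file (a small sketch, not part of the original answer):

```python
# If the local cmd.py wins over the standard library module, its path is
# printed here and the Cmd class that pdb needs is missing.
import cmd

print(cmd.__file__)            # e.g. "/home/me/notebooks/cmd.py" when shadowed
print(hasattr(cmd, "Cmd"))     # False for the shadowing file, True for the stdlib module
```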
https://codedump.io/share/Mvzbnczl7rEE/1/the-kernel-has-died-and-the-automatic-restart-has-failed
CC-MAIN-2017-30
refinedweb
334
53.47
oocdoc, Part 4 — sourcepath In the previous article, We've built a nagaqueen-based tool that can parse one ooc file, detect class declarations and print its doc strings. Today, we're making a bit of infrastructure for our app to support more sizable projects. Source path and lib folders Parsing a single file was a nice milestone, but it's not nearly enough. We want to generate documentation for a whole project at a time: and since we'll want to cross-link the various bits of documentation we generate, we'll also need to parse the various dependencies (such as the ooc sdk, and any used library) so that we can resolve argument types and link them properly. To undertake that task, we need to agree on a little bit of vocabulary. The thing we want to document, we'll call a project. Each folder that contains a hierarchy of folders and .ooc source files, we'll call a libFolder. The pathy part of an import directive, we'll call a spec. A list of libFolders will be called the sourcePath, similar to the Java classpath - that's simply where we look for stuff that's imported. Normally, the source path is set from .use files, and the SDK is found using environment variables and/or the location of the rock binary on your filesystem. But, again, that's for later. To handle the source path we'll create.. a SourcePath class! Let's put it into source/hopemage/sourcepath.ooc: ooc SourcePath: class { libFolders := ArrayList<LibFolder> new() add: func (libPath: String) { libFolders add(LibFolder new(libPath)) } locate: func (spec: String) -> File { for (libFolder in libFolders) { file := File new(libFolder path, spec + ".ooc") if (file exists?()) { return file } } null } split: func (path: String) -> (LibFolder, String) { path = File new(path) getAbsolutePath() for (libFolder in libFolders) { if (path startsWith?(libFolder path)) { return (libFolder, libFolder toSpec(path)) } } (null, null) } } Figuring out which imports are needed are left as an exercise to the reader. (Of course, it's easy to cheat) As you can see, locate takes a spec and tries to find it in our list of libFolders. That'll be useful when dealing with imports, later. As for split, as its name indicates, it splits a full path into a libFolder and a spec. This is useful in our current scenario where we'll get a whole list of paths and we'll have to figure out what their libFolders and specs are. Split uses an ooc feature known as multi-return. It works almost like a tuple, although it's restricted to that certain scenario. We can call it several ways, for example, those all valid: ooc (libFolder, spec) := sourcePath split(path) (libFolder, _) := sourcePath split(path) libFolder := sourcePath split(path) (_, spec) := sourcePath split(path) An underscore stands for "ignore this value". Put your most important return values first, so that people can disregard the others without using tuple syntax, as shown on line 3 of the example above. Let's make a class for libFolders too, in the same module: ooc LibFolder: class { path: String modules := ArrayList<Module> new() init: func (libPath: String) { path = File new(libPath) getAbsolutePath() } add: func (module: Module) { module libFolder = this modules add(module) } contains?: func (module: Module) -> Bool { modules contains?(module) } toSpec: func (path: String) -> String { path substring(this path size) trimLeft(File separator) } } Nothing too surprising here. 
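To make the two classes concrete, here is a small, hedged usage sketch (not from the article; the import line and the "source" lib folder name are assumptions, following the syntax the article itself uses):

```ooc
import hopemage/sourcepath

main: func {
    sourcePath := SourcePath new()
    sourcePath add("source")

    // Resolve an import spec to a file somewhere in the source path
    file := sourcePath locate("hopemage/sourcepath")
    if (file != null) {
        "found %s" printfln(file path)
    }

    // Split an absolute path back into (libFolder, spec)
    (libFolder, spec) := sourcePath split(file path)
    "libFolder %s, spec %s" printfln(libFolder path, spec)
}
```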
If you didn't know, you can have a question mark or an exclamation mark at the end of your ooc functions, which is nice for those who return booleans and those who are destructive. Now you're thinking with projects We'll also need one additional class to contain all the information about our project, and we'll name it Project, in source/hopemage/project.ooc: ooc Project: class { sourcePath: SourcePath mainFolder: LibFolder init: func (=sourcePath) { if (sourcePath libFolders empty?()) { raise("SourcePath is empty, bailing out!") } mainFolder = sourcePath libFolders first() parseFolder(mainFolder) } parseFolder: func (libFolder: LibFolder) { File new(libFolder path) walk(|f| if (f path endsWith?(".ooc")) { parse(f path) } true ) } parse: func (path: String) { Frontend new(sourcePath, path) } } The interesting part here is the parseFolder method, which walks a whole folder to find .ooc files, and parses all of them. We simply assume the first folder in the source path is the main one - the one we're generating the documentation of in the first place. We've modified Frontend a bit to work with SourcePath: ooc Frontend: class extends OocListener { module: Module sourcePath: SourcePath init: func (=sourcePath, path: String) { (libFolder, spec) := sourcePath split(path) module = Module new(libFolder, spec) parse(path) } // other methods (callbacks, etc.) } We're also calling parse from Frontend's init now. As you can see, the constructor from Module has changed a bit: we want it to know which libFolder it belongs to. ooc Module: class { libFolder: LibFolder types := ArrayList<Type> new() spec: String init: func (=libFolder, =spec) { libFolder add(this) } } Again, there are a few imports you'll need to add. Playing with our new toys Now that we have all the infrastructure to handle source paths, lib folders and projects correctly, let's revamp our main class. Instead of specifying an ooc file to parse, we'll accept a --sourcepath=blah argument for our program. Here's what Homa.ooc looks like now: ooc Homa: class { versionString := "0.1" sourcePath := SourcePath new() init: func handle: func (args: ArrayList<String>) { parseArgs(args) if (sourcePath libFolders empty?()) { usage() exit(0) } else { parse() } } parseArgs: func (args: ArrayList<String>) { args removeAt(0) for (arg in args) { tokens := arg split('=') if (tokens size != 2) { onInvalidArg(arg) continue } match (tokens[0]) { case "--sourcepath" => sourcePath add(tokens[1]) case => onInvalidArg(arg) } } } onInvalidArg: func (arg: String) { "Invalid argument: %s, ignoring.." printfln(arg) } usage: func { "homa v%s" printfln(versionString) "Usage: homa --sourcepath=FOLDERS" println() } parse: func { project := Project new(sourcePath) for (module in project mainFolder modules) { "## %s" printfln(module spec) for (type in module types) { "### %s\n\n```%s```\n\n" printfln(type name, type doc raw) } } } } ooc's match works like a switch, more powerful, but less crazy than, say, Scala's match. We have a way to complain about invalid arguments but it doesn't crash our program. String split isn't in the default imports, so you'll have to import text/StringTokenizer to have it. You can now run hopemage against itself, with homa --sourcepath=source. For additional fun points, run hopemage's output against a markdown tool and open it in your browser: it looks already doc-y! That's it for this time! I hope you enjoy this series, please tell me if I went over some things too quickly, I'll gladly include additional information in these articles.
If you liked this article, please support my work on Patreon!
https://fasterthanli.me/articles/oocdoc-part-4-sourcepath
CC-MAIN-2021-25
refinedweb
1,134
61.97
Playing With Graphs and Logic Systems Let's take a look at how one person builds a graph query engine to see how it works just for fun. Join the DZone community and get the full member experience.Join For Free Recently, I have been playing with graphs a bit, trying to understand them in more depth. Because I learn much better by doing, I thought that I would build a toy graph query engine to see how that works. I loaded the MovieLens small data set into a set of C# classes and started playing with them. Here is what the source data looks like: public class Movie { public int MovieId; public string Title; public string[] Genres; } public class UserRating { public int MovieId; public float Rating; public DateTime Timestamp; public List<string> Tags = new List<string>(); } public class User { public int UserId; public Dictionary<int, UserRating> Ratings = new Dictionary<int, UserRating>(); } var users = new Dictionary<int, User>(); var movies = new Dictionary<int, Movie>(); I’m not dealing with typical issues, such as how to fetch the data, optimizing indexes, etc. Instead, I want to focus solely on the problem of finding patterns in the graph. Here is a simple example of a pattern in the graph: (userA:User)-[:Rated]->(movie:Movie)<-[:Rated]-(userB:User) The syntax is called Cypher, which is commonly used for graph queries. What we are trying to find here is a set of triads. User A who rated a movie that was also rated by user B. The result of this query is a list of tuples matching (userA, movie, userB). This is really similar to the way I remember learning Prolog, so I thought about giving it a shot and solving the problem in this way. The first thing to do is to break the query itself into independent steps: (userA:User)-[:Rated]->(movie:Movie) AND (userB:User)-[:Rated]->(movie:Movie) Note that in this case, the first and second queries are exactly the same, but now they are somewhat easier to reason about. We just need to do the match ups property, here is how I would write the code: var user_and_movie = ( from user in users.Values from rating in user.Ratings.Values select new { user, movie = movies[rating.MovieId] } ).ToList(); var one = user_and_movie; var two = user_and_movie; var results = ( from userA in one join userB in two on userA.movie equals userB.movie where userA.user != userB.user select new { userA = userA.user.UserId, userB = userB.user.UserId, movie = userA.movie.MovieId } ).ToList(); This query can take a while to run because on the small dataset (with just 100,004 recommendations and 671 users), there are over 6.2 million such connections. And yes, I used join intentionally, because it showcases the interesting problem of cartesian product. Now, these queries aren’t really interesting and they can be quite expensive. A better query would be to find the set of movies that were rated by both user 1 and user 306. This can be done as simple as changing the previous code starting location: var one = ( from user in new[] { users[1] } from rating in user.Ratings.Values select new { user, movie = movies[rating.MovieId] } ).ToList(); var two = ( from user in new[] { users[306] } from rating in user.Ratings.Values select new { user, movie = movies[rating.MovieId] } ).ToList(); Again, this is a pretty simple scenario. A more complex one would be to find a list of movies a particular user has not rated that were rated by people who liked the same movies as this user. 
As a query, this will look roughly like this: (userA:User)-[:Rated(Rating >= 4)]->(:Movie)<-[:Rated(Rating >= 4)]-(userB:User) AND (userB:User)-[:Rated(Rating >= 4)]->(notRatedByA:Movie) AND NOT (userA:User)-[:Rated]->(notRatedByA:Movie) Note that this merely specifies the first part and finds the users that liked the same movies as userA. The second part is a bit more complex: we want to find movies rated by the second user and exclude movies rated by the first. Let's break it into its component parts, shall we? Here is the code for the first clause: (userA:User)-[:Rated(Rating >= 4)]->(:Movie)<-[:Rated(Rating >= 4)]-(userB:User) var one = ( from user in new[] { users[1] } from rating in user.Ratings.Values where rating.Rating >= 4 select new { user, movie = movies[rating.MovieId] } ).ToList(); var two = ( from user in users.Values from rating in user.Ratings.Values where rating.Rating >= 4 select new { user, movie = movies[rating.MovieId] } ).ToList(); var clauseA = ( from userA in one join userB in two on userA.movie equals userB.movie where userA.user != userB.user select new { userA = userA.user, userB = userB.user } ).ToList(); As you can see, the output of this code is a set of (userA, userB). Now, let's go to the second one, shall we? We already have a match on userB in this case, so we can start evaluating that. Here is the next stage: (userB:User)-[:Rated(Rating >= 4)]->(notRatedByA:Movie) var clauseB = from matches in clauseA from rating in matches.userB.Ratings.Values where rating.Rating >= 4 select new { matches.userA, matches.userB, notRatedByA = movies[rating.MovieId] }; Now we have the last stage, where we filter out the movies that userA has already rated: var clauseC = from matches in clauseB where !matches.userA.Ratings.ContainsKey(matches.notRatedByA.MovieId) select matches; And now we have the final results. For me, thinking about these kinds of queries as a "fill in the blanks" makes the most sense.
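To make the result tangible, the anonymous-type fields built above (userA, userB, notRatedByA) can be printed directly. A small sketch, assuming the same usings (including System.Linq) and data as the rest of the program:

```csharp
// Print a handful of recommendation candidates for user 1.
foreach (var match in clauseC.Take(10))
{
    Console.WriteLine(
        "user {0}: consider '{1}' (liked by user {2})",
        match.userA.UserId,
        match.notRatedByA.Title,
        match.userB.UserId);
}
```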
https://dzone.com/articles/playing-with-graphs-and-logic-systems
CC-MAIN-2021-31
refinedweb
939
66.23
Introduction to Preprocessor in C The preprocessor lets you define short names, called macros, for longer constructs, so that a piece of code can be written once and reused throughout the program. In C, the preprocessor is not part of the compiler; it is a separate step that transforms the code before it is compiled. It is also called a macro processor because it lets you define code under short names called macros. In C, the preprocessor provides a number of commands that begin with # (the hash symbol). These preprocessor directives are usually placed at the beginning of the program so that the names they define can be used any number of times in the rest of the program. How does the Preprocessor in C work? In the C programming language, preprocessor directives are written using the # hash symbol. When a C program saved in a .c file is built, the file is first processed by the preprocessor; the expanded file is then compiled into an object file (.obj), and the linker links the object files to generate an executable (.exe) file. A preprocessor directive can thus give a single name, called a macro, to a block of code that is defined and declared at the beginning of the program and can then be used any number of times in the entire program. Types of Preprocessor in C The different types of preprocessor directives are as follows: 1. Macros As discussed above, a macro is a piece of code containing a set of statements that do a particular job, or logic that needs to be used many times in the program; we simply use the macro name wherever that logic is needed. Whenever the compiler encounters the macro name in the program, it replaces the name with the code that was defined at the beginning of the program. A macro is created using the #define directive. Let us consider an example of how a macro is defined and used in a program. #define macro_name macro_value Code: #include <stdio.h> #define MAX 8 int main() { printf("To print the numbers using macro definition:\n"); for (int i = 0; i < MAX; i++) { printf("%d \n",i); } return 0; } Output: the numbers 0 through 7, one per line. Explanation: In the above program, we define a macro named "MAX" with the value 8. The program then prints the numbers from 0 up to (but not including) the macro value defined at the beginning. In C, macros are classified into two different types: object-like and function-like macros. Object-like macros are symbolic constants, for example #define PI 3.14 Function-like macros take arguments and expand to an expression that performs some particular operation, for example #define SQUARE(s) s*s Code: #include <stdio.h> #define SQUARE(s) s*s int main() { printf("Welcome to Educba tutorials!\n\n"); int side = 3; int area = SQUARE(side); printf("The area is: %d\n", area); return 0; } Output: "The area is: 9" (after the welcome message). Explanation: In the above program, we define the macro "SQUARE" with an argument, which makes it a function-like macro; the MAX macro in the earlier program, which simply holds the value 8, is an object-like macro. 2. Predefined macros in C In the C programming language, ANSI C provides predefined macros that can be used in programs.
The commonly used predefined macros are as follows: - __DATE__ expands to the current date, as a string in the format "MMM DD YYYY". - __FILE__ expands to the name of the current source file. - __TIME__ expands to the current time, as a string in the format "HH:MM:SS". - __LINE__ expands to the current line number in the program. - __STDC__ expands to 1 when the compiler conforms to the ANSI standard. Let us use all of the above predefined macros in a single program to see what they display. Code: #include <stdio.h> int main() { printf("Below are few predefined macros that are used in C:\n"); printf("This will print the current File name :%s\n", __FILE__ ); printf("This will print the current Date :%s\n", __DATE__ ); printf("This will print the current Time :%s\n", __TIME__ ); printf("This prints the current Line number :%d\n", __LINE__ ); printf("This prints the ANSI standard STDC :%d\n", __STDC__ ); return 0; } Output: the current file name, compilation date and time, the line number, and 1 for __STDC__. Explanation: In the above program, we have used all 5 predefined macros of the ANSI C standard, and each printf line shows the corresponding value. Conclusion In this article, we have seen that the preprocessor in the C programming language lets you give a single name, defined at the beginning of the program and known as a macro, to a piece of code; wherever that code is needed, you simply use the macro name in the program. There are two types of macros: object-like and function-like macros. There are also a few predefined macros provided by the ANSI C standard.
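As a closing illustration that ties together the two kinds of directives covered above, the sketch below defines a function-like LOG macro built on the predefined __FILE__ and __LINE__ macros (the macro names and messages are invented for this example):

```c
#include <stdio.h>

/* Function-like macro that stamps each message with its location using the
   predefined __FILE__ and __LINE__ macros. The do/while(0) wrapper makes the
   macro behave like a single statement after an if/else. */
#define LOG(msg) do { \
    printf("[%s:%d] %s\n", __FILE__, __LINE__, (msg)); \
} while (0)

/* Parenthesized version of the earlier SQUARE macro, safe for expressions */
#define SQUARE(s) ((s) * (s))

int main(void)
{
    LOG("program started");
    printf("SQUARE(3 + 1) = %d\n", SQUARE(3 + 1));  /* expands correctly to 16 */
    LOG("program finished");
    return 0;
}
```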
https://www.educba.com/preprocessor-in-c/
CC-MAIN-2021-04
refinedweb
969
64.85
Menus are a common user interface element in Windows applications. With the exception of simple dialog box–based applications, practically all applications written for Windows provide a menu that enables users to interact with the application. The menu that often appears along the top edge of a Windows application is known as the application’s main menu, or menu bar. Menus that are displayed when you right-click a control are known as shortcut menus, or sometimes context menus. Menus can be viewed as a series of parent-child relationships. An application’s main menu will usually consist of a number of top-level menu items. Typically, these top-level menu items have associated popup child windows that are displayed when the relevant top-level menu item is clicked. Further nesting is possible, as each menu item in a popup menu can have an associated child menu. In general, the .NET Framework will automatically render your menus in a way that’s consistent with the Windows user interface guidelines. However, there are a few user interface conventions for menus that you should follow: Menu items should be associated with a mnemonic character—that is, a character that’s part of the menu item name and that when pressed with the Alt key will invoke the menu item. For example, the top-level File menu is usually associated with F as a mnemonic character; pressing Alt+F on the keyboard typically invokes the File menu. The steps required to provide a mnemonic character are discussed later in this section. Any top-level menu items that invoke commands and don’t create a popup child menu should have an exclamation mark after their name. This alerts the user to the fact that selecting the menu item will result in a command being invoked. Popup menu items that will require additional input from the user, such as populating a dialog box, should have an ellipsis (...) after the menu item name. The following subsections describe the steps required to create both top-level menus and shortcut menus and show you how to use menus to make your programs more user-friendly. There are two basic approaches to creating menus for your applications. The first approach is to use the Visual C# .NET Menu Designer. This is the simplest and most straightforward way to add a basic menu to your application. We’ll also examine how to add menus programmatically, the second approach, so that you can gain a better understanding of the code the Menu Designer generates. To use the Menu Designer, drag a MainMenu control from the Toolbox to your form. The MainMenu control will automatically be attached to the top edge of your form and will also add an icon that represents the menu to an area under your form, as shown in Figure 13-2. As you can see, the menu initially includes a box labeled Type Here. To add a top-level menu item, click on the box, and enter the text for the menu item. After the new menu item has been added to the menu, two boxes will be labeled Type Here: one box on the top level of the menu, and one box that’s a child of the new menu item. You can continue adding menu items until the menu is populated with all the menu items you want. Figure 13-3 shows an example of a menu after several menu items have been added. To add a mnemonic character to a menu item, prefix a letter in the menu item’s text caption with an ampersand (&). The mnemonic character must be unique within its parent menu, so it’s not always possible to use the first letter in each menu item. 
When the menu is displayed, pressing the Alt key and the mnemonic character will select the menu item. A menu will often include a horizontal line known as a separator that serves to group related menu items. To add a separator to a menu, enter a dash (-) as the menu item’s Text property, or right-click a menu item and choose Add Separator from the shortcut menu. You also can rearrange the position of a separator or other menu items by dragging them to the desired position. The Properties window enables you to manipulate the properties for each menu item. Simply click the menu item you want to manage, and the Properties window will be updated with the properties for that menu item. The most commonly used properties for menu items are listed below. Checked Specifies whether a check mark should be displayed next to the menu item. This property is discussed later in this chapter, in the section “Adding Check Marks to Menu Items.” Enabled Enables or disables a menu item. This property is discussed later in this chapter, in the section “Disabling Menu Items.” Mnemonic Returns the mnemonic character for the menu item, or returns 0 if no mnemonic character exists. Name Returns the identifier for the menu item. Parent Returns a reference to the Menu object that’s the current item’s parent. RadioCheck Specifies whether the check mark for a menu item should be replaced by a radio button. Shortcut Specifies a keyboard sequence that executes the associated event handler just as if the menu item were clicked. Text Specifies the text caption associated with the menu item. Visible Specifies whether the menu item is visible. As with other controls in the .NET Framework, all properties that can be accessed through the Properties window can also be accessed through code. In the next section, you’ll learn how to create a menu directly in your code. Although you can easily create a menu using the Windows Forms Menu Designer, it’s a good idea to understand the steps required to create a menu programmatically. A menu bar is created in two steps: first you create MenuItem objects and assemble them into a collection of child menu items, and then you create an instance of the MainMenu class and add the MenuItem objects to it. The following four classes are used to create menus in the System.Windows.Forms namespace: Menu Represents the base class for all other menu and menu item classes MenuItem Represents a menu item in a MainMenu or ContextMenu instance MainMenu Represents a main top-level menu for a form ContextMenu Represents a shortcut menu for a form or control As mentioned, the first step in creating a menu is to create the menu item objects that will populate the menu. After the menu items have been created, they’re assembled to form a menu. Each menu item is the child of a MainMenu or ContextMenu object (for top-level items) or the child of another MenuItem object (for nested menus). The MenuItem class, which has six constructors, is used to create menu items. The commonly used constructor shown here enables you to specify the menu item’s text caption, as well as the event handler that takes care of the Click event: EventHandler fileOpenHandler = new System.EventHandler(fileOpen_Click); MenuItem fileOpen = new MenuItem("&Open...", fileOpenHandler); private void fileOpen_Click(object sender, System.EventArgs e) { } Two other forms of the constructor create menu items that require additional work before they’re usable. 
The simplest of these constructors creates a menu item with an empty caption and no Click event handler, as shown here: MenuItem fileOpen = new MenuItem(); The other simple MenuItem constructor enables you to pass just the menu item’s caption as a parameter, as shown here: MenuItem fileOpen = new MenuItem("&Open..."); A slightly more complex version of the MenuItem constructor enables you to specify a shortcut in addition to the text caption and event handler, as follows: EventHandler fileOpenHandler = new System.EventHandler(fileOpen_Click); MenuItem fileOpen = new MenuItem("&Open...", fileOpenHandler, Shortcut.CtrlO); One MenuItem constructor allows you to specify a caption and an array of submenu items, as the following code shows: // Create an array of menu items MenuItem [] fileMenuItems = { new MenuItem("&Open"), new MenuItem("&Save"), new MenuItem("Save &As"), new MenuItem("E&xit") }; // Create a menu item from the array of submenu items MenuItem fileMenu= new MenuItem("&File", fileMenuItems); The sixth constructor is used in Multiple Document Interface (MDI) applications, which aren’t discussed in this book, and allows you to specify how menu items should be merged. After creating your menu items, you assemble them and add them to a MainMenu object. The MainMenu object is then assigned to your form’s Menu property. There are two constructors for the MainMenu class. The simplest version of the constructor creates a MainMenu object without associating it with any menu items, as shown here: MainMenu mainMenu = new MainMenu(); The second version of the MainMenu constructor enables you to create a main menu that’s associated with an array of MenuItem children that are passed as a parameter to the constructor: MenuItem fileMenuItem = new MenuItem("&File") MenuItem viewMenuItem = new MenuItem("&View"); MenuItem editMenuItem = new MenuItem("&Edit"); MenuItem [] topLevelMenuItemArray = new MenuItem[] { fileMenuItem, viewMenuItem, editMenuItem }; MainMenu mainMenu = new MainMenu(topLevelMenuItemArray); As you’ll recall, the Menu class serves as the base class for all other menu classes. One of the features implemented by the Menu class is its ability to store child menu items in an instance of MenuItemCollection. However, you’ll seldom need to create a reference to this class directly because it’s typically accessed through the MenuItems property, as shown here: Menu.MenuItemCollection items = mainMenu.MenuItems; A more common use of the MenuItemCollection class is simply to use the MenuItems property as an array of MenuItem objects, as shown here: foreach(MenuItem item in mainMenu.MenuItems) { item.Enabled = false; } The Add method is used to add a menu item to the collection. There are five versions of the Add method. 
The simplest version just adds a menu item to the end of the current menu and returns the index position of the new item, as shown here: MenuItem helpMenuItem = new MenuItem("&Help"); int position = mainMenu.MenuItems.Add(helpMenuItem); Another version of the Add method creates a new menu item at the end of the current menu, using the string passed as a parameter for the menu item’s Text property, and returns a reference to the new menu item: MenuItem helpMenuItem = mainMenu.MenuItems.Add("&Help"); You can also use the Add method to create a new menu item that’s associated with a Click event handler, as shown here: EventHandler indexHandler = new EventHandler(helpIndex_Click); MenuItem indexItem = helpMenuItem.MenuItems.Add("&Index", indexHandler); The fourth version of the Add method, shown in the following code, enables you to specify the index that the menu item will occupy. Any menu items currently in the menu will be shifted down if needed. int position = helpMenuItem.MenuItems.Add(0, indexItem); The fifth and final version of the Add method enables you to add a menu item that contains a submenu. For this version of the Add method, you pass the Text property for the new menu item and an array of menu items that will form the submenu. EventHandler helpIndexHandler = new EventHandler(index_Click); EventHandler helpContentsHandler = new EventHandler(contents_Click); EventHandler helpSearchHandler = new EventHandler(search_Click); MenuItem helpIndex = new MenuItem("&Index", helpIndexHandler); MenuItem helpCont = new MenuItem("&Contents", helpContentsHandler); MenuItem helpSearch = new MenuItem("&Search", helpSearchHandler); MenuItem [] helpMenuArray = new MenuItem [] { helpIndex, helpCont, helpSearch }; MenuItem helpMenuItem = mainMenu.MenuItems.Add("&Help", helpMenuArray); The AddRange method is used to add an array of menu items to the collection. This method enables you to potentially add a large number of child menu items with one method call. 
The AddRange method is similar to the last version of the Add method discussed in the preceding paragraph, except that AddRange appends an array of menu items to the current menu, rather than creating a new submenu, as shown here: MenuItem helpMenuItem = mainMenu.MenuItems.Add("&Help"); helpMenuItem.MenuItems.AddRange(helpMenuArray); The Count method is used to retrieve the number of menu items in the collection, as shown here: int topLevelItemCount = Menu.MenuItems.Count; int helpItemCount = helpMenuItem.MenuItems.Count; To remove an item from the collection, pass a reference to the item to be deleted to the Remove method, as shown here: editMenuItem.MenuItems.Remove(editWrap); To remove an item at a specific index, you can use the RemoveAt method, as shown here: editMenuItem.MenuItems.RemoveAt(1); To remove all menu items from the collection, use the Clear method: editMenuItem.MenuItems.Clear(); The following code creates a main menu that has three top-level menu items, each associated with a popup submenu: // Top-level menu MenuItem fileMenuItem = new MenuItem("&File"); MenuItem viewMenuItem = new MenuItem("&View"); MenuItem editMenuItem = new MenuItem("&Edit"); MainMenu mainMenu = new MainMenu( new MenuItem[] { fileMenuItem, viewMenuItem, editMenuItem }); // Event handlers for the File popup menu EventHandler fileOpenHandler = new EventHandler(fileOpen_Click); EventHandler fileSaveHandler = new EventHandler(fileSave_Click); EventHandler fileSaveAsHandler = new EventHandler(fileSaveAs_Click); EventHandler fileExitHandler = new EventHandler(fileExit_Click); // File popup menu MenuItem fileOpen = new MenuItem("&Open...", fileOpenHandler); MenuItem fileSave = new MenuItem("&Save", fileSaveHandler); MenuItem fileSaveAs = new MenuItem("Save &As...", fileSaveAsHandler); MenuItem fileSeparator = new MenuItem("-"); MenuItem fileExit = new MenuItem("E&xit", fileExitHandler); MenuItem [] fileMenuItemArray = new MenuItem [] { fileOpen, fileSave, fileSaveAs, fileSeparator, fileExit }; fileMenuItem.MenuItems.AddRange(fileMenuItemArray); // Event handlers for the View popup menu EventHandler viewHorizHandler = new EventHandler(viewHorizontal_Click); EventHandler viewVertHandler = new EventHandler(viewVertical_Click); // View popup menu MenuItem viewScroll = new MenuItem("&Scroll bars"); MenuItem viewHorizontal = new MenuItem("&Horizontal", viewHorizHandler); MenuItem viewVertical = new MenuItem("&Vertical", viewVertHandler); MenuItem [] scrollMenuItemArray = new MenuItem [] { viewHorizontal, viewVertical }; viewScroll.MenuItems.AddRange(scrollMenuItemArray); viewMenuItem.MenuItems.Add(viewScroll); // Event handlers for the Edit popup menu EventHandler editClearHandler = new EventHandler(editClear_Click); EventHandler editWrapHandler = new EventHandler(editWrap_Click); // Edit popup menu MenuItem editClear = new MenuItem("&Clear", editClearHandler); MenuItem editWrap = new MenuItem("&Word Wrap", editWrapHandler); MenuItem [] editMenuItemArray = new MenuItem [] { editClear, editWrap }; editMenuItem.MenuItems.AddRange(editMenuItemArray); // Assign new MainMenu object to the form's Menu property. Menu = mainMenu; This code begins by creating three menu items and adding them to a MainMenu object as its top-level menu items. Next event handler delegates are created for menu items in the File popup menu and are used in the construction of the individual menu items. After the File menu items have been created, they’re packed into a MenuItem array and added to the menu item collection of fileMenuItem. 
Because fileMenuItem is a top-level menu item, its child menu items will form the popup menu displayed when the menu item is clicked. This process is repeated for the View and Edit popup menus. After all menu items have been created and relationships have been established, the mainMenu object is assigned to the form’s Menu property. There are two ways to handle events for menu items: programmatically and using the Forms Designer. As shown in the previous section, event handler delegates for the Click event can be attached to menu items programmatically during construction. In this section, you’ll learn how to write code to handle additional events from menu items. You’ll also learn how to use the Forms Designer to write the code required to handle menu-related events. Three events are raised by MenuItem objects in response to user actions, as follows: Popup The menu item is about to be displayed. This event is useful when you need to update the status of a menu item dynamically, as will be done in the next section. Select The menu item has been highlighted as a selection but has not yet been clicked. This event is useful when help text is displayed for a potential menu choice, such as text displayed in a status bar. Using the Select event to update status bars is discussed later in this chapter, in the section “Using Status Bar Panels.” Click The menu item has been chosen by the user. Of all the menu-related events, this is the one that you’ll probably use the most, as it indicates that the user has clicked an item on the menu and expects the application to take some sort of action. These menu item events tell you when individual menu items are selected or clicked; however, there’s no menu item event to tell you that a menu has been dismissed. Instead, the Form class raises two events that tell you when the menu is initially displayed and dismissed: MenuStart Raised when a menu initially receives the input focus MenuComplete Raised when the menu loses the input focus The MenuStart event is typically used to manage the form’s user interface. When a menu item is selected, you might want to disable specific controls. Alternatively, you might want to enable controls that are used to provide feedback about the menu selection. For example, the MenuStart event can be used to display a help balloon control that describes the currently selected menu item. The MenuComplete event is used to reverse the action taken during the MenuStart event. MenuComplete is raised when the user has finished using the menu, either because a menu item has been clicked or because the user has abandoned the menu and selected a different object. Menu items can be updated dynamically to reflect the current state of the application. You can enable and disable items, add check marks, and even add and remove menu items at run time. Updating menu items dynamically is an effective way to provide feedback to the user. For example, instead of displaying an error message if the user selects a menu item that’s not allowed in the current context, a more user-friendly approach is to simply disable the menu item. When a menu item is used to enable or disable a property or a feature of your application, a common user interface pattern is to supply a check mark to indicate that the item has been enabled. For example, applications that display a status bar typically provide a menu item that controls the status bar’s visibility. 
When the status bar is visible, the menu item is checked; when the status bar is hidden, the menu item is unchecked. As mentioned, the MenuItem class exposes the Checked property, which is used to add a check mark to the left of the menu item text, as shown here: statusBarMenuItem.Checked = statusBar.Visible; A common pattern is to set the state of menu items when the parent’s menu item raises the Popup event, as shown here: private void editMenuItem_Popup(object sender, System.EventArgs e) { editWordWrap.Checked = textBox.WordWrap; } In this code, the check mark on the editWordWrap menu item is set or cleared depending on the status of the WordWrap property of the textBox object. Menu items that aren’t available due to the current state of your application should be disabled to indicate that they can’t be selected. You can use the Enabled property provided by the MenuItem class to enable and disable menu items, as shown here: private void fileMenuItem_Popup(object sender, System.EventArgs e) { fileSave.Enabled = textBox.Modified; } In this code, the Save menu item is disabled if no changes have been made to the textBox object. You’re not limited to using a single main menu for your application. When applications support multiple tasks or multiple document types, it’s common practice to provide multiple menus, with the proper menu displayed according to the context. Because a menu is associated with your main form through the form’s Menu property, switching to a new menu can be done simply by assigning a new menu to the Menu property, as shown here: private void SwitchToBasicMenu() { Menu = basicMainMenu; } private void SwitchToTextDocumentMenu() { Menu = textDocumentMainMenu; } To add an additional menu for use by your application, drag a menu control from the Toolbox to your form. An additional menu icon will be added under your form in the Forms Designer. To work with a specific menu using the Menu Designer, click the appropriate icon in the Forms Designer. As an example of how menus are used in an application, the companion CD includes SimpleEdit, a small Notepad-like editor. Although this example is fairly small, it illustrates the various ways menus are used in a dynamic application. The SimpleEdit project is a Windows Forms application with two controls on its main form, as follows: A main menu control named mainMenu that provides access to application commands A text box control named textBox that fills the form’s client area and serves as the container for edited text The properties of the top-level menu items in the SimpleEdit project are listed in Table 13-10. Each of the three top-level menu items has an associated popup menu that’s displayed when the top-level menu item is selected. Table 13-11 lists the properties for the menu items on the File, View, and Edit popup menus. The View menu has one child menu item, Scroll Bars; this menu item is in turn the parent of two submenus, Horizontal and Vertical, as shown in Figure 13-4. The textBox control used by the SimpleEdit project is just a basic text box control from the Toolbox, with a few properties set to nondefault values, as listed in Table 13-12. The Dock property is used to attach a control to the edge of its container. By default, this property is set to DockStyle.None. Setting the property to DockStyle.Fill causes the control to expand to fill the entire container. Docking is simply a characteristic that specifies which edge of the current form the control will attach to. 
Values for the Dock property must be set to one of the DockStyle enumeration values, listed in Table 13-13. Many controls expose the Dock property; this feature will be discussed in more detail in Chapter 15. For now, it’s enough that you simply understand that the text box control uses the Dock property to completely fill the client area of the form. The File, View, and Edit popup menus each contain menu items that are updated dynamically as the menus are displayed. The Popup event for each top-level menu item is managed by the event handlers listed in Table 13-14. The event handler methods from Table 13-14 are shown in the following code: private void fileMenuItem_Popup(object sender, System.EventArgs e) { fileSaveMenuItem.Enabled = textBox.Modified; } private void editMenuItem_Popup(object sender, System.EventArgs e) { editWordWrapMenuItem.Checked = textBox.WordWrap; } private void viewScrollBarMenuItem_Popup(object sender, System.EventArgs e) { // Horizontal scrolling is n/a if the text box control // has word-wrapping enabled. hScrollMenuItem.Enabled = !textBox.WordWrap; switch(textBox.ScrollBars) { case ScrollBars.Both: hScrollMenuItem.Checked = true; vScrollMenuItem.Checked = true; break; case ScrollBars.Vertical: hScrollMenuItem.Checked = false; vScrollMenuItem.Checked = true; break; case ScrollBars.Horizontal: hScrollMenuItem.Checked = true; vScrollMenuItem.Checked = false; break; default: hScrollMenuItem.Checked = false; vScrollMenuItem.Checked = false; break; } } The handler for the File menu Popup event, fileMenuItem_Popup, enables or disables the Save menu item, depending on the state of the text box control. If the text box control’s Modified property returns true, changes have been made to the contents of the text box, and the Save menu item is enabled. If the Modified property returns false, the contents of the text box control haven’t been modified, and the Save item is disabled. The editMenuItem_Popup method, the handler for the Edit menu’s Popup event, controls whether a check mark is placed next to the Word Wrap menu item. If the text box control has its WordWrap property enabled, the check mark is set. If the property is disabled, the check mark is cleared. The final handler for a Popup event, viewScrollBarMenuItem_Popup, is more complex than the previous two handlers because it updates two menu items based on the value of the text box control’s ScrollBars property. The value of the ScrollBars property is tested to determine its value, and then check marks are either set or cleared for the vScrollMenuItem and hScrollMenuItem objects. In addition, the text box control doesn’t support horizontal scrolling if the WordWrap property is set to true, so the hScrollMenuItem object is enabled or disabled based on this property. As you’ll recall, the Click event is raised when the user chooses a menu item. The SimpleEdit application exposes eight commands through its main menu, as listed in Table 13-15. Most of the commands available to users of the SimpleEdit application are mapped directly to properties exposed by the text box control. The only exceptions are the menu items in the File popup menu, which are discussed in the next section. 
The following code includes the Click event handlers for menu items on the Edit and View popup menus: private void editClearMenuItem_Click(object sender, System.EventArgs e) { _fileName = ""; _pathName = ""; Text = _appName + " - " + "[Empty]"; textBox.Clear(); } private void editWordWrapMenuItem_Click(object sender, System.EventArgs e) { textBox.WordWrap = !textBox.WordWrap; } private void hScrollMenuItem_Click(object sender, System.EventArgs e) { if(hScrollMenuItem.Checked) { // Clear horizontal scroll bars. if(textBox.ScrollBars == ScrollBars.Both) textBox.ScrollBars = ScrollBars.Vertical; else textBox.ScrollBars = ScrollBars.None; } else { // Set horizontal scroll bars. if(textBox.ScrollBars == ScrollBars.Vertical) textBox.ScrollBars = ScrollBars.Both; else textBox.ScrollBars = ScrollBars.Horizontal; } } private void vScrollMenuItem_Click(object sender, System.EventArgs e) { if(vScrollMenuItem.Checked) { // Clear vertical scroll bars. if(textBox.ScrollBars == ScrollBars.Both) textBox.ScrollBars = ScrollBars.Horizontal; else textBox.ScrollBars = ScrollBars.None; } else { // Set vertical scroll bars. if(textBox.ScrollBars == ScrollBars.Horizontal) textBox.ScrollBars = ScrollBars.Both; else textBox.ScrollBars = ScrollBars.Vertical; } } In the editClearMenuItem_Click method, the text box control’s Clear method is called to remove the contents of the control. In addition, member variables that track the current file name are reset, and the application’s caption is reset to indicate that no file is currently open. The editWordWrapMenuItem_Click method toggles word wrapping by simply inverting the property’s current value. The hScrollMenuItem_Click and vScrollMenuItem_Click methods are used to manage the scroll bars attached to the text box control. The process of enabling and disabling scroll bars requires some additional code because the ScrollBars property is used to control the vertical and horizontal scroll bars. The File popup menu is used primarily for opening and saving files, as well as for closing the application, which is traditionally included as an item on this menu. 
The following code includes the Click event handlers for the File menu: private void fileOpenMenuItem_Click(object sender, System.EventArgs e) { OpenFileDialog dlg = new OpenFileDialog(); dlg.Filter = _fileFilter; dlg.InitialDirectory = Application.CommonAppDataPath; dlg.CheckFileExists = false; if(dlg.ShowDialog() == DialogResult.OK) { string path = dlg.FileName; if(File.Exists(path) != true) { StreamWriter writer = File.CreateText(path); writer.Close(); } _fileName = Path.GetFileName(path); _pathName = path; Text = _appName + " - " + _fileName; StreamReader reader = new StreamReader(path); textBox.Text = reader.ReadToEnd(); reader.Close(); } } private void SaveTextToPath(string path) { StreamWriter writer = new StreamWriter(path); writer.Write(textBox.Text); writer.Close(); textBox.Modified = false; } private void SaveTextToNewPath() { SaveFileDialog dlg = new SaveFileDialog(); dlg.Filter = _fileFilter; dlg.InitialDirectory = Application.CommonAppDataPath; if(dlg.ShowDialog() == DialogResult.OK) { string path = dlg.FileName; if(File.Exists(path) == true) { DialogResult result = MessageBox.Show(_overwriteWarning, _appName, MessageBoxButtons.OKCancel, MessageBoxIcon.Question); if(result == DialogResult.Cancel) return; } _fileName = Path.GetFileName(path); _pathName = path; Text = _appName + " - " + _fileName; SaveTextToPath(path); } } private void fileSaveMenuItem_Click(object sender, System.EventArgs e) { if(_fileName != null && _fileName != "") SaveTextToPath(_pathName); else SaveTextToNewPath(); } private void fileSaveAsMenuItem_Click(object sender, System.EventArgs e) { SaveTextToNewPath(); } private void fileExitMenuItem_Click(object sender, System.EventArgs e) { Close(); } The SimpleEdit project uses common dialog boxes to enable users to select files used by the application. As discussed in Chapter 11, the Application.CommonAppData property is used as a starting location for all file operations; when a common file dialog box is displayed to the user, the initial directory is set to that path. The fileOpenMenuItem_Click method creates an instance of OpenFileDialog, which enables the user to use a common file dialog box to select a file that’s to be loaded into the text box control. If the user selects a file and clicks OK, the file path is tested to determine whether the file exists by passing the path to the File.Exists method. If the file exists, an instance of StreamReader is created from the selected file’s path, and the ReadToEnd method is called to retrieve the file’s contents. If the file doesn’t exist, it’s assumed that the user wants to work with a new file, and the file is created. The SaveTextToPath and SaveTextToNewPath methods are used by Click event handlers to save the contents of the text box. The SaveTextToPath method is used by SimpleEdit to write the contents of the text box to a disk file. The method accepts a destination file path as a parameter. It then creates an instance of StreamWriter using the path, which is used to write the contents of the text box to the destination file. After the file has been closed, the text box control’s Modified flag is reset to indicate that all changes have been saved. The SaveTextToNewPath method creates an instance of SaveFileDialog, which enables the use of a common file dialog box to select a destination for the text box control’s contents. After a destination file has been selected, the SaveTextToPath method is called to actually write the contents to the file. 
The fileSaveMenuItem_Click event handler will call the SaveTextToPath method if a destination file has already been determined. This will be the case when a file has previously been opened for editing. If no file has been previously selected, the SaveTextToNewPath method will be called, and the user must select a destination path before the contents are saved. The fileSaveAsMenuItem_Click method always calls the SaveTextToNewPath method.
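As a brief sketch of the MenuStart and MenuComplete events mentioned earlier (the statusBar control and the hint text here are illustrative assumptions, not part of the SimpleEdit sample):

using System;
using System.Windows.Forms;

public class MainForm : Form
{
    private StatusBar statusBar = new StatusBar();

    public MainForm()
    {
        Controls.Add(statusBar);
        // Subscribe to the Form-level menu events.
        MenuStart += new EventHandler(MainForm_MenuStart);
        MenuComplete += new EventHandler(MainForm_MenuComplete);
    }

    private void MainForm_MenuStart(object sender, EventArgs e)
    {
        // The menu has received the input focus; give the user some feedback.
        statusBar.Text = "Select a command";
    }

    private void MainForm_MenuComplete(object sender, EventArgs e)
    {
        // The menu has lost the input focus; reverse the change.
        statusBar.Text = "";
    }
}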
https://etutorials.org/Programming/visual-c-sharp/Part+III+Programming+Windows+Forms/Chapter+13+User+Input+and+Feedback/Using+a+Main+Menu+with+Forms/
CC-MAIN-2022-21
refinedweb
4,905
52.9
Following on from the Swipe Gesture proposal, would like to propose that a long-press recognizer be added to Xamarin.Forms. Same basic API design and commanding as the existing TapGestureRecognizer and Pan/Pinch recognizers. Two new bindable properties: /// <summary> /// Gets or sets the maximum movement in pixels before the gesture is canceled. /// </summary> public int AllowableMovement { get { return (int)GetValue(AllowableMovementProperty); } set { SetValue(AllowableMovementProperty, value); } } /// <summary> /// Gets or sets the minimum press duration in milliseconds. /// </summary> public int MinimumPressDuration { get { return (int)GetValue(MinimumPressDurationProperty); } set { SetValue(MinimumPressDurationProperty, value); } } Contextual actions on views e.g. long press for context menu, delete etc. On desktop platforms with a mouse, long-press could optionally be resolved to respond to a right-mouse click. We have already built a custom long-press recognizer which is production-grade and working across iOS, Android and UWP. Happy to implement if accepted. I would suggest using something other than pixels for the movement sensor as there are too many different screen sizes with different pixels densities. I'd prefer to see this new API use the same units as Xamarin Forms does. Otherwise, like it very much. Is there a branch for this already? I've been working with gestures and could help implement this. Yes, have already built this on all three platforms and shipped it. If approved, can put a PR up fairly quickly. Would be great to get feedback though and work along with others to improve etc. If approved, can put a branch up. Just waiting on approval for the SwipeGestureRecognief first. Yes this is very much needed. Does anyone know a way to achieve this without a custom renderer and without that MR gestures library? Any updates on this topic? Is it already approved? I'm looking for a way to implement Long Press in a ListView jep, same here!
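A usage sketch of the API being proposed (LongPressGestureRecognizer does not exist in Xamarin.Forms today; the default values, the ShowContextMenu callback and the myView element below are illustrative assumptions only):

// Hypothetical backing fields for the two proposed bindable properties.
public static readonly BindableProperty MinimumPressDurationProperty =
    BindableProperty.Create(nameof(MinimumPressDuration), typeof(int), typeof(LongPressGestureRecognizer), 500);

public static readonly BindableProperty AllowableMovementProperty =
    BindableProperty.Create(nameof(AllowableMovement), typeof(int), typeof(LongPressGestureRecognizer), 10);

// Possible consumption, mirroring the existing TapGestureRecognizer pattern:
var longPress = new LongPressGestureRecognizer
{
    MinimumPressDuration = 800,  // milliseconds
    AllowableMovement = 10,      // cancel the gesture if the pointer moves further than this
    Command = new Command(() => ShowContextMenu())
};
myView.GestureRecognizers.Add(longPress);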
https://forums.xamarin.com/discussion/comment/285701/
CC-MAIN-2019-43
refinedweb
309
50.63
Connect your app to ADFS First, provide this information to your ADFS administrator: - Realm Identifier: urn:auth0:YOUR_TENANT - Endpoint: Federation Metadata. Scripted setup. What the script does 1. Creates the Relying Party on ADFS The script creates the relying party on ADFS, as follows: 2. Creates rules to output common attributes The script also creates rules to output the most common attributes, such as email, UPN, given name, or surname: Manual setup If you don't feel comfortable executing the script, you can follow these manual steps:..., enter the following value in the textbox and click Next. Add a Relying party trust identifier with the following value and click Add and then Next. urn:auth0:YOUR_TENANT Leave the default option (Permit all users...) and click Next. Click Next. (Optional) Adding additional LDAP attributes The mappings created in step 15 are the most commonly used, but if you need additional LDAP attributes with information about the user, you can add more claim mappings. Create a row for every additional LDAP attribute you need, choosing the attribute name in the left column and the desired claim type in the right column. If the claim type you are looking for doesn't exist, you have two options: - Type a namespace-qualified name for the new claim (i.e.). - Register a new claim type (under AD FS | Services | Claim Descriptions on the ADFS admin console), and use the claim name in the mapping. Auth0 will use the name part of the claim type (i.e. department in) as the attribute name for the user profile. Next Steps Now that you have a working connection, the next step is to configure your application to use it. You can follow our step-by-step quickstarts or use our libraries and API directly.
https://auth0.com/docs/connections/enterprise/adfs
CC-MAIN-2017-51
refinedweb
295
61.87
Threads of execution. Methods of Thread class: join(), isAlive(), getPriority(), setPriority(). Examples It is recommended that you familiarize yourself with the following topics before exploring this topic: - Java language tools for working with threads of execution. Thread class. Runnable interface. Main thread of execution. Creating a child thread - Methods of the Thread class: getName(), start(), run(), sleep(). Examples Contents - 1. Why is it necessary for the main (calling) thread to finish first? Explanation. Method join(). General form - 2. Method join(). Example - 3. Method isAlive(). Determine if a thread is running. Example - 4. Methods setPriority(), getPriority(). Set and get the priority of the thread. Example - Related topics Search other websites: 1. Why is it necessary for the main (calling) thread to finish first? Explanation. Method join(). General form Let two threads be given: - main (caller). A child thread is started (called) from the main thread; - child (called). In order to get the result (object, data) from the child thread, the main (calling) thread must end last. If the main thread completes faster than the child thread, then most likely zero (initial) or intermediate values will be received from the child thread, which is error. In order for the main thread to end last in the Java language, the join() method is used. The general form of the method is as follows: public final void join() throws InterruptedException here - InterruptedException – the class of the exception that is thrown when some other thread has interrupted the current thread. After this exception is thrown, the interrupted status of the current thread is reset. ⇑ 2. Method join(). Example Task. Implement a call to a child thread from the main thread. The maximum value of an array of integers is determined in the child thread. The result (maximum value) should be returned to the main thread and displayed. Solution. A separate class is developed to create a child thread. In our case, the class is named Thread1. All necessary elements are declared in this class: - thr reference to a child thread of type Thread; - AI array of integers; - the variable maximum, which is the result of the thread; - constructor. The constructor performs the initial initialization of the array, as well as the creation and launch of the child thread. The thread is started by the start() method. If the constructor does not implement the creation and launch of a child thread, then the methods of the class instance will be executed in the main thread; - the run() method, which is defined in the Runnable interface. Since Thread1 implements this interface, you must implement the run() method in it. The run() method contains the main code for solving our task – the code for finding the maximum value; - accessor methods for class fields, the value of which is used in the main thread. In this task, the main thread is the main() function, in which you need to perform the following steps: - declare the array to be tested; - create a child thread (an instance of the Thread1 class) and get a reference to it; - run the join() method so that the main thread ends up last; - after the completion of the child thread, read the result obtained in that thread. The text of the program is as follows. 
// The class that implements the child thread in the given task class Thread1 implements Runnable { private Thread thr; // reference to the child thread private int[] AI; private int maximum; // Thread execution result // Constructor - gets an array of integers public Thread1(int[] _AI) { // Array initialization AI = _AI; // Create a thread thr = new Thread(this, "Thread1."); // Start a thread of execution thr.start(); } // The method in which the thread execution code is written. // In our case, the code for finding the maximum value // and filling in the variable maximum is entered. public void run() { int max = AI[0]; for (int i=1; i<AI.length; i++) if (max<AI[i]) max = AI[i]; maximum = max; } // Access methods for class fields public Thread getThread() { return thr; } public int getMax() { return maximum; } } public class TrainThreads2 { public static void main(String[] args) { // 1. Declare the array under test int[] AI = { 2, 3, 4, 8, -1 }; // 2. Create a child thread, get a reference to it Thread1 t1 = new Thread1(AI); // 3. Read the result try { // Waiting for the end of the thread t1 is mandatory, // otherwise you can get zeros t1.getThread().join(); } catch (InterruptedException e) { System.out.println("Error."); } // Read the result after the end of thread t1 System.out.println("max = " + t1.getMax()); } } If you remove the code in the main() function try { // Waiting for the end of the thread t1 is mandatory, // otherwise you can get zeros t1.getThread().join(); } catch (InterruptedException e) { System.out.println("Error."); } then the following result will be obtained max = 0 This means that the main thread exits first, without waiting for the child thread to finish. In this case, the original value of the variable maximum in the instance t1 is returned from the child thread. And this is a mistake. Program execution result max = 8 ⇑ 3. Method isAlive(). Determine if a thread is running. Example The isAlive() method is designed to determine the existence (execution) of a thread. The general form of the method is as follows: final boolean isAlive() The method returns true if the thread on which the method is called is still running. Otherwise, it returns false. Example. In the example, the isAlive() method waits for the child thread to terminate in order to obtain the result. In the child thread, the sum of the elements of an array of double type is calculated. To get the correct sum, you need the main thread to finish last. // A class that encapsulates the thread of execution.
class SumArray implements Runnable { Thread thr; // reference to the current thread double[] AD; // internal array reference double summ; // thread result // SumArray class constructor - gets the thread name and array SumArray(double[] AD, String name) { // Assign external reference to AD array this.AD = AD; // Create an instance of thr and set it to the name of the thread of execution thr = new Thread(this, name); // Start the thread for execution thr.start(); // The run() method is called } // The entry point into the thread is the run() method public void run() { // Message about the beginning of the thread execution - calculating the sum of the array System.out.println("SumArray.run() - begin"); // in the stream, calculate the sum of the elements of the AD array // and write it to the summ variable summ = 0; for (int i=0; i<AD.length; i++) { try { // pause so that other threads can take control Thread.sleep(50); summ = summ + AD[i]; } catch (InterruptedException e) { System.out.println(e.getMessage()); } } // Thread completion message System.out.println("SumArray.run() - end"); } // A method that returns the current thread Thread getThread() { return thr; } // Method that returns the sum of elements double getSumm() { return summ; } } public class Threads { public static void main(String[] args) { // Demonstration of the isAlive() method // 1. Declare an array for which you want to calculate the sum double[] AD = { 1.1, 2.2, 3.3, 4.4, 5.5, 6.6 }; // 2. Create the instance of SumArray class SumArray sa = new SumArray(AD, "SumArray thread"); // thread is executed // 3. Call the isAlive() method in a loop // to wait for the sa stream to end while (sa.getThread().isAlive()); // 4. After the end of the child stream, get the sum double summ = sa.getSumm(); System.out.println("summ = " + summ); } } In the above example, the delay is done by calling while (sa.getThread().isAlive()); If you comment out this line, then the program can return a zero value of the sum. This is because the main thread will finish before the child thread finishes. Program execution result SumArray.run() - begin SumArray.run() - end summ = 23.1 ⇑ 4. Methods setPriority(), getPriority(). Set and get the priority of the thread. Example As you know, different priorities can be set for threads of execution. High priority threads get most of the CPU time to run. In the Java language, it is possible to set the priority of threads. There are two methods for this: - setPriority() – sets the value of the priority of the thread; - getPriority() – reads (gets) the priority value of the thread. In the Java documentation, the general form of methods is as follows: final void setPriority(int level) final int getPriority() here - level – the priority level, which is set within the constants from MIN_PRIORITY to MAX_PRIORITY. The value of MIN_PRIORITY= 1, the value of MAX_PRIORITY = 10. If you need to set the default priority value, then the static constant NORM_PRIORITY is used for this, which is 5. Example. The example demonstrates the creation of three threads with different priorities. An array of random integers is generated in each thread. To encapsulate the thread of execution, the RandomArray class is implemented, which contains the following components: - internal field thr – reference to the current thread of execution; - internal field AI – reference to the array of numbers; - constructor RandomArray(). 
The priority of the thread is set in the constructor and the thread is launched for execution; - the run() method, which generates an array of random numbers. This method is the entry point to the thread; - accessor methods for internal fields: getThread() and getArray(); - getPriority() method, which returns the priority of the thread of execution. This method calls the thr instance method of the same name to get the numeric value of the priority. // The class that encapsulates the thread of execution class RandomArray implements Runnable { private Thread thr; // reference to the current thread private int[] AI; // an array of numbers // Class constructor SumArray. // Constructor gets the following parameters: // - name - name of thread; // - size - the size of the array to be generated in the thread; // - priority - thread priority. RandomArray(String name, int size, int priority) { // 1. Create the instance of thread thr = new Thread(this, name); // 2. Allocate memory for array AI AI = new int[size]; // 3. Set priority // Checking the priority value for correctness if ((priority>=Thread.MIN_PRIORITY)&&(priority<=Thread.MAX_PRIORITY)) thr.setPriority(priority); else // if priority is incorrect, then set the default priority thr.setPriority(Thread.NORM_PRIORITY); // Start the thread for execution thr.start(); // the run() method is called - an array of numbers is generated } // The entry point into the thread is the run() method public void run() { // Message about the beginning of the thread execution - the name of the thread is indicated System.out.println(thr.getName() + " - begin"); // in the thread, fill the array of integers with values from 1 to AI.length for (int i=0; i<AI.length; i++) { try { // pause so other threads can take control Thread.sleep(10); AI[i] = (int)(Math.random()*AI.length+1); } catch (InterruptedException e) { System.out.println(e.getMessage()); } } // Thread completion message System.out.println(thr.getName() + " - end"); } // Method that returns a reference to the current stream Thread getThread() { return thr; } // Method that returns a reference to the array of numbers int[] getArray() { return AI; } // Method that returns the priority of a thread int getPriority() { return thr.getPriority(); } // Method that displays the AI array on the screen void Print(String text) { System.out.print(text + "."); System.out.print("AI = "); for (int i=0; i<AI.length; i++) { System.out.print(AI[i] + " "); } System.out.println(); } } public class Threads { public static void main(String[] args) { try { // 1. Create three threads with different priorities RandomArray A1 = new RandomArray("A1", 1000, 7); RandomArray A2 = new RandomArray("A2", 1000, 2); RandomArray A3 = new RandomArray("A3", 1000, 22); // Wait for threads to finish A1.getThread().join(); A2.getThread().join(); A3.getThread().join(); // 2. Display thread priorities for control System.out.println("A1 priority = " + A1.getPriority()); System.out.println("A2 priority = " + A2.getPriority()); System.out.println("A3 priority = " + A3.getPriority()); } catch (InterruptedException e) { System.out.println(e.getMessage()); } } } Program execution result A2 - begin A3 - begin A1 - begin A1 - end A3 - end A2 - end A1 priority = 7 A2 priority = 2 A3 priority = 5 ⇑ Related topics - Multitasking. Threads of execution. Basic concepts - Java language tools for working with threads of execution. Thread class. Runnable interface. Main thread of execution. 
Creating a child thread - Methods of the Thread class: getName(), start(), run(), sleep(). Examples - Examples of solving tasks on threads of execution (Threads). Working with files in streams. Sorting in streams ⇑
https://www.bestprog.net/en/2021/01/21/java-threads-of-execution-methods-of-thread-class/
CC-MAIN-2022-27
refinedweb
2,043
56.05
White Balance¶ Corrects the exposure of an image. A color standard can be specified. plantcv.white_balance(img, mode='hist', roi=None) returns corrected_img Parameters: - img - RGB (or grayscale, though not recommended) image data on which to perform the correction - mode - either 'hist' or 'max'. If 'hist' (default) method is used a histogram for the whole image or the specified ROI is calculated, and the bin with the most pixels is used as a reference point to shift image values. If 'max' is used as a method, then the pixel with the maximum value in the whole image or the specified ROI is used as a reference point to shift image values. - roi - A list of 4 points (x, y, width, height) that form the rectangular ROI of the white color standard. If a list of 4 points is not given, the whole image is used (default: None) Context: - Used to standardize exposure of images before thresholding Original image from plantcv import plantcv as pcv # Set global debug behavior to None (default), "print" (to file), or "plot" (Jupyter Notebooks or X11) pcv.params.debug = "print" # Corrects image based on color standard and stores output as corrected_img corrected_img = pcv.white_balance(img, mode='hist', roi=(5, 5, 80, 80)) Corrected image
https://plantcv.readthedocs.io/en/latest/white_balance/
CC-MAIN-2019-18
refinedweb
208
57.61
A DESCRIPTION OF THE REQUEST : a call to File.isHidden costs a call to getBooleanAttributes, even on Unix: public boolean isHidden() { SecurityManager security = System.getSecurityManager(); if (security != null) { security.checkRead(path); } return ((fs.getBooleanAttributes(this) & FileSystem.BA_HIDDEN) != 0); } it should end: return fs.isHidden(this); and UnixFileSystem should change like this: + public boolean isHidden(File f) { + return f.getName().startsWith("."); + } public int getBooleanAttributes(File f) { - int rv = getBooleanAttributes0(f); + return getBooleanAttributes0(f); - String name = f.getName(); - boolean hidden = (name.length() > 0) && (name.charAt(0) == '.'); - return rv | (hidden ? BA_HIDDEN : 0); } (or you could just make getBooleanAttributes the native method, as with the Win(32|NT)FileSystem classes.) JUSTIFICATION : getBooleanAttributes is pretty slow. if you're scanning a large file system, and trying to ignore hidden files in a platform-independent way, the calls to getBooleanAttributes can take 10% of your time, for nothing. this encourages Unix developers to ditch File.isHidden in favor of startsWith("."), and Windows users potentially lose out. EXPECTED VERSUS ACTUAL BEHAVIOR : EXPECTED - i'd like to see File.isHidden cost no more on Unix than startsWith("."), but still do the right thing on Windows. one shouldn't pay for what one doesn't use. ACTUAL - Unix users pay for getBooleanAttributes, the JNI transition, and the stat(2), all for nothing. CUSTOMER SUBMITTED WORKAROUND : calling startsWith(".") instead of File.isHidden, but that sucks for Windows users, as if their lives weren't bad enough already.
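A small sketch of the kind of traversal the justification describes, where File.isHidden() is called once per directory entry (the path handling and the counting are arbitrary, illustrative choices):

import java.io.File;

public class HiddenScan {
    // Counts non-hidden files under a directory tree.
    static long countVisible(File dir) {
        File[] entries = dir.listFiles();
        if (entries == null) return 0;
        long n = 0;
        for (File f : entries) {
            // Portable check; on Unix this currently costs a native
            // getBooleanAttributes call (a stat) per entry.
            if (f.isHidden()) continue;
            // The submitter's workaround, with Unix-only semantics:
            // if (f.getName().startsWith(".")) continue;
            if (f.isDirectory()) {
                n += countVisible(f);
            } else {
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countVisible(new File(args[0])));
    }
}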
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=6716072
CC-MAIN-2022-33
refinedweb
237
54.39
Title: PIC18F4520 USART Lab - Virtual Terminal (EEVblog Electronics Community Forum, EDA => Proteus board) Post by: leovivaldi on November 25, 2017, 12:07:29 pm Hi, I am using the PIC18F4520 on Proteus 8. I want to use the USART to display the 'U' character all the time on the virtual terminal. The baud rate is 19200, and I changed it on the virtual terminal because it was displaying wrong characters with the baud rate set to 9600 (default). I am getting 4 dashes at a time (----) instead of Us. See attached. Can you help and explain why I am getting this? Or give a hint or point to the part in the source code that is causing it to do this? The source code is: #pragma config OSC = HS //set Osc mode to HS high speed clock #pragma config WDT = OFF // set watchdog timer off #pragma config LVP = OFF // Low Voltage Programming Off #pragma config DEBUG = OFF // Compile without extra Debug compile Code // Include Files #include <p18f4520.h> // Device used is the PIC18F4520 #include <delays.h> // Include delays headers void main (void) { TXSTA = 0x24; // Select high baud rate, 8 bit SPBRG = 64; // 19200 bps, 20MHz clock TXSTAbits.TXEN = 1; // Transmit enable RCSTAbits.SPEN = 1; // Enable serial port while(1){ while (PIR1bits.TXIF == 0) {;} // Wait until Peripheral Interrupt Request Register Transmit Flag is set // (transmit complete, TXREG empty) TXREG = 'U'; Delay10KTCYx(50); } }
https://www.eevblog.com/forum/proteus/pic184520-usart-lab-virtual-terminal/?action=printpage;PHPSESSID=vq21bmk8hthc5hn8djdfh8p6o0
CC-MAIN-2021-49
refinedweb
257
60.85
Why are imported .js files duplicated in nested .vue components? I have made a small app with about 10 custom components that are in .vue files. Some components however need to use a very big library (mapbox-gl). I did this by doing 'import * as mapboxgl from ‘mapbox-gl’ in these.vue files. Some other .vue files however need to use these mapbox-components. It looks to me that each .vue file ends up in its own .js file in ‘dist/js’, is that correct? All these files are huge, so it looks like each time the entire mapbox-gl.js is included for each .vue file that includes mapbox-gl directly or even uses a component that does! This seems very inefficient, is there a better way to do this ? - benoitranque last edited by This would be a webpack issue. The expected behavior is the library being loaded and added to the bundle only once. i didn’t change anything in the default webpack configuration. By using webpack-bundle-analyzer, I can see that indeed in each generated js file the entire mapbox-gl.js is included. When I disable the import of the component that uses mapbox-gl inside the other components, these files are MUCH smaller, so it’s entirely because of doing the import. Am I doing something wrong? Are vue components that are used by other components maybe not meant to import big libraries? I think the question is better suited to stackoverflow. But do let us know if you find a solution. - rstoenescu Admin last edited by @danielo515 If you just need a few methods from lodash, it’s best if you npm install just those. Example with “map”: npm install --save lodash.map, then import map from 'lodash.map' But yet again, the browser support for lodash/underscore equivalents is top-notch, not to mention also way faster since the equivalents are native functions. I strongly suggest you go with the native support rather than bloat websites/apps. Example with plain JS on “map”: myArray.map(…). It’s that easy..
https://forum.quasar-framework.org/topic/1799/why-are-imported-js-files-duplicated-in-nested-vue-components/1
CC-MAIN-2019-22
refinedweb
347
68.97
Red Hat Bugzilla – Bug 130866 'gthumb --import-photos' fails with mounted usb-storage device Last modified: 2013-03-13 00:46:31 EDT Description of problem: gnome-volume-manager will invoke 'gthumb --import-photos' when you plug in a camera and agree to import photos into your photo album. However, this fails altogether for cameras that implement USB mass storage, since they have already been mounted (the check for 'dcim/' is what triggers the import). What is the right solution here? Should gthumb handle this (how can it know where to copy files from)? Or gphoto2 (same question)? Or should gnome-volume-manager be smarter? Version-Release number of selected component (if applicable): gphoto2-2.1.4-2.1 gthumb-2.4.1-1 gnome-volume-manager-0.9.9-2 How reproducible: 100% Steps to Reproduce: 1. Plug in a USB mass storage camera CCing dave on this. I'm guessing that it is best to for g-v-m not to mount the drive if import is selected. Dsrt on #gnome-devel is working on a dbus gphoto2 daemon which would allow multipule applications to access a device at once but I am not sure if we can get this in fc3 or not. It would also involve porting gthumb. Another option is to turn off auto importing alltogether for this release. forgot to cc david Uhm, maybe I'm missing something here. 1. If your camera is USB Mass Storage based g-v-m will think it's a disk and mount it. The presence of a DCIM tree will trigger the "You have new pictures" dialog that eventually takes you to 'gthumb --import-photos'. This will also be true for card readers with media that has a DCIM tree. I take this is what this bug is about. 2. If your camera is not based on USB Mass Storage and it got the hal capability 'camera' g-v-m also launch 'gthumb --import-photos' (Btw, I've written a hal callout that looks at /etc/hotplug/usb.usermap to use this) The bug is really upstream in g-v-m; it only gives *one* program for two *different* uses. Suggestion to write patch to g-v-m that enables the program to contain the HAL UDI (I need to write a few other small patches so I can do this and I'm confident it will be merged upstream) and write a wrapper for gthumb that does either 'gthumb /media/usbdisk1' or 'gthumb --import-photos' depending. Then the g-v-m capplet would contain 'launch-gthumb %h' and all would be well. Long term solution is more complicated, but this will at least make it work well for all digital cameras and card readers that are USB Storage + all digital cameras supported by gphoto2. Thoughts? Ok, I see. What I was missing was the discussion about unmounting and using gphoto2; I see now. This is not really feasible; we want to support card readers as well and only very few cameras support both USB Mass Storage and e.g. PTP as used by gphoto2. I got my patch merged upstream in g-v-m after nagging the maintainer; he even released 0.9.10. I've also built the callouts into hal so we properly detect gphoto2-based cameras. So as I see it only the following things need to be done 1. Add X-Red-Hat-Base to Categories in .desktop file (magicdev was in base) 2. Install the gvm-gthumb wrapper (there is one at) 3. Tweak defaults to point to gvm-gthumb wrapper (might come up with a better name than gvm-gthumb) Re: Comment #3 Can you explain scenario 1 a little more. I get the dialog with cancel/import as the options. I select import and gthumb is just completely confused because it seems to be expecting a camera and not a mounted filesystem. Gthumb's import Photos dialog shows up... with a no camera detected icon.. 
a destination as /home/username and blank film and categorie fields. I try to hit import and I get a 'could not import photos: no camera detected' dialog. So it appears to me like hal and g-v-m are doing the clever thing and starting gthumb. but gthumb seems to be completely confused by the idea of mounted media with a dcim directory and looks to be expecting a camnera it can probe with libgphoto. /media/usbdrive/dcim exists and there are photos on the disk. Do i need to open up a bug against gphoto about this? What is the exaxct command that gets run to call gthumb in this case? does --import-photos take a location argument? How does gthumb know to look at /media/usbdrive/dcim ? -jef Hi, this is fixed in Rawhide; you'll need gnome-volume-manager 0.9.10-2 and hal-0.2.97.cvs20040901-1. If you have changed the defatuls in the "Removable Storage" preferences capplet, you need to set 'gthumb-import %h' as the command to invoke when importing digital photo albums. confirm fixed gthumb-import %h opens in the directory. One thing.. do you want it to open into the dcim subdirectory by default? instead of the top directory of the media? -jef David, does all photo media have the dcim subdirectory? If so then yes. Actually I can just put a test in the script to see if it exists and use that directory if it does. Any objections? Yeah, although dcim may be in uppercase as well IIRC. Putting it in the script will help but the user will still have to select the subdir with the pictures and this is camera specific. My camera uses canon122 and so forth. All detailed in DCF, see (btw this doc raises some interestering requirements for the Perfect(tm) photo mgmt app; e.g. Reader 2 playback; ask to print tagged pictures etc.) Also note that there may be several subdirectories below DCIM if the media has been used with different devices. my point is... if the request for importing the pictures from media is done just by checking for a dcim or DCIM directory to even get the dialog, it makes some sense to open up to the dcim directory becuase im not all that sure that Al "the average user" Consumer is going to know that DCIM is the directory to browse down into. If the existance of the dcim directory is indeed what triggers the prompt open up to that directory. My camera, and i would imagine other cameras. let me name the subdirectories under DCIM to some extent and I can even choose which directory to use from the camera menu... the menu item right above "do my taxes" and just below "save the whales." Regardless though, seeing a set of subdirectories named NikonXXX or CanonXXX is going to be much more intutitively understood as photo directories than DCIM. -jef"my 3 cents, you own me 1 penny in change"spaleta Hehe, I guess I should have posted earlier but this is fixed and holding for after FC3 test2 is out. The bug is not critical enough to hold up test2 in QA. The wrapper script just checks to see if the dcim or DCIM directory exists and uses that as the base directory. Just as refrence this should have been opened up as a new bug, not appended to an already closed bug. Closed for real this time ;-)
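Purely as an illustration of the behaviour described in the last comment (this is not the actual gvm-gthumb wrapper shipped in the package), the DCIM check could look something like:

#!/bin/sh
# Illustrative sketch only.
dir="$1"
if [ -d "$dir/dcim" ]; then
    exec gthumb "$dir/dcim"
elif [ -d "$dir/DCIM" ]; then
    exec gthumb "$dir/DCIM"
else
    exec gthumb "$dir"
fi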
https://bugzilla.redhat.com/show_bug.cgi?id=130866
The initial settings should be configured when the Sana Commerce add-on is installed on top of Microsoft Dynamics AX. Sana Commerce provides a wizard in Microsoft Dynamics AX that guides you through the main settings necessary for the correct functioning of the Sana Commerce webshop. All settings presented in the Sana Commerce startup wizard are also available on different Microsoft Dynamics AX forms and can be configured individually.

To open the Sana Commerce startup wizard, click: Webshop > Setup > Sana Commerce startup wizard.

Sana Commerce Startup Wizard

This wizard guides you through the main configuration steps. Once all the necessary configuration is done, your webshop is ready for trading. You can still change any setting individually on the related forms in Microsoft Dynamics AX. If the product catalog is ready and items are set to Visible in webshop, update the webshop index by running the Product import task in Sana Admin. For more information, see 'Product import'.
https://help.sana-commerce.com/sana-commerce-90/installation/install-sana-add-on-in-ax-environment/sana-commerce-startup-wizard
We're creating an internet-controlled robot used to carry grocery, essential goods, medicines to whom are affected by COVID-19. We live in Italy and it's very problematic to deliver such commodities to people who are quarantined: volunteers have to carry them in places which are contaminated, risking health in first person everyday. An instrument like this one could save lives and also help the cases curve to flatten. By also creating an intuitive guide, everyone could create his robot and share it with his community. We wanted to create a project which would be more ready-to-make as we could. We know that there's a lot of people playing with electronic stuff who would be very pleased to help in such a difficult situation but they can’t because they are not familiar with controllers, web frameworks etc. With our guide enthusiasts could make robots which would carry goods to people. DIYers wouldn’t spend lots of money as they could use recycled stuff, dissembled from old wheelbarrows, tractors etc. We needed a robot capable of transporting package without exiting our home. To manage this we used long range control, heavy capability and a camera to see the surroundings. We opted for a 4 wheel and 2 motors rover drivable through internet with a Raspberry pi4. We want to clarify that the robot can be made in every dimension and configuration as the software parts are independent from the mechanical parts. You just need to choose the right motor driver and adapt it to the code.P3psi Here are some photos about our creation, P3psi, a name which came from its colors, blue and red. This is our P3psi at work, carrying goods to an old lady during quarantine:Step 1: Building the Rover - Step 1.1: Building the frame and mounting the motors Our motors are recycled from an old electric wheelchair. They work on 24 volt DC current and have a worm gear reducer which unleash 78 rpm output. The frame is made with 30x30 mm steel tubes which provide great resistance and lift capabilities. We tested the rover and managed to carry 100kg. Dimensions, 600x800 mm, are a good compromise between liter capability and portability. We opted for an external frame that guarantees a protection for wheels, chain and transmission. It also cover people surrounding itself. First thing we did was measuring and cutting 4 tubes, two of them 600 mm long and two 400 mm long. Cuts were made with an horizontal metal cutting band saw at a 45° degree angle for a better contact which guarantees a better weld result. Now the tubes are fixed in the right position with clamps and welded together with a TIG welder. Motors are mounted with nuts and bolts through aluminum plates in two other 30x30 tubes, not welded to the frame but fixed through other bolts with the possibility of disassemble to permit to work on the wheels. We also took care of mounting the motors in a way that ensure a good weight balance. We used 4 M20x100 bolts as wheel axes, compatible with the inner diameter of the choosen wheels. The process of welding these bolts on the frame is crucial. Axes must be right placed and perpendicular to the frame to ensure a good and straight motion. Lately a battery support was made and mounted in the midle of the robot, between the two electric motors. This support consists of two 30x30 steel angle bar bolted into the internal support frame, two o-rings TIG welded onto them to fix vent pipes from two 12V 14A/h scooter batteries and an elastic ring to keep the battery down. 
Electronics are placed inside an ABS electrical box, bolted into the frame in the front side of the rover. It is important to contain the heights of the robot, so it's easier to make space for the package rack, mounted on top of the power unit and the brain of the rover. Four 180mm long, 30x30 mm square Inox steel tubes are welded into the corners of the external frame in an upright position. In two of these hinges are welded and the other two ar drilled The rack is made with a 40x40 mm steel bar angle in the same way of the frame and with the same dimensions ( 600x800mm ). Inside this rectangular frame an alluminium plate is riveted. First make a hole through both alluminium and steel support, than rivet the piece with a rivet gun. This support is welded by one side into the two hinges and by the other side to two little steel bar angle with two holes that coincide with the two already made in the square steel tubes. Inside these holes a bolt is inserted and forms a secure system for the "tipper body". Lately a 3/4" steel tube is fixed with two pipe holder in the back left side of the rover. This will be the support for the camera bracket. We choose to paint it with a cool Pepsi red. - Step 1.2: Building the transmission This is the hardest part of the building process. As transmission system we used two recicled old bicycle chains and six 22T chainwheels, creating a four wheel traction with a 1 to 1 reduction ratio. We also tried a 1 to 1, 455 ratio using bigger chainwheels (32T) to the motors, but that setup increased the speed of the robot, compromising wheels power output. We also used two litle gear as a chain tensioner obtained form a gear shifter. Two chainwheels were welded as pinion in the original electric motors pulleys, requiring little work. Now the wheels are modified to fit the chainwheels. We made sure to buy wheels with a steel interior and luckily we found a set with rim drilled to fit 4 M8 bolts. We replaced the original ones and mounted 4 longer bolts, with a spacer made from a 10mm steel tube cutted at 30mm lenght. After mounting wheels, motors and chains we realized the necessity of adding a chain tensioner to our system. We made a 6mm wide cut on the high face of the internal frame in the side we had more room, to fit a litle sliding support. The support is made from 30mm steel bar, it's purpose is to slide left and right a little chainwheel, recovered form a gear shifter, that push the chain. - Step 1.3: Boards connections and cable management The easiest and most efficient way to control a DC motor is through a motor driver. We tested different types of motor drivers, starting with low quality and cheap ones, to more expensive and solid ones. We burned a driver and just after some tries we managed to solve the problem by investing in a good one. We suggest to you to buy a high quality component to get things done at the first attempt. A motor driver take the tension input and change the output parameters to the motor according to signal from that receive from a microcontroller. On input it need a signal for the direction (5V = forward, 0V = backward) and a PWM signal for the speed for both of the 2 channel. It also needs a 5V and a ground logic power from the same microcontroller. All of those signals came from a Raspberry where our program is running. To power our driver we used 2 motorcycle 12V batteries connected in parallel, for a total of 24V. We also modify our driver adding two big sinks that help cool down the transistor and prevent a puch through. 
The voltage from the battery doesn't go directly to the driver but before it passes through a connector that help to work on the electronics and also through a key switch. In addition, we added a led with a 1, 5k Ohm resistor in series to indicate when the tension is on. To power our Raspberry we used a car USB charger that can take 24v input from batteries and power a normal USB device. Than we used a classic USB type C cable. Attached to the USB charger we also connected the power supply for the camera bracket servos to the PWM extender board. The board is connected to the Raspberry through a serial connection. The wire needed are: VCC (3, 3V logic power supply), GND, SDA, SCL. In addition the board need the supply current for the servos, the 5 Volts wire is connected in the V+ pin and the ground is connected to an output servo pin to connect the servo GND to the Raspberry GND. From the board 4 wires are attached, V+ and ground to the bracket servo and a PWM signal to each of the 2 servo. We added a Led with a 100 Ohm resistor in series connected to the Raspberry, to indicate when the program in running. We also connected the Raspberry Pi4 camera flat cable which carries data to the RPI Camera. This flat cable and the cable with the 4 wire for the bracket pass through a cable conduit and than through the PVC pipe holding the bracket. At the beginning we putted the camera in the back and the cable conduit passed in the middle of the rover. We realized the camera cable was inside big electromagnetic fields produced by the motors and as soon as one starts spinning the Raspberry loses the camera signal. We tried to manage the problem by screening the cable conduit with aluminum foils connected to the battery ground, but it didn’t work for us, so we moved the camera to the front. We also started with a 3/4” steel tube as bracket holder, but in this way the tube created an antenna effect corrupting the signal from the camera, so we switched to a PVC pipe. At the bottom of the page you can find our Fritzing schematic file. - Step 1.4: Designing and 3D printing thePan and Tilt camera bracket We tried to push ourself out our boundaries and we decided to 3D print the camera bracket. This was our first time using a 3D printer, but we are pretty happy with the result. We wanted to use 2 servo motor, one for the Y axis and one for the X axis. They are easy to program, precise and reliable. Luckily we found two of this motors in an old and broken toy helicopter. We also wanted to add 2 bearing that we found in the house to make everything smoother. We did some paper design, we measured our motors, our bearings and our camera and than we launched Autodesk Fusion 360 and draw our bracket. (You can use your favorite CAD program) The final support is made from 5 different parts. The 5 pieces are exported in STL format from Fusion 360, opened with Ultimaker Cura and 3D printed with PLA with 50% infill. Than we assembled it with M8 bolt as axes and screw and mounted in the 3/4" tube fixed on the rover. At the bottom of the page you can find our Autodesk Fusion File.Step 2: Coding the rover To control the rover you have to use a Raspberry Pi or equivalent. The project was originally made with a Raspberry Pi4, but since we won the Balenafin Board, we used that interface because it had an integrated 4G module, so we could use it instead of a 4G external modem. The procedure to get things working it's completely the same. 
In brief we're going to create a Python server that drives motors and servos by listening data from an HTML page. In that page we have implemented two joysticks, to factually drive the rover, and the video, which is taken from another rendered Apache webserver. - Step 2.0: Setup Raspberry Pi/BalenaFin with latest Raspbian The first thing we are going to do is to install the latest version of Raspbian. I followed the official guide from the site. I recommend to download the version with desktop and recommended software from here. This is the official guide. You have to download the software, and mount it into an SD card (at least 16GB raccomanded) with the PI imager. You have to connect a screen, a mouse and a keyboard to the Pi and to boot it up. (USB-C phone charger is ok to get things done) When you install the OS after the flash, things are simpler if you choose visual install. After the process is done you should see something like this: - Step 2.1: Install Libraries This program uses lots of libraries, so, before coding, we have to install all of them. If you already have some installed, you can skip relative passages. First thing, we're going to update dependancies, by typing on a terminal: sudo apt-get update So, the first thing we're going to install is Flask. Flask is a microframework written in Python. It's used to create a simple web server to host our python program. As the official site says: “Micro” does not mean that your whole web application has to fit into a single Python file (although it certainly can), nor does it mean that Flask is lacking in functionality. The “micro” in microframework means Flask aims to keep the core simple but extensible. So we're create a simple but powerful web server which can elaborate efficient requests. Here's the code to install it: sudo apt-get install python3-flask Then we have to install the flask CORS library. This is a Flask extension for handling Cross Origin Resource Sharing (CORS), making cross-origin AJAX possible. CORS is a mechanism that allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served. So if you have not implemented CORS, you can't get or send data to the server. If you want to know more about CORS wiki it $ pip install -U flask-cors We are also installing a library to control the PWM expander (pca9685) to drive servo motors. I got this simple but strong library from here sudo pip install adafruit-pca9685 if you have installed a non complete Raspbian, you also have to install Numpy for maths, csv to read csv files, and also RPi.GPIO to control inputs and outputs of the Pi. - Step 2.2: InstallRPi-Cam-Web-Interface Last but not least we have to install RPi-Cam-Web-Interface. I tried lot of video stream libraries, starting from the cool project made by Miguel Gringberg who used Motion-JPG, or UV4L, but both had too high latency, so the robot was pretty undriveable in real time. So I came to this very complete and solid project. RPi-Cam-Web-Interface is hosted in an Apache web server, so there will be a second web server just to handle the video. It may seem complicated, but it's the most effective way I found out to manage the video latency issue. So, as the official documentation suggests, here is the installation guide: Then carry on with the installation: ./install.sh Make sure to choose a port which is free during the installation. In our case we chose port 8118. 
So when you success in installing the library, if you type in the web navigation bar, you should see something like this: I recommend from this point to know what are IP addresses and ports so how to use it to access to web servers. If you already know something about this topic it's time to get our local RPi IP address, to access to it over LAN. Type in your terminal: ifconfig From this moment, you can access to the RPi-Cam-Web-Interface web server from every device connected to your local network by typing in the web bar: :8118/html - Step 2.3: Getting started The best thing to do when you start a new project is to create a folder to organize data. For example, you can create a workspace from the terminal. In my case: cd Documents Create a new folder: mkdir Pepsi The above command will create a folder named "Pepsi". This will be the main one where we will save the main python file. /home/pi/Documents/Pepsi Now, on this folder, let's create 2 other sub-folders: static for JavaScript files and csv (you'll see below) and templates for HTML files: cd Pepsi And create the 2 new sub-folders: mkdir static and mkdir templates The final folder tree will look like: /Pepsi /static /templates - Step 2.4: Dive into Python Code We are using Python to create the main file in which we are creating a web server in port 5000 to render an HTML webpage and to manage website requests to move motors and servos. So the website will send data to the Flask python server, which will be elaborated into electrical signals for motors. Let's start from importing libraries: from flask import Flask, render_template, request import RPi.GPIO as GPIO import Adafruit_PCA9685 from time import sleep import json from flask_cors import CORS import csv import numpy Then we are going to define PINs. A stands for motor 1, B for motor 2 ledON is a led which will be always on to know if the script is running. So in the setup we'll set it to HIGH. Servo min and max are the corrisponding normal numbers for -90 and 90 degrees position. pinPWMA = 32 pinAIN = 10 pinPWMB = 33 pinBIN = 8 ledON = 7 servo_min = 150 servo_max = 600 servo_0 = (servo_min+servo_max)/2 servoRatio = (servo_max - servo_0)/100 Then we're creating the Flask server app and implementing CORS app = Flask(__name__) CORS(app) So it's time to start the input/output initialization. Here we're setting the pins we discussed before as PWM outputs to set motor speed, INs to motors directions and led. BOARD mode is the simplest pin numeration for RPi. Here is the scheme, valid for every raspberry pinout: GPIO.setwarnings(False) GPIO.setmode(GPIO.BOARD) GPIO.setup(pinPWMA,GPIO.OUT) GPIO.setup(pinPWMB,GPIO.OUT) GPIO.setup(pinAIN,GPIO.OUT, initial=GPIO.LOW) GPIO.setup(pinBIN,GPIO.OUT, initial=GPIO.LOW) GPIO.setup(ledON,GPIO.OUT, initial=GPIO.HIGH) Now it's time to setup PWM port. PWM stands for pulse-width-modulation and it's a particular kind of electrical signal which can be read by a motor driver and be used to modulate motor speed. So two PWM objects are created with a frequency of 20kHZ. Frequency varies from driver to driver, if it's too high there's a risk to get something burnt, if it's too low, motor will change speed in a non-linear way. When the frequency is set, we can change motor speed simply by adjusting PWM duty cycle, which varies from 0 to 100. We're starting from a 0 value. PWMA = GPIO.PWM(pinPWMA, 20000) PWMB = GPIO.PWM(pinPWMB, 20000) PWMA.start(0) PWMB.start(0) Then we're creating a Servo object. This Object is used to drive the PCA9685 board. 
This is a programmable circuit which extend the number of PWM ports of RPi. In fact the standard RPi only has 2 PWM ports and we need another 2 to drive servos. Servo object is made to drive all the 16 PWM ports of the device. We'll use 1 and 2. We'll set the frequency to 60Hz which is the standard for Servos. Then we're setting the position to the centre, which we defined as servo_0 before. servo = Adafruit_PCA9685.PCA9685() servo.set_pwm_freq(60) servo.set_pwm(1, 0, servo_0) servo.set_pwm(2, 0, servo_0) We are now importing two.csv arrays [201]x[201] from the two files in the static folder, one for each motor, which are functions that translate joysticks positions to motor outputs. We are discussing this part later in the guide. # SX motor readerSX = csv.reader(open("static/motorSX.csv", "rb"), delimiter=",") SX = list(readerSX) funcSX = numpy.array(SX).astype("float") # DX motor readerDX = csv.reader(open("static/motorDX.csv", "rb"), delimiter=",") DX = list(readerDX) funcDX = numpy.array(DX).astype("float") It's time to define our Flask Server. We create elements which are called routes. These are functions which are called when a host connects to the server by typing a certain path after its IP address in the browser navigation bar. For example in this case, we defined the route "/", which is the standard path called when host connect to. So here we will create the main function of the server, the one which is called when a new host is connecting. This is about rendering the index.html page. This is the home webpage of the server. We are discussing it later. The if clause is to make sure the host is trying to retrieve data, in this case the HTML page. If you want to know more about HTTP methods and how websites work there's a lot of documentation online, we suggest to start from here. @app.route("/", methods=['GET','POST']) def index() if request.method == "GET": return render_template("index.html") After main route was created with success, we're creating another important route, the route post. This is, probably the most important part of the program. It's the function which is called when a POST HTTP request is called from a host passing through /post url. In fact into the HTML webpage we will create 2 joysticks which will send their position data through this method with a refresh rate of 50ms. So every 50ms information will be sent from the host playing with joysticks into the HTML rendered page to this route. @app.route("/post", methods=['GET','POST']) def poster(): if request.method == "POST": So, as we said, we have to catch data, and to to do this, we are using the flask.request class. We are retrieving JSON data, which is the typical way data is sent and received in web environments. In this case we will extract from raw data variables such as motorX or servoY. content = request.json motorX = json.dumps(content['motorX']) motorY = json.dumps(content['motorY']) servo1 = json.dumps(content['servoX']) servo2 = json.dumps(content['servoY']) Then we're translating new data into INT variables to use them as matrix indexes intMotorX = int(motorX) intMotorY = int(motorY) intServo1 = int(servo1) intServo2 = int(servo2) Now we're using the motor function results matrix (See Using Matlab to translate joystick position into motor Input) to create the two variables which will drive motors. These variables vary from -100 to 100, with the + or - sign to indicate the verse, and the module to indicate the speed. 
For example a 100 stands for max speed clockwise, a -50 half speed counterclockwise. motor1Dir = int(funcSX[intMotorX+100][intMotorY+100]) motor2Dir = int(funcDX[intMotorX+100][intMotorY+100]) So we're translating this numbers into PWM signal to motor: # Motor1 if motor1Dir > 0: GPIO.output(pinAIN, GPIO.HIGH) PWMA.ChangeDutyCycle(abs(motor1Dir)) else: GPIO.output(pinAIN, GPIO.LOW) PWMA.ChangeDutyCycle(abs(motor1Dir)) # Motor2 if motor2Dir > 0: GPIO.output(pinBIN, GPIO.HIGH) PWMB.ChangeDutyCycle(abs(motor2Dir)) else: GPIO.output(pinBIN, GPIO.LOW) PWMB.ChangeDutyCycle(abs(motor2Dir)) To close the Post route we're matching the servos positions with joysticks ones by using the set_pwm class to write joysticks' positions multiplied by the constant servoRatio. Minus or plus is to invert axes. servo.set_pwm(1, 0, servo_0 - int(intServo1*servoRatio)) servo.set_pwm(2, 0, servo_0 - int(intServo2*servoRatio)) sleep(0.05) Last we're creating the Flask server on 0.0.0.0 address with default port 5000 if __name__ == '__main__': app.run(debug=True, host='0.0.0.0') - Step 2.4: Creating the index.html HTML main webpage In this section we're creating the webpage rendered by Flask server when a host connect to device's IP address. To better get into the program, I recommend to download the index.html file from the documentation of the project and to open it beside this tutorial. In the first part contained into the tag <style> is to organize the webpage, formed by a row with two lateral columns where we'll put two joysticks. Than as we've done with Python code we're importing some libraries, in this case 2, a joystick library and AJAX. <script src="static/joy.js"></script> <script src=""></script> The first one is a very solid and well written library made by Roberto d'Amico aka Bobbotek, which you can find here. The second is AJAX, a set of web development techniques using many web technologies on client side to create asynchronous web requests. We're using this to send POST requests to Flask server. Then we're creating into the body an iframe object that will render the RPi-Cam-Web-Interface. We are rendering the min.php file which is formed by the only video. If you'll click on the video it will occupy all the frame. <iframe id="myFrame" width=100% height=600</iframe> Now we are creating divs to insert joystick object. Then we will start coding in Javascript. function POSTer(toLolcalHosta, toLolcalHostb,toLolcalHostc, toLolcalHostd) We start from creating the POSTer function, which uses AJAX library to post data from this webpage to Flask server we created before. AJAX known as Asynchronous JavaScriptand XML is a library which can create POST request and send JSON data. In this case we are sending joysticks' X and Y coordinates to the server. formDATI = { "motorX" : toLolcalHosta, "motorY" : toLolcalHostb, "servoX" : toLolcalHostc, "servoY" : toLolcalHostd }; var formData = JSON.stringify(formDATI); This is to prepare data. JSON data has a particular structure made by a header related to a value. For example "motorX" : 3. $.ajax({ type: "POST", url: window.location.href + "post", data: formData, // Data is JSON dataset success: function(){}, dataType: "json", contentType : "application/json", secure: true, headers: { // Header is made for CORS policy 'Access-Control-Allow-Origin' : '*', } Then we are creating the AJAX request. We are sending a POST request to site/post url, in json format with a header to fit in the CORS policy. 
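Since everything the page does ultimately reduces to POSTing a small JSON document to /post every 50 ms, it can be handy to exercise that route directly while debugging, before the joysticks are wired up. A quick sketch using the requests library (the IP address and the sample values are placeholders for your own setup):

import requests

# One joystick sample: values follow the -100..100 convention used above.
payload = {"motorX": 0, "motorY": 50, "servoX": 0, "servoY": -20}
resp = requests.post("http://192.168.1.10:5000/post", json=payload)
print(resp.status_code)

If the motors twitch and the servos move when you run this from another machine on the LAN, the server side is working and any remaining problem is in the browser page.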
document.getElementById("myFrame").src = window.location.href.replace("5000", "8118") +"html/min.php" Now we are setting the source for the iframe object. It's the page url with changed port. var Joy1 = new JoyStick('joy1Div'); var Joy2 = new JoyStick('joy2Div'); We are creating the two joystick objects in the places we created before. They will create values from -100 to 100 depending by position. setInterval(function(){ joy1Xvar = Joy1.GetX(); joy1Yvar = Joy1.GetX(); joy2Xvar = Joy2.GetX(); joy2Yvar = Joy2.GetY(); POSTer(parseInt(joy2Xvar), parseInt(joy2Yvar), parseInt(joy1Xvar), parseInt(joy1Yvar)) }, 50); Finally we are sending data collected by joysticks in variables every 50ms with the setInterval class. - Step 2.5: Using Matlab to translate joystick position into motor Input This part is about maths. You can understand it just with a little knowledge about 2 variables function and interpolation or you can just skip it by knowing that those.csv matrixes traslate joystick positions into motors input. In the previous section we introduced DX/SX matrixes loaded from two .csv files. Probably you were asking yourself what that was about. We created with Matlab a function which translates values sent from joystick into input to motors. We interpolated a continuous two variables curve for each motor which generate a value having a limited [-100, 100] domain and codomain. We wanted the output to follow this scheme, red for left motor, blue for the right one. We use the module of the output as the PWM duty cycle of the motor, and the sign for the verse. From this values we want to create a continuous variation to get motors change speed in an analog way. So we are using MatLab to create a linear fit from these data. This is the final result: We are using MATLAB_R2020a in this guide. We are using the Curve Fitting app which provides a flexible interface where you can interactively fit curves and surfaces to data and view plots. You can open it by typing in Command Window: cftool I'll explain the procedure by creating motorSX (left) function. Before starting to fit the two curves you'll have to create data matrix to fit the curve. So, in this case: X = [ -100 0 100 -100 0 100 -100 0 100]; Y = [ 100 100 100 0 0 0 -100 -100 -100]; Z = [ 50 100 100 0 0 100 -50 -100 -100]; Now we are opening the tool to fit the curve. You'll have to select data we created before and to fit the curve with a linear method. If the curve doesn't fit as well as you expected, you'll have to add in the source data some other points. It's time to export the fitted curve as a Matlab function to use it to create the matrix. You have just to export the interpolation by clicking Fit - Save to Workspace. I'm exporting the fit as fitMotorSX. So it's time to create the matrix. A=ones(1,201); We are creating a 201 columns vector to use it in a cycle to load data into a MotorSX matrix with this cycle: MotorSX = [fitMotorSX(A*(-100) ,[-100:1:100]) ]; for k= -99:1:100 MotorSX = [MotorSX; fitMotorSX(A*(k) ,[-100:1:100]) ]; End Now we can see if the result was as expected by typing: surf ([-100:1:100], [-100:1:100], MotorSX) Last thing to do is to export it as a .csv file: xlswrite('motorSX.csv', MotorSX) You can now redo the procedure for motorDX. 
- Step 2.5: Exiting the LAN Now you can drive your robot connected to your wifi local network just by typing in your web browser its IP address, in my case: If you are planning to use it often in that particular network, I suggest to set a static IP distribution from your router/firewall to use always the same IP. In other cases you can always search for RPI IP address from its terminal or with lots of apps which scan local devices like Fing. If you want to go online and drive your unique Pepsi from everywhere in the world, the first thing you got to do is to get it connected to internet from everywhere. If you are using Balenafin you just have to go get a 4G SIM, because an internet modem is integrated in the board. If you are using a normal Raspberry PI, you have to buy an internet modem, which could be an USB one, a battery Router or a 220V classical modem (You'll need a voltage transformer!). A cheaper way is to use an old smartphone connected with hotspot (or also your smartphone). Now you can drive it just by connecting in local to the router's wifi as before. You are driving it everywhere, but you got to be in router's wifi range. Now comes the problem. Your IP when is Local it's simple to catch, but if you want to access to it from outside it's problematic. IP address are not always public and easy to find from everywhere in the world. This part is very tough, and it's not solvable with just some passages in a univocal way. This is about static-dynamic IP addresses, private IPs, VPN and lot of other stuff which change by the country you live in and internet providers. The easiest way to cope with it is to get a Static IP sim from your internet provider. This could be very expensive if you live in a country like Italy, because there are basically no phone companies which sell them. However, this is the easiest and the most efficient solution to drive the robot without latency from everywhere without problems. If you have a public static IP, you'll just have to connect to it as a local one to drive the robot. There are some other DIY ways to get connected to your robot. There are lot of services online like DYN.com which can make your public IP static. There are also lots of solutions very easy to use and free which tunnel your private ip like remote.it. I tried to use this one, it's easy to use, but latency starts to be high and I don't raccomend it unless if you have a very strong 4G connection. (to manage to see video you got to do some work with APIs to catch Apache server IP address) You can also create a VPN server to get your IP address. If your ip is under NAT it's the only way around to have enough mobility to get video working. It's very complicated, there are lots of guides and you need to pay for a server with a public static IP. So every way is good to get your RPI visible from everywhere.Conclusions and go further If you followed this guide until here and you got things done congratulations! You'll have your unique robot done! Thanks for giving us faith! We hope you liked our project and that you'll use the robot to help people in difficulty by carrying goods. This project was made to help to flatten the COVID-19 spreading virus curve, but can also be integrated with lots of sensors, mechanical parts and other stuff to get almost everything done. To help with Coronavirus it would be very helpful to add a thermal camera to know if people around are infected. 
They are a bit expensive but also very easy to integrate into the HTML page (I suggest using an MJPEG streaming method). You could also add a speaker and a microphone to the web page to talk with the people around the robot, or just to send messages (again using the POST method to send commands, with the audio served by Python). Thanks for reading!
https://www.hackster.io/343328/p3psi-a-web-driven-covid-19-carrier-robot-eab65a
Discussion area for what enhancements to put into future releases. If you are not a JMeter committer, feel free to add comments (with initials please), but please don't reorganise the release plan. Possible enhancements This section is for listing possible enhancements. Describe them briefly. If necessary add bug ids (or links) to clarify. - JDBC connection pooling - possible enhancements - use Apache DBCP instead of Excalibur? - JDBC sampler - use different cache sizes for connection and statement? - How do these really work? Should the connection cache be a ThreadLocal item? - Need to add debugging to show when cache items are created and destroyed - difficult for pooled resources - XPath Extractor - add user-declared namespaces - Rework HTTP GUI to make it easier to use and extend - Add HTTP methods dropdown to "SOAP/XML-RPC sampler", bugzilla 42637 - Remove unnecessary code duplication in "SOAP/XML-RPC sampler", rather use code inherited from HTTPSampler2. This is almost done, not committed to svn (Alf Hogemark) - Add unit tests for SOAP/XML-RPC sampler - Change name exposed to use to "SOAP/XML-RPC/REST sampler" Make it possible to send file content for HTTP GET, PUT and DELETE using "HTTP Sampler" and "HTTP Sampler 2", and therefore also "SOAP/XML-RPC sampler" <= some of this ha been done - Make Axis2 / XFire / CFX Sampler, and possible retire existing "Webservice(SOAP) sampler". - A bit unsure about this, not sure what benefit is over "SOAP/XML-RPC Sampler". - Will we really be testing Axis / XFire client side peformance here ? - Or could we make a nice GUI, so it is easier to test web services ? - Make command line option available, to easily performance test one url - This is meant as a replacement for the "ab" command, and alternative to "faban" - I think it would broaden the user of JMeter a lot - I will look into making a "ab result listener", which presents the same statistics as "ab" initially (Alf Hogemark) - One possible implementation is - Add a simple jmeter test plan, with threadgroup, http sampler, and listener, using property values to control behavior to the bin/system_testfiles directory - Add shell scripts, for example jmeter_test_http, which just calls the standard jmeter script, with additional "-J" arguments, for example "-Jiterations=10" - Need a way of getting the listener output to the console, need some investigation - on Windows one can use CON as the file name; /dev/tty for unix - but output is not formatted nicely - Summariser can already output to console - that may be enough - Restructure HTTP Sampler / Settings GUI. bugzilla 41917. A big job, but is becoming necessary, if we are adding more options to the HTTP Sampler - Listeners: - make distinction between open file for read and write. at present the browse button does both, which is confusing (given the heading) <= this has been changed - a file that has just been read will then be used for writing if a test is run - is that sensible, or should the file name be cleared after it has been read? - Graph Transactions Per Second (TPS) like the graphing of Response Time. - Add a "Preview" button to "CSV Data Set Config", which would use the file name and parameter specification, and bring up a dialog or something to show an example of the variables available and their values. 
- Useful for easily checking that you have specified the columns and variables correctly - Could this limit the number of emails on the jmeter-user list asking how "CSV Data Set" works - Improve documentation - Currently, there are quite a lot of question duplication on jmeter-user mailing list, would be nice to reduce that number - Does people really read the documentation ? Judging by posts on jmeter-user mailing list, I sometime have my doubts. - Would be excellent to have documented steps for "view frontpage, log in, view a page, log out" scenario for the following frameworks - Struts - Spring - Spring Webflow - .net applications - others (please suggest) - I think that could reduce number of mails to jmeter-user mailing list - the above could start out as Wiki pages - it would be useful to provide sample JMX files as well - Additional sample JMX files for the various scenarios that are FAQed - Perhaps a giant example showing lots of different scripts? - Support for uploading multiple files in HTTPSampler / HTTPSampler2. I think it the one main missing functionality in HTTPSampler that browser provide, and JMeter not. - this is bugzilla 19128 - I think this is dependent on bugzilla 41917, mentioned above - Improve support for testing web service. This overlaps with some of the points above. - Make a "HTTP Caching Manager", this is bugzilla 28502. - It is useful for performance testing - Would also be useful for functional testing, to see if pages are "cacheable". - Make a "HTTP Proxy Manager", where the user can specify what proxy settings to use - Then use would not have to edit jmeter.properties file or command line arguments to use a proxy - We could get rid of the "HTTP Proxy" settings in the "Webservice(SOAP) sampler" - Could perhaps just extend HTTP Defaults for this? - N.B. HTTP (Java) relies on system properties to define proxy items, so it is impossible to have more than one setting. Make new, nice?, icons for the GUI elements which currently do not have a unique icon. For example the post and preprocessors. <= The Pre and Post-Processors now have different icons (corner fold is in a different place) - Would make it a lot easier to see what is what in the tree - Extend CSV Data Set to read via JDBC - Assertion Results (or similar) could be used to attach errors to samples - e.g. if a Post-Processor failed, the error could be shown there instead of in the log. Or just add string array for storing such errors? - Module Controller GUI - should it omit leading path name elements as at present? - it would be nice if it could check for ambiguous controller names: - report error if ambig. name selected? OR - flag ambiguous names in the drop-down list? - Java Sampler to invoke arbitrary Java method. Would need: - Remote Mode selection: might be useful to be able to specify the remote mode for each listener. This would allow the use of immediate mode for a few listeners and statistical mode for most of traffic. - If Controller - allow functions and variables as alternative to Javascript Rework ? - Would we gain functionality by moving to log4j ? Alternatively, move to Commons Logging (as used by HttpClient currently) Or perhaps SL4J - - What are the risks with continuing to use Avalon, if Avalon is not maintained anymore ? Remark: HttpComponents project is considering migrating HttpClient 4.0 branch off Commons Logging ] - Reorganise documentation - component_reference is getting much too big. This will require changes to the help system. 
- sort functions and component ref into more logical order (currently chronological) - perhaps use separate XML files for each item, included by main pages ? - Could we then add a "Help" button to each GUI element, which would bring up the correct help in the browser ? - or extend the existing help menu to load just the individual page. - If there are combined and individual help pages, there would probably need to be two copies. Maybe simplest just to split component reference by element type; keep current page as an index into the subsections - Help could perhaps be extended to allow loading of linked pages (but one would probably not want to allow external links to be loaded). There is some code in View Tree that might help here. Re-write ClassFinder: - needs general tidyup / javadoc - cache results - same classes may be requested multiple times - prepare is probably called too often; can it be done once per test? - need some more GUI types - eg. table - would be nice to be able to enable/disable fields depending on what else is selected - e.g. JDBC parameters only needed for prepared statements - ensure drop-down list size big enough for all entries (within limits!) - move from Bugzilla to JIRA? More flexible, (but attachments a bit more awkward at present?) - JMS GUIs should be loadable without needing JMS jars (needs an extra level of indirection, as is done by JMS Publisher) or as below. - How to handle Gui elements that depend on optional jars: - should these be displayable, even though the jars are missing? Convenient for creating and viewing test pans, but not so useful at run-time - should the test plan be allowed to run? - or should they be omitted as at present? - this is confusing at build time. - or perhaps generate a dummy entry in the list, with a message to say the jar is missing? his would be tricky, as the class is needed to retrieve the name and the menu category. Perhaps the way to do it is to handle it in the GUI by catching the errors, and changing the name or screen comment? May be tedious to do. - Sort test tree according to JMeter processing order? This should probably be a separate action, as it would be confusing for the tree to change as it was editted! (the menu drop-downs are now in the correct order) - GUI code refactor - there are various table implementations, could they be combined? - perhaps the table models could also be combined? - Add SVN revision number to version? (already added to Manifests) Consider migration to Maven2 as a build tool for JMeter. This should help simply dependency management and facilitate the use of JMeter for automatic load testing / integration into tools like Continuum - Maven 1 was tried a couple of years ago, and seemed incompatible with the JMeter directory layout and multiple jars; hopefully Maven 2 is more flexible. Consider renaming ThreadGroup as JMeterThreadGroup - or UserGroup ? - to avoid confusion with java.lang.ThreadGroup - Documentation refers to threads and/or users in different places; replace by users - or users(threads) - everywhere? Test elements need access to ThreadGroup and TestPlan for co-ordinating counts etc per-group and per-plan. Not much distinction is currently made between per-group and per-plan.
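Footnote to the command-line idea listed earlier (the "replacement for the ab command"): the existing non-GUI mode already gets close when the bundled test plan reads its settings from properties via the __P() function. A hypothetical invocation, with made-up plan and property names:

jmeter -n -t http_test.jmx -Jhost=example.org -Jpath=/ -Jthreads=10 -Jloops=100 -l results.jtl

The remaining work described above is mostly packaging: shipping such a plan in bin/, a small wrapper script, and an ab-style summary listener.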
http://wiki.apache.org/jmeter/FutureReleases?highlight=ListenToTest
Created on 2017-06-26 22:38 by terry.reedy, last changed 2020-01-27 23:38 by terry.reedy. This issue is now closed. A complete test of the GUI will simulate user interaction with every widget and then query Changes() to see that the proper changes orders have been recorded. This issue depends on #30779, factor out Changes class. Tests should be tested on MacOS before being pushed. A possible issue is including constants that are different on different systems. See #28572 I'm trying to add the test on configdialog, currently working on KeysTest. Proceed as you want. Keep in mind that the interface to the changes structure will be changed by #30779. The testing logic should not be, however. PR to fix type in moduleTearDown. New changeset 25a4206c243e3b1fa6f5b1c72a11b409b007694d by terryjreedy in branch 'master': bpo-30780: Fix error in idlelib.test_idle.test_configdialog (#2606) I posted PR 2612 for #30779 and expect to merge it tomorrow after sleep and final review. It includes passing revisions of existing configdialog tests. A PR dependent on 2612 could be posted before I do the merge. Follow the model of using xyzpage names in asserts and tests would survive the refactoring of ConfigChanges discussed as a possibility on #30779. I want to move forward on this, but not duplicate your work, so please post work done and immediate plans. New changeset df0f99329843c10701ffaefbd3948ac698c12220 by terryjreedy in branch '3.6': [3.6] bpo-30780: Fix error in idlelib.test_idle.test_configdialog (GH-2606) (#2613) This issue is almost done. There are just a few things missed by the closed dependencies. #31001 and #31002 have notes on what not tested in HighPage and KeysPage. I've started working on the missing tests for HighPage and KeysPage and also test for the functions and buttons in ConfigDialog. That led to PR3238 because the 'help' button wasn't working. Anyway, I found the following on: "An easier solution is to prevent Tkinter from propagating the event to other handlers; just return the string “break” from your event handler: def ignore(event): return "break" text.bind("<Return>", ignore) or text.bind("<Return>", lambda e: "break")" So, it seems that the 'Double-Button-1' and 'B1-Motion' bindings are to prevent those events from propagating outside of the widget. Although, I didn't notice a difference when I commented them out, so maybe they don't have a higher level binding. I just pushed the extension conversion patch. The tests we did already were greatly helpful, and some tests not done or inadequate hindered. I am now looking to polish configdialog before 3.6.3. If you have anything worth a new issue and PR, I will be ready to review. Sorry I don't have any tests yet. I've added a few, but it's taking me forever to figure out the bindings for testing the `Double-Button-1` and `B1-Motion`. I actually have a test for `Double-Button-1` now, but still working on `B1-Motion`. I've submitted a PR for the tests that I added to complete coverage for Keys and Highlights and to add some GUI tests for the buttons. It's not everything that I wanted to do, but it ended up being more substantial than I realized. Here's the issue I was having with testing `Double-Button-1` and `B1-Motion`. After the first ButtonPress and ButtonRelease, the second (or later) event-generate always looked like it was a double click (it probably was really a triple click or higher, depending how many times I tried it). 
I tried everything I could think of or find online, but I couldn't find how to reset the counts to start over and make it think it was a just `ButtonPress` and not a `Double-Button`. This was OK for the testing on `Double-Button`, but it was causing the `B1-Motion` not to work because it was not seeing each 'movement' as a press/move/release, but rather a double-button/move/release. It was OK because the Double also invokes the Press callback, however, since the Double event was bound to `break`, it didn't allow the text to be selected. (The difference between `Double` and `B1-Motion` is that Double selects a word and B1-Motion selects where the mouse moves). I played with it in IDLE and I could reset the Press by doing an event-generate on a different (x, y), but that didn't work in the test. I tried a delay and it still didn't fix it. Anyway, I've learned a lot about mouse binding while working on this, but without a solution, I cheated and just did ButtonPress on a different (x, y) for the second test. Sorry about the rambling, but I'd love to know the trick to resetting the number of button presses in a test. Review of overview: PR-2706 and PR-2613 fixed one line in test_configdialog for this issue. PR-2046, #30617 and PR-3238, #30781 are merged cross-references. PR-3222 was closed in favor of PR-3220, #31287, merged. PR-3592 completes highlights and keys coverage and needs my review after getting lost in the flurry of PRs. Dependency #31414 has merged PRs. I should review, close, or indicate what is left to do. Other dependencies are closed and I believe this should be after handling above 2 items. New changeset dd023ad1619b6f1ab313986e8953eea32c18f50c by Terry Jan Reedy (Cheryl Sabella) in branch 'master': bpo-30780: Add IDLE configdialog tests (#3592) New changeset 5aefee6f989821c5dc36d10a9cfd083d7aa737a5 by Miss Islington (bot) in branch '3.7': bpo-30780: Add IDLE configdialog tests (GH-3592) New changeset 7b57b15bd83879ee35f8758a84a7857a9968c145 by Miss Islington (bot) in branch '3.8': bpo-30780: Add IDLE configdialog tests (GH-3592)
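For readers who want to experiment with the multiple-click counter problem described above, the pattern under discussion is roughly the following tkinter sketch (the widget, coordinates, and bindings are placeholders, not the actual configdialog test code):

import tkinter as tk

root = tk.Tk()
text = tk.Text(root)
text.pack()
text.bind('<Double-Button-1>', lambda e: 'break')   # swallow double clicks

root.update()
# First simulated click: press and release at the same spot.
text.event_generate('<Button-1>', x=5, y=5)
text.event_generate('<ButtonRelease-1>', x=5, y=5)
# A second press at the same spot soon afterwards is treated by Tk as part of a
# multiple-click sequence, which is what interferes with B1-Motion style tests.
text.event_generate('<Button-1>', x=5, y=5)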
https://bugs.python.org/issue30780
The branch "master" has been updated in SBCL:
via 4b25bb8e20bf3c1419a11b7d4cfefa23e4f3279b (commit)
from e0aff99a73d836da0dad4602e5559595fbe5ba5c (commit)

- Log -----------------------------------------------------------------
commit 4b25bb8e20bf3c1419a11b7d4cfefa23e4f3279b
Author: Stas Boukarev <stassats@...>
Date: Mon May 14 05:12:45 2012 +0400

    Optimize copy-tree.

    copy-tree used to always call itself, even on linear lists, which
    caused stack exhaustion on long lists. Make it copy linear lists
    linearly, and recur only when necessary. This also makes it somewhat
    faster. Fixes lp#98926.
---
 NEWS               |  2 ++
 src/code/list.lisp | 15 ++++++++++++++-
 2 files changed, 16 insertions(+), 1 deletions(-)

diff --git a/NEWS b/NEWS
index 51b81fa..a4f9d84 100644
--- a/NEWS
+++ b/NEWS
@@ -64,6 +64,8 @@ changes relative to sbcl-1.0.56:
 *)
 * documentation:
 ** improved docstrings: REPLACE (lp#965592)

diff --git a/src/code/list.lisp b/src/code/list.lisp
index d74cc1c..da6ca8e 100644
--- a/src/code/list.lisp
+++ b/src/code/list.lisp
@@ -445,8 +445,21 @@
   #!+sb-doc
   "Recursively copy trees of conses."
   (if (consp object)
-      (cons (copy-tree (car object)) (copy-tree (cdr object)))
+      (let ((result (list (if (consp (car object))
+                              (copy-tree (car object))
+                              (car object)))))
+        (loop for last-cons = result then new-cons
+              for cdr = (cdr object) then (cdr cdr)
+              for car = (if (consp cdr)
+                            (car cdr)
+                            (return (setf (cdr last-cons) cdr)))
+              for new-cons = (list (if (consp car)
+                                       (copy-tree car)
+                                       car))
+              do (setf (cdr last-cons) new-cons))
+        result)
       object))
+
 ;;;; more commonly-used list functions
-----------------------------------------------------------------------
hooks/post-receive -- SBCL
http://sourceforge.net/p/sbcl/mailman/sbcl-commits/?viewmonth=201205&viewday=14
I receive a warning every time I use Flask-Security:

FlaskWTFDeprecationWarning: "flask_wtf.Form" has been renamed to "FlaskForm" and will be removed in 1.0.

from flask_security import current_user, login_required, RoleMixin, Security, \
    SQLAlchemyUserDatastore, UserMixin, utils

It looks like 1.7.5 is the latest release of Flask-Security, and the latest version of Flask-WTF is 0.13 (make sure you have that installed by checking pip freeze). Since you don't use Flask-WTF directly, the issue isn't in your code. The warning comes from Flask-Security itself, which has Flask-WTF as a dependency. The way Flask-Security imports the Form class from Flask-WTF is deprecated, so you see the warning when this line runs:

from flask_wtf import Form as BaseForm

You can either open an issue on Flask-Security (feel free to link to this question) or submit a pull request yourself updating this line to the non-deprecated import:

from flask_wtf import FlaskForm as BaseForm

Make sure to run the tests before and after the change before submitting. For a little more context, you can use git blame to see the commit that last changed the deprecated import line in Flask-Security (6f68f1d) on August 15, 2013. Doing the same on Flask-WTF, you can see that the deprecation was introduced in 42cc475 on June 30, 2016.
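Until one of those fixes lands upstream, the warning can also be silenced locally with the standard warnings machinery. A sketch (the message pattern is matched textually against the warning shown above, so adjust it if the wording changes in a later Flask-WTF release):

import warnings

# Hide the deprecation warning triggered by Flask-Security's old import.
warnings.filterwarnings(
    "ignore",
    message=r'"flask_wtf\.Form" has been renamed to "FlaskForm"',
)

from flask_security import Security  # import after the filter is installed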
https://codedump.io/share/lrhPhtbIiONb/1/flaskwtfdeprecationwarning-with-flasksecurity
Hello! I think I've run into a problem with my project and I need a bit of guidance. I'm using an Arduino UNO to read data from some sensors and I'm sending that data to an ESP8266 through UART. So something like:

#include <Arduino.h>
#include <SoftwareSerial.h>

SoftwareSerial espSerial(5, 6);  // pin 5 = RX, pin 6 = TX

void setup() {
  espSerial.begin(9600);
}

void loop() {
  espSerial.println(data);  // 'data' holds the latest sensor reading
}

And on the ESP side I'm using the RX/TX pins and just reading from the serial:

Serial.readStringUntil('\n');

This is just an example, not exactly my code. Anyway, my problem is that I need to receive data on my ESP AND send some other data back to the Arduino at the same time. Can I achieve this with UART? Or do I need to change my approach to I2C? And if so, can I get some guidance, please?
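For what it's worth, UART itself is full duplex (the RX and TX lines are independent), so both boards can send and receive in the same loop as long as neither side blocks while waiting. One way to structure the UNO side, as a sketch only (readSensors() and handleCommand() are placeholder names for your own code, and the pins match the post above):

#include <SoftwareSerial.h>

SoftwareSerial espSerial(5, 6);   // RX = 5, TX = 6

void setup() {
  espSerial.begin(9600);
}

void loop() {
  int reading = readSensors();        // placeholder: read your sensors
  espSerial.println(reading);         // send the latest value to the ESP8266

  while (espSerial.available()) {     // read anything the ESP sent back
    String line = espSerial.readStringUntil('\n');
    handleCommand(line);              // placeholder: act on the command
  }
}

Note that SoftwareSerial cannot reliably transmit and receive at the exact same instant, so keep the messages short and slow, or use the hardware serial port for the ESP link if you can spare it.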
https://forum.arduino.cc/t/uart-communication-esp8266-arduinouno/672037
Hey there, I spent a little time getting this code integrated into my project and thought I might share the changes I needed to make to get it working, for all those copy-and-pasters (like myself).

I did have one question for anyone who might be able to answer it. I haven't spent much time modifying this code, but I noticed that it only creates as many td tags as you have elements in your sequence. For instance, if I have 6 elements and have the maximum columns set to 5, how do I get the second row to print the extra 4 td tags that are missing? It doesn't seem to be affecting anything negatively right now, but I'm not sure it's valid markup.

Anyway, to get this code running:

1. Create a templatetags directory in your application folder (not the project folder).
2. Add an __init__.py file in your templatetags directory.
3. Create a new .py file in the templatetags directory with the above code in it (comment out the HTML in the above code, after ##Sample usage:).
4. Add the following to the top of the file:
   from django import template
   register = template.Library()
5. Change Node to template.Node and all instances of NodeList() to template.NodeList().
6. In your Django template page use {% load imageTable %}, where 'imageTable' is the name of the file you created in step one.

Now everything should be solid.

Cannot make it work for me.

As a quick, simple way to make a table I added a custom template filter for modulo of an integer (add to .../mysite/myapp/templatetags/myapp_tags.py), and then it's very easy in the template to make a table with n elements per row with a simple for loop, by checking whether (forloop.counter modulo n) is zero and, if so, starting a new table row. See the updated table tag for a couple more features.
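The filter code referred to in that last comment is not reproduced on the page, but the idea is small enough to sketch. Note that Django also ships a built-in divisibleby filter, so a custom filter may not even be needed; the file path and names below are assumptions following the comment:

# myapp/templatetags/myapp_tags.py
from django import template

register = template.Library()

@register.filter
def modulo(value, arg):
    """Return value % arg, used to start a new table row every arg items."""
    return int(value) % int(arg)

Template usage with the built-in filter, starting a new row every 4 items:

{% load myapp_tags %}
<table><tr>
{% for item in items %}
  <td>{{ item }}</td>
  {% if forloop.counter|divisibleby:4 %}</tr><tr>{% endif %}
{% endfor %}
</tr></table>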
http://djangosnippets.org/snippets/296/
Recursion is a topic in mathematics and computer science. In computer programming languages, the term recursion refers to a function that calls itself. Another way of putting it would be a function definition that includes the function itself in its definition. One of the first warnings I received when my computer science professor talked about recursion was that you can accidentally create an infinite loop that will make your application hang. This can happen because when you use recursion, your function may end up invoking itself infinitely. So, as with any other potential infinite loop, you need to make sure you have a way to break out of the loop. The idea in most recursive functions is to break the procedure being done into smaller pieces that we can still process with the same function.

Recursion is usually illustrated by writing a factorial function. A factorial is normally written like this: 5!. Note the exclamation mark after the number; that notation denotes that the number is to be treated as a factorial. What this means is that 5! = 5*4*3*2*1, or 120. Let's take a look at a simple example.

# factorial.py

def factorial(number):
    if number == 0:
        return 1
    else:
        return number * factorial(number-1)

if __name__ == '__main__':
    print(factorial(3))
    print(factorial(5))

In this code, we check the number that we pass in to see if it is equal to zero. If it is, we return the number one. Otherwise, we take the number and multiply it with the result of calling the same function with the number minus one. We can modify this code a bit to get the number of times we have recursed:

def factorial(number, recursed=0):
    if number == 0:
        return 1
    else:
        print('Recursed {} time(s)'.format(recursed))
        recursed += 1
        return number * factorial(number-1, recursed)

if __name__ == '__main__':
    print(factorial(3))

Each time we call the factorial function and the number is greater than zero, we print out the number of times we have recursed. The last string you should see is "Recursed 2 time(s)": the counter starts at zero, so for factorial(3) the message is printed on the calls where number is 3, 2 and 1.

Python's Recursion Limit

At the beginning of this article, I mentioned that you can create an infinite recursive loop. Well, you can in some languages, but Python actually has a recursion limit. You can check it yourself by doing the following:

>>> import sys
>>> sys.getrecursionlimit()
1000

If you feel that limit is too low for your program, you can also set the recursion limit via the sys module's setrecursionlimit() function. Let's try to create a recursive function that will exceed that limit to see what happens:

# bad_recursion.py

def recursive():
    recursive()

if __name__ == '__main__':
    recursive()

If you run this code, you should see the following exception thrown: RuntimeError: maximum recursion depth exceeded. Python prevents you from creating a function that ends up in a never-ending recursive loop.

Flattening Lists With Recursion

There are other things you can do with recursion besides factorials, though.
A more practical example would be creating a function to flatten a nested list — for example: # flatten.py def flatten(a_list, flat_list=None): if flat_list is None: flat_list = [] for item in a_list: if isinstance(item, list): flatten(item, flat_list) else: flat_list.append(item) return flat_list if __name__ == '__main__': nested = [1, 2, 3, [4, 5], 6] x = flatten(nested) print(x) When you run this code, you should end up with a list of just integers instead of a list of integers and one list. Of course, there are many other valid ways to flatten a nested list, such as using Python's itertools.chain() (a short sketch of that approach follows at the end of this article). You might want to check out the code behind the chain() class, as it has a very different approach for flattening a list. Wrapping Up Now, you should have a basic understanding of how recursion works and how you can use it in Python. I think it's neat that Python has a built-in limit for recursion to prevent developers from creating poorly constructed recursive functions. I also want to note that in my many years as a developer, I don't think I have ever really needed to use recursion to solve a problem. I am sure there are plenty of problems where the solution could be implemented in a recursive function, but Python has so many other ways to do the same thing that I've never felt the need to do so. One other note I want to bring up is that recursion can be difficult to debug since it is hard to tell what level of recursion you have reached when the bug occurred. Regardless, I hope you have found this article useful. Happy coding!
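As a footnote to the itertools.chain() mention above, here is a minimal sketch (not from the original article) of flattening the same list with chain.from_iterable. Unlike the recursive flatten() above, this only removes one level of nesting, and the scalar items have to be wrapped so that everything fed to chain is iterable.

from itertools import chain

nested = [1, 2, 3, [4, 5], 6]

# Wrap non-list items in a single-element list, then chain everything together.
flat = list(chain.from_iterable(
    item if isinstance(item, list) else [item]
    for item in nested
))

print(flat)  # [1, 2, 3, 4, 5, 6]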
https://dzone.com/articles/python-101-rescursion?fromrel=true
CC-MAIN-2017-51
refinedweb
815
60.04
This article documents some common mistakes that I see again and again in bug reports and requests for help on sites like StackOverflow. Reading from an istream without checking the result A very frequently asked question from someone learning C++ looks like this: I wrote this program but it doesn’t work. It reads from the file correctly but does the calculation wrong. #include <iostream> #include <fstream> int calculate(int a, int b, int c) { // blah blah blah complex calculation return a + b + c; } int main() { std::ifstream in("input.txt"); if (!in.is_open()) { std::cerr << "Failed to open file\n"; return 1; } int i, j, k; in >> i >> j >> k; std::cout << calculate(i, j, k); } Why doesn’t the calculation work? In many, many cases the problem is that the in >> ... statement failed, so the variables contain garbage values and so the inputs to the calculation are garbage. The program has no way to check the assumption ‘it reads from the file correctly’, so attempts to debug the problem are often just based on guesswork. The solution is simple, but seems to be rarely taught to beginners: always check your I/O operations. The improved version of the code in Listing 1 only calls calculate(i, j, k) if reading values into all three variables succeeds. Now if any of the input operations fails you don’t get a garbage result, you get an error that makes the problem clear immediately. You can choose other forms of error handling rather than throwing an exception, the important bit is to check the I/O and not just keep going regardless when something fails. Recommendation: always check that reading from an istream succeeds. Locking and unlocking a std::mutex This is always wrong: std::mutex mtx; void func() { mtx.lock(); // do things mtx.unlock(); } It should always be done using one of the RAII scoped lock types such as lock_guard or unique_lock e.g. std::mutex mtx; void func() { std::lock_guard<std::mutex> lock(mtx); // do things } Using a scoped lock is exception-safe, you cannot forget to unlock the mutex if you return early, and it takes fewer lines of code. Recommendation: always use a scoped lock object to lock and unlock a mutex. Be careful that you don’t forget to give a scoped lock variable a name! This will compile, but doesn’t do what you expect: std::mutex mtx; void func() { std::unique_lock<std::mutex> (mtx); // OOPS! // do things, but the mutex is not locked! } This default-constructs a unique_lock object called mtx, which has nothing to do with the global mtx object (the parentheses around (mtx) are redundant and so it’s equivalent to simply std::unique_lock<std::mutex> mtx;). A similar mistake can happen using braces instead of parentheses: std::mutex mtx; void func() { std::unique_lock<std::mutex> {mtx}; // OOPS! // do things, but the mutex is not locked! } This does lock the global mutex mtx, but it does so in the constructor of a temporary unique_lock, which immediately goes away and unlocks the mutex again. Testing for istream.eof() in a loop A common mistake when using istreams is to try and use eof() to detect when there is no more input: while (!in.eof()) { in >> x; process(x); } This doesn’t work because the eofbit is only set after you try to read from a stream that has already reached EOF. When all the input has been read, the loop will run again, reading into x will fail, and then process(x) is called even though nothing was read. 
The solution is to test whether the read succeeds, instead of testing for EOF: while (in >> x) { process(x); } You should never read from an istream without checking the result anyway, so doing that correctly avoids needing to test for EOF. Recommendation: test for successful reads instead of testing for EOF. Inserting into a container of smart pointers with emplace_back(new X) When appending to a std::vector<std::unique_ptr<X>>, you cannot just say v.push_back(new X), because there is no implicit conversion from X* to std::unique_ptr<X>. A popular solution is to use v.emplace_back(new X) because that compiles (emplace_back constructs an element in-place from the arguments, and so can use explicit constructors). However, this is not safe. If the vector is full and needs to reallocate memory, that could fail and throw a bad_alloc exception, in which case the pointer will be lost and will never be deleted. The safe solution is to create a temporary unique_ptr that takes ownership of the pointer before the vector might try to reallocate: v.push_back(std::unique_ptr<X>(new X)) (You could replace push_back with emplace_back but there is no advantage here because the only conversion is explicit anyway, and emplace_back is more typing!) In C++14 you should just use std::make_unique and it's a non-issue: v.push_back(std::make_unique<X>()) Recommendation: do not prefer emplace_back just because it allows you to call an explicit constructor. There might be a good reason the class designer made the constructor explicit that you should think about and not just take a short cut around it. (Scott Meyers discusses this point as part of Item 42 in Effective Modern C++.) Defining ‘less than’ and other orderings correctly When using custom keys in maps and sets, a common mistake is to define a ‘less than’ operation as in Listing 2. This operator< does not define a valid ordering. Consider how it behaves for X{1, 2} and X{2, 1}: X x1{1, 2}; X x2{2, 1}; assert( x1 < x2 ); assert( x2 < x1 ); The operator< defined above means that x1 is less than x2 but also that x2 is less than x1, which should be impossible! The problem is that the operator< says that l is less than r if any member of l is less than the corresponding member of r. That's like saying that 20 is less than 11 because when you compare the second digits '0' is less than '1' (or similarly, that the string "20" should be sorted before "11" because the second character '0' is less than the second character '1'). In technical terms the operator< above fails to define a Strict Weak Ordering. Another way to define a bogus order is: inline bool operator<(const X& l, const X& r) { return l.a < r.a && l.b < r.b; } Where the first example gave the nonsensical result: x1 < x2 && x2 < x1 this definition gives the result: !(x1 < x2) && !(x2 < x1) In other words, the two values are considered to be equivalent, and so only one of them could be inserted into a unique associative container such as a std::set. But then if you compare those values to X x3{1, 0}, you find that x1 and x3 are equivalent, but x2 and x3 are not. So depending which of x1 and x2 is in the std::set affects whether or not you can add x3! An invalid order like the ones above will cause undefined behaviour if it is used where the Standard Library expects a correct order, e.g. in std::sort, std::set, or std::map.
Trying to insert the x1 and x2 values above into a std::set<X> will give strange results, maybe even crashing the program, because the invariant that a set’s elements are always sorted correctly is broken if the comparison function is incapable of sorting correctly. This is discussed further in Effective STL by Scott Meyers, and in the CERT C++ Coding Standard. A correct implementation of the function would only consider the second member when the first members are equal i.e. inline bool operator<(const X& l, const X& r) { if (l.a < r.a) return true; if (l.a == r.a && l.b < r.b) return true; return false; } Since C++11 defining an order correctly is trivial, just use std::tie: inline bool operator<(const X& l, const X& r) { return std::tie(l.a, l.b) < std::tie(r.a, r.b); } This creates a tuple referring to the members of l and a tuple referring to the members of r, then compares the tuples, which does the right thing (only comparing later elements when the earlier ones are equal). Recommendation: When writing your own less-than operator make sure it defines a Strict Weak Ordering, and write tests to ensure that it doesn’t give impossible results like (a < b && b < a) or (a < a); a small self-check along these lines is sketched at the end of this article. Prefer using std::tie() to create tuples which can be compared, instead of writing your own error-prone comparison logic. Consider whether defining a hash function and using either std::unordered_set or std::unordered_map would be better anyway than std::set or std::map. Unordered containers require an equality operator, but that’s harder to get wrong. In summary The only common theme to the items above is that I see these same mistakes made again and again. I hope documenting them here will help you to avoid them in your own code, and in code you review or comment on. If you know of other antipatterns that could be covered let me or the Overload team know about them, so they can be included in a follow-up piece (or write a follow-up piece yourself!).
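A minimal, illustrative self-check of the kind recommended above (not from the article). It assumes X has two int members a and b, as in the article's examples, defines operator< with std::tie, and asserts that the order is irreflexive and asymmetric for a few sample values.

#include <cassert>
#include <tuple>

struct X { int a; int b; };

inline bool operator<(const X& l, const X& r) {
    return std::tie(l.a, l.b) < std::tie(r.a, r.b);
}

int main() {
    const X x1{1, 2}, x2{2, 1}, x3{1, 0};
    for (const X& v : {x1, x2, x3})
        assert(!(v < v));              // never a < a
    assert(!(x1 < x2 && x2 < x1));     // never both a < b and b < a
    assert(x3 < x1 && x1 < x2);        // the expected ordering holds
}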
https://accu.org/index.php/journals/2271
CC-MAIN-2018-30
refinedweb
1,562
58.72
Exploring Bureau of Labor Statistics Time Series Machine learning models benefit from an increased number of features --- "more data beats better algorithms". In the financial and social domains, macroeconomic indicators are routinely added to models particularly those that contain a discrete time or date. For example, loan or credit analyses that predict the likelihood of default can benefit from unemployment indicators or a model that attempts to quantify pay gaps between genders can benefit from demographic employment statistics. The Bureau of Labor Statistics (BLS) collects information related to the labor market, working conditions, and prices into periodic time series data. Moreover, BLS provides a public API making it very easy to ingest essential economic information into a variety of analytics. However, while they provide raw data and even a few reports that analyze employment conditions in the United States, the tables they provide are more suited towards specialists and the information can be difficult to interpret at a glance. In this post, we will review simple data ingestion of BLS time series data, enabling routine collection of data on a periodic basis so that local models are as up to date as possible. We will then visualize the time series using pandas and matplotlib to explore the series provided with a functional methodology. At the end of this post, you will have a mechanism to fetch data from BLS and quickly view and explore data using the BLS series id key. The BLS API The BLS API currently has two versions, but it is strongly encouraged to use the V2 API, which requires registration. Once you register, you will receive an API Key that will authorize your requests, ensuring that you get access to as many data sets as possible at as high a frequency as possible. The API is organized to return data based on the BLS series id, a string that represents the survey type and encodes which version or facet of the data is being represented. To find series ids, I recommend going to the data tools section of the BLS website and clicking on the "top picks" button next to the survey you're interested in, the series id is provided after the series title. For example, the Current Population Survey (CPS), which provides employment statistics for the United States, lists their series; here are a few examples: - Unemployment Rate - LNS14000000 - Discouraged Workers - LNU05026645 - Persons At Work Part Time for Economic Reasons - LNS12032194 - Unemployment Rate - 25 Years & Over, Some College or Associate Degree - LNS14027689 The series id, in this case, starts with LNS or LNU: LNS14000000, LNU05026645, LNS12032194, and LNS14027689. There are two methods to fetch data from the API. You can GET data from a single series endpoint, or you can POST a list of up to 25 ids to fetch multiple time series at a time. Generally, BLS data sets are fetched in groups, so we'll look at the multiple time series ingestion method. Using the requests.py module, we can write a function that returns a JSON data set for a list of series ids: import os import json import requests BLS_API_KEY = os.environ.get('BLS_API_KEY') BLS_ENDPOINT = "" def fetch_bls_series(series, **kwargs): """ Pass in a list of BLS timeseries to fetch data and return the series in JSON format. 
Arguments can also be provided as kwargs: - startyear (4 digit year) - endyear (4 digit year) - catalog (True or False) - calculations (True or False) - annualaverage (True or False) - registrationKey (api key from BLS website) If the registrationKey is not passed in, this function will use the BLS_API_KEY fetched from the environment. """ if len(series) < 1 or len(series) > 25: raise ValueError("Must pass in between 1 and 25 series ids") # Create headers and payload post data headers = {'Content-Type': 'application/json'} payload = { 'seriesid': series, 'registrationKey': BLS_API_KEY, } # Update the payload with the keyword arguments and convert to JSON payload.update(kwargs) payload = json.dumps(payload) # Fetch the response from the BLS API response = requests.post(BLS_ENDPOINT, data=payload, headers=headers) response.raise_for_status() # Parse the JSON result result = response.json() if result['status'] != 'REQUEST_SUCCEEDED': raise Exception(result['message'][0]) return result This script looks up your API key from the environment, a best practice for handling keys which should not be committed to GitHub or otherwise saved in a place that they can be discovered publicly. You can either change the line to hard code your API key as a string, or you can export the variable in your terminal as follows: $ export BLS_API_KEY=yourapikey The function accepts a list of series ids and a set of generic keyword arguments which are stored as a dictionary in the kwargs variable. The first step of the function is to ensure that we have between 1 and 25 series passed in (otherwise an error will occur). If so, we create our request headers to pass and receive JSON data as well as construct a payload with our request parameters. The payload is constructed with the keyword arguments as well as the registration key from the environment and the list of series ids. Finally, we POST the request, check to make sure it returned successfully, and return the parsed JSON data. To run this function for the series we listed before: >>> series = ['LNS14000000', 'LNU05026645', 'LNS12032194', 'LNS14027689'] >>> data = fetch_bls_series(series, startyear=2000, endyear=2015) >>> print(json.dumps(data, indent=2)) You should see something similar to the following result: { "Results": { "series": [ { "seriesID": "LNS14027689", "data": [ { "year": "2009", "period": "M12", "periodName": "December", "footnotes": [ {} ], "value": "8.7" }, { "year": "2009", "period": "M11", "periodName": "November", "footnotes": [ {} ], "value": "8.8" } [… snip …] ]}]}} From here it is a simple matter to operationalize the routine (monthly) ingestion of new data. One method is to store the data in a relational database like PostgreSQL or SQLite so that complex queries can be run across series. As an example of database ingestion and wrangling, see the github.com/bbengfort/jobs-report repository. This project was a web/D3 visualization of the BLS time series data, but it utilized a routine ingestion mechanism as described in the README of the ingestion module. To simplify data access, we'll use a database dump from that project in the next section, but you can also use the data downloaded as JSON from the API if you wish. For this section, we have created a database of BLS data (using the API) that has two tables: a series table that has information describing each time series and a records table where each row is essentially a tuple of (blsid, period, value) records. 
This allows us to aggregate and query the timeseries data effectively, particularly in a DataFrame. For this section we've dumped out the two tables as CSV files, which can be downloaded here: BLS time series CSV tables. Querying Series Information The first step is to create a data frame from the series.csv file such that we can query information about each time series without having to store or duplicate the data. import pandas as pd info = pd.read_csv('../data/bls/series.csv') info.head() Working backward, we can create a function that accepts a BLS ID and returns the information from the info table: def series_info(blsid, info=info): return info[info.blsid == blsid] # Use this function to lookup specific BLS series info. series_info("LAUST280000000000003") I utilize this function a fair amount to check if I have a time series in my dataset or to lookup seemingly related time series. In fact, we can see a pattern starting to emerge from function and the API fetch function from the last section. Our basic methodology is going to be to create functions that accept one or more BLS series ids and then perform some work on them. Unifying our function signatures in this way and working with our data on a specific key type dramatically simplifies exploratory workflows. However, the BLS ids themselves aren't necessarily informative, so like the previous function, we need an ability to query the data frame. Here are a few example queries: info[info.source == 'LAUS'] This query returns all of the time series whose source is the Local Area Unemployment Statistics (LAUS) program, which breaks down unemployment by state. However, you'll notice from the previous section that the prefixes of the series seem to be related but not necessarily to the source. We could also query based on the prefix to find related series: info[info.blsid.apply(lambda r: r.startswith('LNS14'))] Combining queries like these into a functional methodology will easily allow you to explore the 3,368 series in this dataset and more as you continue to ingest series information using the API! The next step is to load the actual time series data into Pandas. Pandas implements two primary data structures for data analysis --- the Series and DataFrame objects. Both objects are indexed, meaning that they contain more information about the underlying data than simple one or two-dimensional arrays (which they wrap). Typically the indices are simple integers that represent the position from the beginning of the series or frame, but they can be more complex than that. For time series analysis, we can use a Period index, which indexes the series values by a granular interval (by month as in the BLS dataset). Alternatively, for specific events you can use the Timestamp index, but periods do well for our data. To load the data from the records.csv file, we need to construct a Series per time series data structure, creating a collection of them. 
Here's a function to go about this: import csv from itertools import groupby from operator import itemgetter # Load each series, grouping by BLS ID def load_series_records(path='../data/bls/records.csv'): with open(path, 'r') as f: reader = csv.DictReader(f) for blsid, rows in groupby(reader, itemgetter('blsid')): # Read all the data from the file and sort rows = list(rows) rows.sort(key=itemgetter('period')) # Extract specific data from each row, namely: # The period at the month granularity # The value as a float periods = [pd.Period(row['period']).asfreq('M') for row in rows] values = [float(row['value']) for row in rows] yield pd.Series(values, index=periods, name=blsid) In this function we use the csv module, part of the Python standard library, to read and parse each line of our opened CSV file. The csv.DictReader generates rows as dictionaries whose keys are based on the header row of the csv file. Because each record is in the format (blsid, period, value) (with some extra information as well), we can groupby the blsid. This does require the records.csv file to be sorted by blsid since the groupby function simply scans ahead and collects rows into the rows variable until it sees a new blsid. Once we have our rows grouped by blsid we can load them into memory and sort them by time period. The period value is a string in the form 2015-02-01, which is a sortable format; however, if we create a pd.Period from this string the period will have the day granularity. For each row, we create a period using .asfreq('M') to transform the period into the month granularity. Finally, we parse our values into floating points for data analysis and construct a pd.Series object with the values, the monthly period index, and assign a name to it --- the string blsid, which we will continue to use to query our data. This function uses the yield statement to return a generator of pd.Series objects. We can collect all series into a single data frame, indexed correctly as follows: series = pd.concat(list(load_series_records()), axis=1) If you're using our data, the series data frame should be indexed by period, and there should be roughly 183 months (rows) in our dataset. There are also 3366 time series in the data frame represented as columns whose column id is the BLS ID. If any of the series did not have a period matched by the global period index, the concat function correctly fills in that value as np.nan. As you can see from a simple head: the data frame contains a wide range of data, and the domain of every series can be dramatically different. Visualizing Series with Matplotlib Now that we've gone through the data wrangling hoops, we can start to visualize our series using matplotlib and the plotting library that comes with Pandas. The first step is to create a function that takes a blsid as input and uses the series and info data frames to create a visualization: def plot_single_series(blsid, series=series, info=info): title = info.get_value(info[info.blsid == blsid].title.index[0], 'title') series[blsid].plot(title=title) The first thing this function does is look up the title of the series using the series info data frame. To do this it uses the get_value method of the data frame which will return the value of a particular column for a particular row by index. To look up the title by blsid, we will have to query the info data frame for that row, info[info.blsid == blsid], then fetch the index, and then use that to get the 'title' column.
After that we can simply plot the series, fetching it directly from the series data frame and plotting it using Pandas. Warning: Don't try series.plot(), which will try to plot a line for every series (all 3366 of them); I've crashed a few notebooks that way! >>> plot_single_series("LNS12300000") This function is certainly an enhancement of the series_info function from before, allowing us to think more completely about the domain, range, and structure of the time series data for a single blsid. I typically use this function when adding macroeconomic features to datasets to decide if I should use a simple magnitude, or if I should use a slope or a delta, or some other representation of the data based on its shape. Even better though would be the ability to compare a few time series together: def plot_multiple_series(blsids, series=series, info=info): for blsid in blsids: title = info.get_value(info[info.blsid == blsid].title.index[0], 'title') series[blsid].plot(label=title) plt.title("BLS Time Series: {}".format(", ".join(blsids))) plt.legend(loc='best') In this function instead of providing a single blsid, the argument is a list of blsid strings. For each series, we plot them but add their title as a label. This allows us to create a legend with all the series names. Finally, we add a title that indicates the series blsid for reference later. We can now start making visual series comparisons: >>> plot_multiple_series(["LNS14000025", "LNS14000026"]) One thing to note is that comparing these series worked because they had approximately the same range. However, not all series in the BLS data set are in the same range and can be orders of magnitude different. One method to combat this is to provide a normalize=True argument to the plot_multiple_series function and then use a normalization method to bring the series into the range [0,1] (a short sketch of such an option follows at the end of this post). Conclusion The addition of macroeconomic features to models can greatly expand their predictive powers, particularly when they inform the behavior of the target variable. When instances have some time element that can be mapped to a period, then the economic data collected by Census and BLS can be easily incorporated into models. This post was designed to equip you with a workflow for ingesting and exploring macroeconomic data from BLS in particular. By centering our workflow on the blsid of each time series, we were able to create functions that accepted an id or a list of ids and work meaningfully with it. This allows us to connect exploration both on the BLS data explorer, as well as in our data frames. Our exploration process was end-to-end with respect to the data science pipeline. We ingested data from the BLS API, then stored and wrangled that data into a database format. Computation on the timeseries involved the pd.Period and pd.Series objects, which were then aggregated into a single data frame. At all points we explored querying and limiting the data, culminating with visualizing single and multiple timeseries using matplotlib. Helpful Links - BLS Timeseries CSV Tables - BLS Developer Documentation - Jobs Report BLS Database Ingestion - Interactive Exploration of the Employment Situation Report Acknowledgements Nicole Donnelly reviewed and edited this post, Tony Ojeda helped wrangle the datasets.
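As a footnote to the normalization point above, here is a minimal sketch (not from the original post) of what a normalize option for plot_multiple_series might look like. It assumes the same series and info data frames and that matplotlib.pyplot is imported as plt, as in the earlier examples, and simply min-max scales each series into [0, 1] before plotting.

def plot_multiple_series(blsids, series=series, info=info, normalize=False):
    for blsid in blsids:
        title = info.get_value(info[info.blsid == blsid].title.index[0], 'title')
        data = series[blsid]
        if normalize:
            # Rescale to [0, 1] so series with very different magnitudes
            # can be compared on a single axis.
            data = (data - data.min()) / (data.max() - data.min())
        data.plot(label=title)
    plt.title("BLS Time Series: {}".format(", ".join(blsids)))
    plt.legend(loc='best')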
https://www.districtdatalabs.com/exploring-bureau-of-labor-statistics-time-series
CC-MAIN-2018-17
refinedweb
2,747
50.46
I was building a form with Formik and I needed a single checkbox to mark a post as "published". In Formik 1.5.8, my values weren't mapping correctly to checkboxes, so I created a generic Checkbox component to use instead of the Formik Field component. import { Field } from "formik"; export default function Checkbox({ id, name, className }) { return ( <> <Field name={name} render={({ field, form }) => { return ( <input type="checkbox" id={id} className={className} checked={field.value} {...field} /> ); }} /> </> ); } I only used it for a single true/false value, so your mileage may vary if you're working on something else. I extracted the code above from this CodeSandbox, so please check it out. I think it'll show you how to do a little more than my implementation does. It looks like the checkbox issue will be fixed in version 2 of Formik according to its author Jared Palmer, but this should be a workable solution until then. Top comments (4) Working great thank you, I'm using it with TypeScript so here is my component for anybody that may be interested. This post helped me out of a jam, thanks! I had to modify the class prop into className but otherwise it worked great! Glad it helped, and good catch! I changed it to className on my snippet. Cool, but the field can't be unchecked with this solution 😂
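A minimal sketch (not from the original post) of how the Checkbox component above might be dropped into a Formik 1.x form. The form fields, initial values, and submit handler are illustrative only.

import React from "react";
import { Formik, Form } from "formik";
import Checkbox from "./Checkbox"; // the component defined above

export default function PostForm() {
  return (
    <Formik
      initialValues={{ published: false }}
      onSubmit={values => console.log(values)}
    >
      {() => (
        <Form>
          <label htmlFor="published">Published</label>
          <Checkbox id="published" name="published" className="checkbox" />
          <button type="submit">Save</button>
        </Form>
      )}
    </Formik>
  );
}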
https://dev.to/tylerlwsmith/how-to-implement-a-working-checkbox-component-in-formik-1-5-8-5dmj
CC-MAIN-2022-40
refinedweb
227
72.97
Full-stack. Remote-work. Based in Phoenix, AZ. Specializing in APIs, service integrations, DevOps, and prototypes. Salesforce developers have contributed much to the open-source community. Among their many contributions is an important, but perhaps lesser-known, project named oclif. The Open CLI Framework was announced in early 2018 and has since grown to become the foundation for the Salesforce CLI and the Heroku CLI. In this post, we will provide a brief overview of oclif, and then we’ll walk through how to build a simple CLI with oclif. A Brief History of oclif Oclif started as an internal Heroku project. Heroku has always been focused on developer experience, and its CLI sets the standard for working with a service via the API. After all, Heroku is the creator of git push heroku for deployment—a standard now widely used across the industry. If you've ever run heroku ps or sfdx auth:list, then you've used oclif. From the start, oclif was designed to be an open, extensible, lightweight framework for quickly building CLIs, both simple and complex. More than four years after release, oclif has become the authoritative framework for building CLIs. Some of the most popular oclif components see more than a million weekly downloads. The oclif project is still under active development. Some examples of high-profile companies or projects built via oclif include: Why would a developer choose oclif today? There are many reasons one might want to build a CLI. Perhaps your company has an API, and you'd like to make it easier for customers to consume it. Maybe you work with an internal API, and you'd like to run commands via the CLI to automate daily tasks. In these scenarios, you could always write Powershell or Bash scripts or build your own CLI from scratch, but oclif is the best option. Oclif is built on Node.js. It runs on all major operating systems and has multiple distribution options. Along with being fast, oclif is also self-documenting and supports plugins, allowing developers to build and share reusable functionality. As oclif rapidly gains adoption, more and more libraries, plugins, and useful packages are becoming available. For example, cli-ux comes pre-packaged with the @oclif/core package and provides common UX functionality such as spinners and tables, and progress bars, which you can add to your CLI. It's easy to see why oclif is a success and should be your choice for building a CLI. Introduction to Our Mini-project Let’s set the scene for the CLI you will build. You want to build your own CLI for one of your passions: space travel. You love space travel so much that you watch every SpaceX launch live, and you check the HowManyPeopleAreInSpaceRightNow.com page more than you care to admit. You want to streamline this obsession by building a CLI for space travel details, starting with a simple command that will show you the number of people currently in space. Recently, you discovered a service called Open Notify that has an API endpoint for this purpose. We'll use the oclif generate command to create our project, which will scaffold a new CLI project with some sensible defaults. Projects created with this command use TypeScript by default—which is what we'll use for our project—but can be configured to use vanilla JavaScript as well. Creating the Project To start, you’ll need Node.js locally if you don’t already have it. The oclif project requires the use of an active LTS version of Node.js. 
You can verify the version of Node.js that you have installed via this command: / $ node -v v16.15.0 Next, install the oclif CLI globally: / $ npm install -g oclif Now, it’s time to create the oclif project using the generate command: / $ oclif generate space-cli _-----_ | | ╭──────────────────────────╮ |--(o)--| │ Time to build an oclif │ `---------´ │ CLI! Version: 3.0.1 │ ( _´U`_ ) ╰──────────────────────────╯ /___A___\ / | ~ | __'.___.'__ ´ ` |° ´ Y ` Cloning into '/space-cli'... At this point, you will be presented with some setup questions. For this project, you can leave them all blank to use the defaults (indicated by the parentheses), or you can choose to fill them out yourself. The final question will ask you to select a package manager. For our example, choose npm. Starting with oclif’s hello world command From here, oclif will finish creating your CLI project for you. In the bin/ folder, you'll find some node scripts that you can run to test out your CLI while you're developing. These scripts will run the command from the built files in the dist/ folder. If you just run the script as is, you'll see something like this message: / $ cd space-cli/ /space-cli $ ./bin/run oclif example Hello World CLI VERSION space-cli/0.0.0 darwin-arm64 node-v16.15.0 USAGE $ space-cli [COMMAND] TOPICS hello Say hello to the world and others plugins List installed plugins. COMMANDS hello Say hello help Display help for space-cli. plugins List installed plugins. By default, if you don’t specify a command to run for the CLI, it will display the help message. Let’s try again: /space-cli $ ./bin/run hello > Error: Missing 1 required arg: > person Person to say hello to > See more help with --help This time, we received an error. We’re missing a required argument: We need to specify who we’re greeting! /space-cli $ ./bin/run hello John > Error: Missing required flag: > -f, --from FROM Whom is saying hello > See more help with --help We received another helpful error message. We need to specify the greeter as well, this time with a flag: /space-cli $ ./bin/run hello John --from Jane hello John from Jane! (./src/commands/hello/index.ts) Finally, we’ve properly greeted John, and we can take a look at the hello command’s code, which can be found in src/commands/hello/index.ts. It looks like this: import {Command, Flags} from '@oclif/core' export default class Hello extends Command { static description = 'Say hello' static examples = [ `$ oex hello friend --from oclif hello friend from oclif! (./src/commands/hello/index.ts) `, ] static flags = { from: Flags.string({char: 'f', description: 'Whom is saying hello', required: true}), } static args = [{name: 'person', description: 'Person to say hello to', required: true}] async run(): Promise<void> { const {args, flags} = await this.parse(Hello) this.log(`hello ${args.person} from ${flags.from}! (./src/commands/hello/index.ts)`) } } As you can see, an oclif command is simply defined as a class with an async run() method, which unsurprisingly contains the code that is executed when the command runs. In addition, some static properties provide additional functionality, although they're all optional. - The descriptionand examplesproperties are used for the help message. - The flagsproperty is an object which defines the flags available for the command, where the keys of the object correspond to the flag name. We'll dig into those a bit more later. - Finally, argsis an array of objects representing arguments the command can take with some options. 
The run() method parses the arguments and flags and then prints out a message using the person argument and from flag using this.log() (a non-blocking alternative to console.log). Notice both the flag and argument are configured with required: true, which is all it takes to get validation and helpful error messages like those we saw in our earlier testing. Creating our own command Now that we understand the anatomy of a command, we’re ready to write our own. We’ll call it humans, and it will print out the number of people currently in space. You can delete the hello folder in src/commands, since we won't need it anymore. The oclif CLI can help us scaffold new commands, too: /space-cli $ oclif generate command humans _-----_ | | ╭──────────────────────────╮ |--(o)--| │ Adding a command to │ `---------´ │ space-cli Version: 3.0.1 │ ( _´U`_ ) ╰──────────────────────────╯ /___A___\ / | ~ | __'.___.'__ ´ ` |° ´ Y ` create src\commands\humans.ts create test\commands\humans.test.ts No change to package.json was detected. No package manager install will be executed. Now we have a humans.ts file we can edit, and we can start writing our command. The Open Notify API endpoint we will use can be found at the following URL: As you can see in the description, the endpoint returns a simple JSON response with details about the humans currently in space. Replace the code in src/commands/humans.ts with the following: import {Command} from '@oclif/core' import {get} from 'node:http' export default class HumanCommand extends Command { static description = 'Get the number of humans currently in space.' static examples = [ '$ space-cli humans\nNumber of humans currently in space: 7', ] public async run(): Promise<void> { get('', res => { res.on('data', d => { const details = JSON.parse(d) this.log(`Number of humans currently in space: ${details.number}`) }) }).on('error', e => { this.error(e) }) } } Here’s a breakdown of what we’re doing in the code above: - Send a request to the Open Notify endpoint using the httppackage. - Parse the JSON response. - Output the number with a message. - Catch and print any errors we may encounter along the way. For this first iteration of the command, we didn’t need any flags or arguments, so we’re not defining any properties for those. Testing our basic command Now, we can test out our new command. First, we’ll have to rebuild the dist/ files, and then we can run our command just like the hello world example from before: /spacecli $ npm run build > [email protected] build > shx rm -rf dist && tsc -b /spacecli $ ./bin/run humans Number of humans currently in space: 7 Pretty straightforward, isn’t it? You now have a simple CLI project, built via the oclif framework, that can instantly tell you the number of people in space. Enhancing our command with flags and a nicer UI Knowing how many people are currently in space is nice, but we can get even more space data! The endpoint we’re using provides more details about the spacefarers, including their names and which spacecraft they are on. We’ll take our command one step further, demonstrating how to use flags and giving our command a nicer UI. We can output our data as a table with the cli-ux package, which has been rolled into @oclif/core (as of version 1.2.0). To ensure we have access to cli-ux, let’s update our packages. /spacecli $ npm update We can add an optional --table flag to our humans command to print out this data in a table. We use the CliUx.ux.table() function for this pretty output. 
import {Command, Flags, CliUx} from '@oclif/core' import {get} from 'node:http' export default class HumansCommand extends Command { static description = 'Get the number of humans currently in space.' static examples = [ '$ space-cli\nNumber of humans currently in space: 7', ] static flags = { table: Flags.boolean({char: 't', description: 'display who is in space and where with a table'}), } public async run(): Promise<void> { const {flags} = await this.parse(HumansCommand) get('', res => { res.on('data', d => { const details = JSON.parse(d) this.log(`Number of humans currently in space: ${details.number}`) if (flags.table) { CliUx.ux.table(details.people, {name: {}, craft: {}}) } }) }).on('error', e => { this.error(e) }) } } In our updated code, our first step was to bring back the flags property. This time we're defining a boolean flag—it's either there, or it isn't—as opposed to string flags which take a string as an argument. We also define a description and a shorthand -t for the flag in the options object that we're passing in. Next, we parse the flag in our run method. If it's present, we display a table with CliUx.ux.table(). The first argument, details.people, is the data we want to display in the table, while the second argument is an object that defines the columns in the table. In this case, we define a name and a craft column, each with an empty object. We can build and rerun the command with the new table flag to see what that looks like: /spacecli $ ./bin/run humans --table Number of humans currently in space: 10 Name Craft ───────────────── ──────── Oleg Artemyev ISS Denis Matveev ISS Sergey Korsakov ISS Kjell Lindgren ISS Bob Hines ISS Samantha Cristoforetti ISS Jessica Watkins ISS Cai Xuzhe Tiangong Chen Dong Tiangong Liu Yang Tiangong Beautiful! Add some more functionality on your own At this point, our example project is complete, but you can easily build more on top of it. The Open Notify service also provides an API endpoint for the current location of the International Space Station, so you could add a command (say, space-cli iss) that returns the ISS location when run (a rough sketch of such a command follows at the end of this article). What about distribution? You might be thinking about distribution options for sharing your awesome new CLI. You could publish this project to npm via a simple command. You could create a tarball to distribute the project internally to your team or coworkers. You could also create a Homebrew formula if you wanted to share it with macOS users. Oclif can help with each of these distribution options. Conclusion We started this article by reviewing the history of oclif, along with the many reasons why it should be your first choice when creating a CLI. Some of its advantages include speed, extensibility, and a variety of distribution options. We learned how to scaffold a CLI project and add new commands to it, and built a simple CLI as an example. Now that you've been equipped with knowledge and a new tool, go out and be dangerous.
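As a footnote to the "add some more functionality" suggestion above, here is a rough sketch (not from the article) of what an iss command could look like. It follows the same pattern as the humans command; the endpoint URL and the shape of the JSON response (an iss_position object with latitude and longitude fields) are assumptions based on Open Notify's public documentation, so check them before relying on this.

// src/commands/iss.ts  (hypothetical)
import {Command} from '@oclif/core'
import {get} from 'node:http'

export default class IssCommand extends Command {
  static description = 'Get the current location of the International Space Station.'

  public async run(): Promise<void> {
    get('http://api.open-notify.org/iss-now.json', res => {
      let body = ''
      res.on('data', chunk => {
        body += chunk.toString()
      })
      res.on('end', () => {
        // Assumed response shape: { iss_position: { latitude, longitude }, ... }
        const details = JSON.parse(body)
        this.log(`ISS position: lat ${details.iss_position.latitude}, lon ${details.iss_position.longitude}`)
      })
    }).on('error', e => {
      this.error(e)
    })
  }
}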
https://hackernoon.com/building-a-simple-cli-with-oclif
CC-MAIN-2022-33
refinedweb
2,289
65.01
Axis2 Method call on autogenerated WSDL uses schema targetnamespace !?? Discussion in 'Java' started by Robsy, Dec 17, 2006.
http://www.thecodingforums.com/threads/axis2-method-call-on-autogenerated-wsdl-uses-schema-targetnamespace.389482/
CC-MAIN-2015-18
refinedweb
107
57.27
Getting Started Welcome to Vue Test Utils, the official testing utility library for Vue.js! This is the documentation for Vue Test Utils v2, which targets Vue 3. In short: - Vue Test Utils 1 targets Vue 2. - Vue Test Utils 2 targets Vue 3. What is Vue Test Utils? Vue Test Utils (VTU) is a set of utility functions aimed to simplify testing Vue.js components. It provides some methods to mount and interact with Vue components in an isolated manner. Let's see an example: import { mount } from '@vue/test-utils' // The component to test const MessageComponent = { template: '<p>{{ msg }}</p>', props: ['msg'] } test('displays message', () => { const wrapper = mount(MessageComponent, { props: { msg: 'Hello world' } }) // Assert the rendered text of the component expect(wrapper.text()).toContain('Hello world') }) What Next? To see Vue Test Utils in action, take the Crash Course, where we build a simple Todo app using a test-first approach. Docs are split into two main sections: - Essentials, to cover common use cases you'll face when testing Vue components. - Vue Test Utils in Depth, to explore other advanced features of the library. You can also explore the full API. Alternatively, if you prefer to learn via video, there is a number of lectures available here.
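A second, minimal example (not from the docs page, but using the same Vue Test Utils API) showing the interaction side of the library: mounting a component, triggering a DOM event, and asserting on the rendered output.

import { mount } from '@vue/test-utils'

// A tiny counter component defined inline for the test
const Counter = {
  template: '<button @click="count++">Count: {{ count }}</button>',
  data: () => ({ count: 0 })
}

test('increments the count when clicked', async () => {
  const wrapper = mount(Counter)

  // trigger() returns a promise; awaiting it waits for the DOM to update
  await wrapper.get('button').trigger('click')

  expect(wrapper.text()).toContain('Count: 1')
})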
https://test-utils.vuejs.org/guide/
CC-MAIN-2022-40
refinedweb
209
65.73
On Tue, 11 May 1999, Andrea Arcangeli wrote:> >> 1) there are no idle cpu in the system> >> 2) the prev task was in general the less priority one> >> >wrong. We actively _preempt_ processes on other CPUs, this means that a> > If I understand well the only wrong thing I said is point (2). But if you> would read better my email I stated clearly that point (2) it not the pure> reality, but instead I said that it's a very good approximation of the> pure reality.> > >preempted process has all rights to try to replace even lesser priority> >processes on other CPUs. Your problem might be that you are thinking in> > You should rereadd my previous email. I never said it's useless, I have> said that it's not worthwile. it is always 'worthwile' to have a correct scheduler. This was the sole purpose of all the 2.2.8 scheduler changes!> You mean that a preemted process has all rights to preempt a even lesser> priority CPU. But ask you _why_ such process is been preempted. Simply> because in general we can see it as the _less_ priority one. not necesserily, it might as well just be replaced by a RT process ... > I repeat that as global design I prefer to have such call in schedule_tail> even if according to me it's only a performance _hit_. it is not a performance hit at all because most processes reschedule 'voluntarily', ie. they get removed from the runqueue. There is one inconsistency left though, if the previous process was SCHED_YIELD then we should obviously not push it to other CPUs, because it has just given up it's timeslice. (the attached untested patch fixes this) -- mingo --- linux/kernel/sched.c.orig Tue May 11 13:29:39 1999+++ linux/kernel/sched.c Tue May 11 13:38:32 1999@@ -194,10 +194,8 @@ static inline int prev_goodness (struct task_struct * prev, struct task_struct * p, int this_cpu) {- if (p->policy & SCHED_YIELD) {- p->policy &= ~SCHED_YIELD;+ if (p->policy & SCHED_YIELD) return 0;- } return goodness(prev, p, this_cpu); } @@ -659,10 +657,16 @@ */ static inline void __schedule_tail (struct task_struct *prev) {+ if (prev->policy & SCHED_YIELD)+ prev->policy &= ~SCHED_YIELD;+ else {+#ifdef __SMP__+ if ((prev->state == TASK_RUNNING) &&+ (prev != idle_task(smp_processor_id())))+ reschedule_idle(prev);+#endif+ } #ifdef __SMP__- if ((prev->state == TASK_RUNNING) &&- (prev != idle_task(smp_processor_id())))- reschedule_idle(prev); wmb(); prev->has_cpu = 0; #endif /* __SMP__ */
http://lkml.org/lkml/1999/5/11/53
CC-MAIN-2015-14
refinedweb
415
63.59
Parsing search strings We've all had to write a search form at some point. Beyond simple cases, you reach for the big guns, like haystack, et al. But what about when it's just something simple? What if you want to, for instance, let people search your blog posts? In django, that can be done simply with: Blog.objects.filter(Q(title__icontains=word) | Q(body__icontains=word)) Which is fine, until someone wants to look for a post on, for instance "django" and "cookies", as opposed to the phrase "django cookies". So, we need to split up the search terms, and iteratively filter for them: qset = Blog.posts.all() for term in terms.split(' '): qset = qset.filter(Q(title__icontains=term) | Q(body__icontains=term)) Which is all great, until you want to search for "apt-get install" and "postgres" as separate terms. Reflexively you're probably thinking you'd do just that - put quotes around the terms to force groupings. But how do you parse that? Stdlib to the rescue! Of course, someone's had to do this before, and there's a lib for that. But it's not made for searching. It's the shlex library. >>> import shlex >>> shlex.split('"apt-get install" django') ['apt-get install', 'django'] And it's that simple? Well, not quite. If someone leaves unbalanced quotes [or is trying to search for a measurement in inches] shlex will not be happy: >>> shlex.split('macbook 13"') Traceback (most recent call last): ... ValueError: No closing quotation To solve this we need to go just a little bit deeper. The shlex.split function is really a wrapper for the shlex.shlex class, in a default and [generally] useful configuration. We need to configure a shlex to our needs. from shlex import shlex def parse_search_terms(terms): lexer = shlex(terms) lexer.commenters = '' lexer.quotes = '"\'' while True: term = lexer.get_token() if not term: break yield term Now we have a generator what will yield search terms, and can now easily filter our queryset! >>> list(parse_search_terms('13" macbook')) ['13"', 'macbook']
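Putting the pieces together, here is a minimal sketch (not in the original post) of how the generator can drive the iterative queryset filtering shown earlier. It assumes the Blog model with title and body fields from the examples above is importable, and that parse_search_terms is defined as in the post.

from django.db.models import Q

def search_posts(terms):
    # Each parsed term must appear in the title or the body (terms are
    # AND-ed together, the two fields are OR-ed for each term).
    qset = Blog.objects.all()
    for term in parse_search_terms(terms):
        qset = qset.filter(Q(title__icontains=term) | Q(body__icontains=term))
    return qset

# search_posts('"apt-get install" postgres') applies two filters: one for
# the quoted phrase "apt-get install" and one for the word "postgres".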
http://musings.tinbrain.net/blog/2015/may/03/parsing-search-strings/
CC-MAIN-2018-22
refinedweb
339
77.94
By: Charlie Calvert Abstract: Read a gentle introduction to the basic facts about an exciting new feature found in the Java 1.5 beta. Formerly generics were only available to C++ and Python programmers. Now Java has them too! This article is designed to show you the simplest possible example of creating a Generic class in Java 1.5. The text was written against Java 1.5 Beta 1, which can be downloaded generically from, or more specifically from. There are various complexities involved with using generics, and various philosophical issues surrounding the particular implementation of generics chosen by the designers of languages such as Java or C#. In this article I'm going to side step those issues and just show you the basics of how to use generics. I will, however, emphasize the role that type checking plays in both the Java and C# implementation of generics. This emphasis skirts around the edges of several controversial matters without ever quite cluttering this article with confusing issues not relevant to an understanding of the basics. The primary goal of generics in languages such as C++ and Java is to allow a single class to work with a wide variety of types. For instance, the ArrayList object has always had the ability to store a list of any type of class. However, ArrayList has always forced you to typecast the objects that you pulled out of the list: String myString = (String)myArrayList.get(0); This system effectively destroys the benefits of a strongly typed language. Java statements like the one just shown end up nullifying the safety that accompanies built in type checking. As a result, the generic version of the ArrayList class would be designed to work natively with any type of class, but to preserve the benefits of type checking. With the generic version of the ArrayList class, your code would look like this: String myString = myArrayList.get(0); Notice the absence of the typecast. It would be a mistake, however, to assume that you could assign anything to the return value of the get method without needing a typecast. If you tried to assign anything else besides a String to the output of the get method, you would encounter a compile time type mismatch such as this one: found : java.lang.String required: java.lang.Integer Integer data = list.get(0); ^ Take a brief look at the complete code for the example we have just been discussing, but don't dwell on it too long: ArrayList <String> list = new ArrayList<String>(); list.add("Generic String"); String data = list.get(0); JOptionPane.showMessageDialog(this, data); Glancing at this code, you probably noticed the word <String> appearing in brackets. You can think about that syntax as simply being a way of telling the Java compiler that you want this generic ArrayList to hold objects of type String. You are, in a sense, binding the ArrayList object to the String type via the use of this syntax. After you have done this, it would be illegal to try to bind an Integer or some other non-String type to the result of the get function: int data = list.get(0); // illegal!!! must be a String because of ArrayList <String> list declaration. For me, the easiest way to come to grips with generics, or many other new technologies, is to see a very simple example. Starting from this foundation, I can begin to build a deeper understanding of how the technology works. Below you will find the declaration for a very simple generic class called BasicGeneric.
Beneath it you will find another simple class called GenSample that contains two methods, test01 and test02, that exercise the BasicGeneric class. Note that this second class has a static main method so that it can be run as an application. I suggest putting both classes in a single file called GenSample.java. When compiling the example shown in Listing 1, you must add source "1.5" when invoking javac. It is probably also helpful to add the -version flag to the call. In most SDKs, version was not supported in Javac before 1.5, so it can help alert you if you are accessing the wrong version of the binary: javac -version -source "1.5" -sourcepath src -d classes src/applet_generics01/AppletGenerics.java Listing 1: GenSample.java contains a very simple generic class called BasicGeneric, and a second class called GenSample that calls it. class BasicGeneric <A>{ private A data; public BasicGeneric(A data) { this.data = data; } public A getData() { return data; }}public class GenSample{ public GenSample() { } public String test01(String input) { String data01 = input; BasicGeneric<String> basicGeneric = new BasicGeneric<String>(data01); String data02 = basicGeneric.getData(); return data02; } public int test02(int input) { Integer data01 = new Integer(input); BasicGeneric <Integer> basicGeneric = new BasicGeneric<Integer>(data01); Integer data02 = basicGeneric.getData(); return data02; } public static void main(String [] args) { GenSample sample = new GenSample(); System.out.println(sample.test01("This generic data")); System.out.println(sample.test02(12)); }} Notice the declaration for BasicGeneric: class BasicGeneric <A> Here you can see the brackets that surround the capital letter A: <A>. This syntax specifies that the class is a generic type. Notice also that the class declares a variable of type A: private A data; This syntax can be confusing at first glance. But it is quite sensible when you begin to understand generics. BasicGeneric does not work with any specific type. As a result, we don't assign a type to A. It is a generic type. But when you declare an instance of this class, you must specify the type with which you want to work: BasicGeneric<String> basicGeneric Notice the <String> syntax after the declaration BasicGeneric. That is where you declare that this instance of BasicGeneric is going to work with variables of type String. You can also ask it to work with variables of type Integer, or of any other type that catches your fancy. Here is an example of how to declare an instance of this type that will work with an Integer: BasicGeneric<Integer> basicGeneric Once you understand this much, the rest should start to fall into place. Notice, for instance, the declaration of the getData method: public A getData() { return data; } This method returns a value of type A, which is to say that it will return a generic type. But that does not mean that the function will not have a type at run time, or even at compile time. After you declare an instance of BasicGeneric, you will have specified the type of A. After that, BasicGeneric will act as if it were declared from the very beginning to work with that specific type, and that type only.
You should now be able to look at the two instances of using the BasicGeneric type and make sense of them: BasicGeneric<String> basicGeneric = new BasicGeneric<String>(data01); String data02 = basicGeneric.getData(); BasicGeneric <Integer> basicGeneric = new BasicGeneric<Integer>(data01); Integer data02 = basicGeneric.getData(); In one case the BasicGeneric type is designed to work with Strings, in the other case it is designed to work with Integers. What could be simpler or more straight forward? In the end, the basic facts about generics turn out to be fairly easy to understand. The syntax for using them is clean, and relatively intuitive once you get over a few fairly low conceptual hurdles. The power of the syntax should also be fairly obvious. I should add, however, that this subject can get a bit trickier once you start digging beneath the surface that we have mapped out in this simple introductory article. Also, there are certain intentional limitations in the Java and C# implementations of generics which are not inherent in the C++ implementation. Hopefully in the future I will find time to explain some of these complexities. However, it should be clear that the basic syntax for using this tool is simple and straight forward. It should also not be hard to imagine some obvious ways to use this syntax in your own programs; one small illustrative sketch follows below. The best thing to do then, is to download a copy of the Java J2SE 1.5 and copy my example and get to work experimenting with this very fascinating new tool.
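One small illustrative sketch (not from the article) of the kind of thing you might build with this syntax: a generic Pair class in the same style as BasicGeneric, holding two values of possibly different types. The class and method names here are just examples.

class Pair<K, V> {
    private K first;
    private V second;

    public Pair(K first, V second) {
        this.first = first;
        this.second = second;
    }

    public K getFirst() { return first; }
    public V getSecond() { return second; }
}

// Usage: both values come back with their declared types, no casts needed.
// Pair<String, Integer> entry = new Pair<String, Integer>("age", new Integer(42));
// String key = entry.getFirst();
// Integer count = entry.getSecond();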
http://edn.embarcadero.com/article/32054
crawl-002
refinedweb
1,372
54.32
Please avoid all use of the "LL" suffix for long-long integer literals (1) By Nick Gammon (nickgammon) on 2021-04-07 01:36:11 [link] [source] I recently upgraded SQLite3 from version 3.16.2 to 3.35.3. Somewhere in those changes were introduced compiler errors for my compiler (MS Visual Studio 6.0) on four lines, namely the ones with long-long constants with the LL suffix. In the (current) amalgamation (3.35.4) these lines are, in sqlite3.c: 32373: if( v>4294967296LL ){ *pI = 0; return 0; } 77185: && i >= -2251799813685248LL && i < 2251799813685248LL); 88849: if( pIn1->u.i<=140737488355327LL && pIn1->u.i>=-140737488355328LL ){ 89051: }else if( uu<=140737488355327LL ){ I request that the suffix LL be removed from these four lines (and neighbouring ones calling testcase - however that is not impacting me personally), in order to avoid the compiler error: C2059 'bad suffix on number' According to the C standard, decimal integer constants without a suffix are automatically promoted to int, long int or long long int as required to hold the value in the constant. Thus the explicit use of LL is not required. I note that there was previously a ticket along similar lines: "Avoid all use of the "LL" suffix for long-long integer literals.". This is referenced here: Doing this would not impact more modern compilers, and would avoid compiler errors on older compilers. Thanks in advance. (2) By Warren Young (wyoung) on 2021-04-07 02:01:34 in reply to 1 [link] [source] Ironically, not having them also causes problems. Visual Studio 6.0 is from 1998. Are you absolutely certain you can't upgrade? Current Microsoft tools have been free for quite a long time now. (3.1) By Nick Gammon (nickgammon) on 2021-04-07 04:12:52 edited from 3.0 in reply to 2 [link] [source] Lol, some people want LL and some don't! You can't win, eh? I can upgrade, it's just a pain as I usually use Ubuntu for my development, and I have Visual Studio installed in a virtual Windows image. There isn't enough disk free to install later versions. I know that's my problem, but if it could be fixed in the source that would have been good. I can always fix those 4 lines manually from time to time. I should point out that there is another place in the source where a long long constant is not so qualified. So right now, neither party is satisfied. At sqlite3.c at line 44310: sqlite3_int64 t64; t64 = *t; t64 = (t64 + 11644473600)*10000000; // <---- HERE (4) By Warren Young (wyoung) on 2021-04-07 05:06:25 in reply to 3.1 [link] [source] There isn't enough disk free to install later versions. Two suggestions: Switch to the Ubuntu-hosted MinGW cross-compiler. Install the Build Tools package, which is the command-line only version of VC++. To replace the IDE bits, you could run whatever text editor you like on the Ubuntu side, then Alt-Tab over to the VM to build. I can always fix those 4 lines manually from time to time. Better, clone the SQLite source repo, make your changes, then commit the changes to a branch.
Then you simply pull the upstream changes from time to time and merge them into your branch: $ fossil clone # needs Fossil 2.14+ $ cd sqlite3 $ vi src/whatever.c # make your changes $ ./configure && make sqlite3.c # test it $ fossil set autosync pullonly # needed once only $ fossil ci --branch my-special-version -m "initial version commit message" Then later, automatically merge your changes into the current version: $ fossil merge trunk # pull upstream & merge $ make sqlite3.c # test again $ fossil ci -m "updated to latest upstream" # commit once happy This will work automatically as long as the upstream version doesn't also change your changed lines, or anything sufficiently close to them. If you get a merge conflict, you can do the manual merge once, then probably be fine for years again. (5) By Keith Medcalf (kmedcalf) on 2021-04-07 05:18:16 in reply to 3.1 [source] The current MinGW64/GCC (10.2) compiler set for Windows is just over 1 GB in size and includes the C/C++/ADA compiler sets. Other versions can be found here: Note that this is the compiler sets only and not the MSYS/MSYS2 thing. (6) By Nick Gammon (nickgammon) on 2021-04-07 07:28:28 in reply to 5 [link] [source] Thank you everyone for your very kind suggestions. :) Unfortunately I am <sigh> using MFC so it isn't as simple as just finding a compiler that runs on Ubuntu. I think I'll just live with changing four lines every 6 months or so, as I don't upgrade to the latest SQLite code as soon as it is released. One suggestion though: Could the five constants in question be placed in a suitable place, guarded by #if defined(_MSC_VER) so that two versions could exist: one with LL and one without? Then the defined versions could be used in the code rather than the literal constants. The constants being: 4294967296 // 2**32 2251799813685248 // 2**51 -2251799813685248 // -2**51 140737488355327 // 2**47 - 1 -140737488355328 // -2**47 Doing this looks better than using "magic numbers" in the code, and the comments could be used to explain why that number and not some other one. (7) By dab on 2021-04-13 01:06:32 in reply to 6 [link] [source] If it makes you feel better, I am also a VC6 user (for complex reasons). You are not alone. (8) By Scott Robison (casaderobison) on 2021-04-13 01:30:11 in reply to 6 [link] [source] Or use a macro to qualify them on a platform basis: #if defined(_MSC_VER) #define LL(x) (x##I64) #else #define LL(x) (x##LL) #endif if (somevar == LL(123456789012)) { somecode }
https://sqlite.org/forum/info/9ba1033f71384776
CC-MAIN-2022-21
refinedweb
989
71.34
I've been trying to work out this error with packages, changing the classpath name in a number of different ways (using set, declaring it in the javac command), but I always get the following errors when I try to compile the following code: import kalut.*; import java.lang.Math.*; public class Koetesti { public static void main(String[] args) { Koe a = new Koe(); a.x = 5; System.out.println("a.x: " + a.x); } } ku-hupnet193-75:~/lahdekoodit/javaohj dshaw$ javac Koetesti.java Koetesti.java:1: package kalut does not exist import kalut.*; ^ Koetesti.java:6: cannot access Koe bad class file: ./Koe.java file does not contain class Koe Please remove or make sure it appears in the correct subdirectory of the classpath. Koe a = new Koe(); ^ 2 errors It seems as if it's looking for the class file in the current directory, despite the fact that I have the classpath set to ./classes (more specifically, /Users/dshaw/lahdekoodit/javaohj/classes). It almost seems like I have some stupid error like incorrect case, but I don't, and still nothing helps. What am I not doing? Edit: I realised that it's the .java file, not the .class file, that the compiler uses for packages. I also noticed that the compiler wants the package files to be in a directory that is named after the package, in my case, in the directory "kalut". But for some reason it only works when the folder "kalut" is in the current working directory, not the "./classes" directory as I have the classpath defined. Any reasons for this?
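For reference, here is a minimal layout and compile command that usually satisfies the compiler; the file and directory names below are assumptions based on the snippet above, and Koe.java must begin with a package kalut; declaration:

javaohj/
    Koetesti.java
    kalut/
        Koe.java

$ cd javaohj
$ javac -classpath . -d classes Koetesti.java

The classpath (and -sourcepath, if given) is searched for a kalut/ subdirectory, so pointing it at ./classes only works if that directory itself contains kalut/Koe.java or an already-compiled kalut/Koe.class.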
https://www.daniweb.com/programming/software-development/threads/94687/classpath-errors
CC-MAIN-2018-43
refinedweb
264
66.54
sqlite3 Provides Dart bindings to SQLite via dart:ffi. For an example on how to use this library from Dart, see the example. Supported platforms You can use this library on any platform where you can obtain a DynamicLibrary of sqlite. Here's how to use this library on the most popular platforms: - Android: Flutter users can depend on the sqlite3_flutter_libs package to ship the latest sqlite3 version with their app - iOS: Contains a built-in version of sqlite that this package will use by default. When using Flutter, you can also depend on sqlite3_flutter_libs to ship the latest sqlite3 version with your app. - Linux: You need to install an additional package (like libsqlite3-dev on Debian), or you manually ship sqlite3 with your app (see below) - macOS: Contains a built-in version of sqlite that this package will use by default. - Windows: You need to manually ship sqlite3 with your app (see below) On Android and iOS, you can depend on the sqlcipher_flutter_libs package to use SQLCipher instead of SQLite. Just be sure to never depend on both sqlcipher_flutter_libs and sqlite3_flutter_libs! Manually providing sqlite3 libraries Instead of using the sqlite3 library from the OS, you can also ship a custom sqlite3 library along with your app. You can override the way this package looks for sqlite3 to instead use your custom library. For instance, if you release your own sqlite3.so next to your application, you could use: import 'dart:ffi'; import 'dart:io'; import 'package:sqlite3/open.dart'; import 'package:sqlite3/sqlite3.dart'; void main() { open.overrideFor(OperatingSystem.linux, _openOnLinux); final db = sqlite3.openInMemory(); // Use the database db.dispose(); } DynamicLibrary _openOnLinux() { final scriptDir = File(Platform.script.toFilePath()).parent; final libraryNextToScript = File('${scriptDir.path}/sqlite3.so'); return DynamicLibrary.open(libraryNextToScript.path); } Just be sure to first override the behavior and then use sqlite3. Supported datatypes When binding parameters to queries, the supported types are int, double, String, List<int> (for BLOB) and null. Result sets will use the same set of types. Libraries - open - Utils to open a DynamicLibrary on platforms that aren't supported by default. - sqlite3 - Dart bindings to sqlite3.
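As a quick, hypothetical illustration of those types in use, a parameterized query might look roughly like the sketch below; the execute, select and dispose calls are exposed by this package's Database class, but check the current API docs for the exact signatures:

final db = sqlite3.openInMemory();
db.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, photo BLOB)');
// int, double, String, List<int> and null can all be bound as parameters.
db.execute('INSERT INTO users (name, photo) VALUES (?, ?)', ['Alice', null]);
final rows = db.select('SELECT id, name FROM users WHERE name = ?', ['Alice']);
for (final row in rows) {
  print('${row['id']}: ${row['name']}');
}
db.dispose();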
https://pub.dev/documentation/sqlite3/latest/
CC-MAIN-2021-21
refinedweb
350
57.98
Recording Results¶ import trueq as tq To begin, we initialize a 3-qubit circuit. It is important that we add three measurement operations to the circuit because the total number of measurement operations dictates the length of bitstrings that are allowed by the circuit. circuit = tq.Circuit({(0, 1): tq.Gate.cz, (2,): tq.Gate.x}) circuit.measure_all() circuit We can manually set the value of results with anything dict-like. This is cast to a Results object when stored in the circuit. The order of the bits in the bitstring, from left to right, corresponds to the order of the measurement operations in the circuit with respect to sorted qubit labels. Thus the left-most bit in “011” corresponds to the measurement of qubit 0 in this example. circuit.results = {"011": 5, "101": 15} circuit.results Out: Results({'011': 5, '101': 15}) The results object has many convenience methods. For example, we can add the “111” result and print the updated results. circuit.results["111"] = 10 circuit.results Out: Results({'011': 5, '101': 15, '111': 10}) Increment the “111” result and print the updated results. circuit.results["111"] += 5 circuit.results Out: Results({'011': 5, '101': 15, '111': 15}) Add multiple results and print. circuit.results += {"010": 1, "110": 10} circuit.results Out: Results({'011': 5, '101': 15, '111': 15, '010': 1, '110': 10}) Bitstrings can be replaced by their decimal equivalent. circuit.results += {3: 1, 6: 10} circuit.results Out: Results({'011': 6, '101': 15, '111': 15, '010': 1, '110': 20}) Results can also be set by passing vectors of the correct size. circuit.results.from_vec([0, 0, 0, 1, 0, 0, 10, 0]) circuit.results.vec Out: array([ 0, 0, 0, 1, 0, 0, 10, 0]) Here, we display the results entered so far as a bar plot. circuit.results.plot() Total running time of the script: ( 0 minutes 0.204 seconds) Gallery generated by Sphinx-Gallery
https://trueq.quantumbenchmark.com/examples/fundamentals/recording_results.html
CC-MAIN-2021-49
refinedweb
320
67.45
On Thu, May 23, 2002 at 04:32:39PM -0700, Karl E. Jorgensen wrote: > On Thu, May 23, 2002 at 02:38:15PM -0700, Petro wrote: <major snipage> > > Yes. Or just figuring out if there is even a wreck, how it > happened > > etc. > > > with the intent of restoring the "wreckage" rather than scrapping > it. > > > Reread the quoted text from Karl. > > I know what he is saying, and he's right in a limited way. If your > > entire ability to administer a system involves unpacking .debs and > > answering the configure questions they ask, a static shell is > > pointless. > > I'm not in that position. > > I have to disagree with your implication ("entire ability"....). I > suspect that you have some high levels of frustration showing through > here. Yes, there is some level of frustration. > My previous post was assuming: > a) you have hosed some essential library - ld-linux, libc, whatever. > b) you want to repair it (later on it transpired that you were happy > just > to rescue the data before scrapping the lot, which will mean a > different approach). > > With that in mind, you may well want to re-extract the damaged file(s) > out of a .deb (e.g. one you have conveniently left floating around in > /var/cache/apt/archives). If one has a small set of utilities that do not depend on external libraries, an SA with a bit of creative thinking and nimble fingers can accomplish a lot. The others (those without creative thinking and nimble fingers) are screwed blue anyway. The desire is for a small set of standard tools statically compiled so *IF NOTHING ELSE* I can determine just how badly a system is horked. There are about 37.38 billion ways a system can wind up in an unstable or stochastic state, from mv * .. in /lib (I did a much more complex and lengthy equivalent of that in a shell script on an OS X box 2 weeks ago) to memory or filehandle exhaustion, to a corrupt file etc. Some of these *do* require a reinstall. Some of these just require a reboot. Others require different handling. In an installation with more than 1 computer, and the right tools statically linked, most would not require the ability to extract .debs, but if all that requires is ar and gzip, then that's not too much to ask. I'm starting a list of tools that should be available in a static format. I don't know what I'm going to do about it yet, but I have discovered a weirdness in the bash Debian source package. There is a define in debian/rules in that package that is: # build a statically linked bash? with_static = yes However, the default rules file doesn't use it anywhere, and adding: ifeq ($(with_static),yes) conf_args += --enable-static-link endif to what appears to be the appropriate section gives this diff: 115,117c115,117 < LIBS = $(BUILTINS_LIB) $(LIBRARIES) < LDFLAGS = -static $(STATIC_LD) $(LOCAL_LDFLAGS) $(PROFILE_FLAGS) $(CFLAGS) < STATIC_LD = -static --- > LIBS = $(BUILTINS_LIB) $(LIBRARIES) -ldl > LDFLAGS = $(STATIC_LD) $(LOCAL_LDFLAGS) $(PROFILE_FLAGS) > $(CFLAGS) > STATIC_LD = Which, oddly enough, doesn't seem to build a static bash executable. Hmmm... -- My last cigarette was roughly 32 days, 14 hours, 7 minutes ago. YHBW
https://lists.debian.org/debian-user/2002/05/msg03323.html
CC-MAIN-2017-47
refinedweb
551
72.97
Mustache. Overview Think of Mustache as a replacement for your views. Instead of views consisting of ERB or HAML with random helpers and arbitrary logic, your views are broken into two parts: a Ruby class and an HTML template. We call the Ruby class the "view" and the HTML template the "template." Why? I like writing Ruby. I like writing HTML. I like writing JavaScript. I don't like writing ERB, Haml, Liquid, Django Templates, putting Ruby in my HTML, or putting JavaScript in my HTML. Usage Quick example: >> require 'mustache' => true >> Mustache.render("Hello {{planet}}", :planet => "World!") => "Hello World!" We've got an examples folder but here's the canonical one: class Simple < Mustache def name "Chris" end def value 10_000 end def taxed_value value * 0.6 end def in_ca true end end We simply create a normal Ruby class and define methods. Some methods reference others, some return values, some return only booleans. Now let's write the template: Hello {{name}} You have just won {{value}} dollars! {{#in_ca}} Well, {{taxed_value}} dollars, after taxes. {{/in_ca}} This template references our view methods. To bring it all together, here's the code to render actual HTML: Simple.render Which returns the following: Hello Chris You have just won 10000 dollars! Well, 6000.0 dollars, after taxes. Simple. Tag Types For a language-agnostic overview of Mustache's template syntax, see the mustache(5) manpage or. Escaping Mustache does escape all values when using the standard double Mustache syntax. Characters which will be escaped: & \ " < >. To disable escaping, simply use triple mustaches like {{{unescaped_variable}}}. Example: Using {{variable}} inside a template for 5 > 2 will result in 5 &gt; 2, whereas the usage of {{{variable}}} will result in 5 > 2. Dict-Style Views ctemplate and friends want you to hand a dictionary to the template processor. Mustache supports a similar concept. Feel free to mix the class-based and this more procedural style at your leisure. Given this template (winner.mustache): Hello {{name}} You have just won {{value}} bucks! We can fill in the values at will: view = Winner.new view[:name] = 'George' view[:value] = 100 view.render Which returns: Hello George You have just won 100 bucks! We can re-use the same object, too: view[:name] = 'Tony' view.render Hello Tony You have just won 100 bucks! Templates A word on templates. By default, a view will try to find its template on disk by searching for an HTML file in the current directory that follows the classic Ruby naming convention. TemplatePartial => ./template_partial.mustache You can set the search path using Mustache.template_path. It can be set on a class by class basis: class Simple < Mustache self.template_path = File.dirname(__FILE__) ... etc ... end Now Simple will look for simple.mustache in the directory it resides in, no matter the cwd. If you want to just change what template is used you can set Mustache.template_file directly: Simple.template_file = './blah.mustache' Mustache also allows you to define the extension it'll use. Simple.template_extension = 'xml' Given all other defaults, the above line will cause Mustache to look for './blah.xml' Feel free to set the template directly: Simple.template = 'Hi {{person}}!' Or set a different template for a single instance: Simple.new.template = 'Hi {{person}}!' Whatever works. Views Mustache supports a bit of magic when it comes to views. If you're authoring a plugin or extension for a web framework (Sinatra, Rails, etc), check out the view_namespace and view_path settings on the Mustache class.
They will surely provide needed assistance. Helpers What about global helpers? Maybe you have a nifty gravatar function you want to use in all your views? No problem. This is just Ruby, after all. module ViewHelpers def gravatar gravatar_id = Digest::MD5.hexdigest(self[:email].to_s.strip.downcase) gravatar_for_id(gravatar_id) end def gravatar_for_id(gid, size = 30) "#{gravatar_host}/avatar/#{gid}?s=#{size}" end def gravatar_host @ssl ? '' : '' end end Then just include it: class Simple < Mustache include ViewHelpers def name "Chris" end def value 10_000 end def taxed_value value * 0.6 end def in_ca true end def users User.all end end Great, but what about that @ssl ivar in gravatar_host? There are many ways we can go about setting it. Here's one example which illustrates a key feature of Mustache: you are free to use the initialize method just as you would in any normal class. class Simple < Mustache include ViewHelpers def initialize(ssl = false) @ssl = ssl end ... etc ... end Now: Simple.new(request.ssl?).render Finally, our template might look like this: <ul> {{# users}} <li><img src="{{ gravatar }}"> {{ login }}</li> {{/ users}} </ul> Sinatra Mustache ships with Sinatra integration. Please see lib/mustache/sinatra.rb or for complete documentation. An example Sinatra application is also provided: If you are upgrading to Sinatra 1.0 and Mustache 0.9.0+ from Mustache 0.7.0 or lower, the settings have changed. But not that much. See this diff for what you need to do. Basically, things are named properly now and all should be contained in a hash set using set :mustache, hash. Rack::Bug Mustache also ships with a Rack::Bug panel. In your config.ru add the following code: require 'rack/bug/panels/mustache_panel' use Rack::Bug::MustachePanel Using Rails? Add this to your initializer or environment file: require 'rack/bug/panels/mustache_panel' config.middleware.use "Rack::Bug::MustachePanel" Vim Thanks to Juvenn Woo for mustache.vim. It is included under the contrib/ directory. See for installation instructions. Emacs mustache-mode.el is available at TextMate See for installation instructions. Command Line See mustache(1) man page or for command line docs. Installation RubyGems $ gem install mustache Acknowledgements Thanks to Tom Preston-Werner for showing me ctemplate and Leah Culver for the name "Mustache." Special thanks to Magnus Holm for all his awesome work on Mustache's parser. Contributing Once you've made your great commits: - Fork Mustache - Create a topic branch - git checkout -b my_branch - Push to your branch - git push origin my_branch - Create an Issue with a link to your branch - That's it! You might want to check out Resque's Contributing wiki page for information on coding standards, new features, etc. Mailing List To join the list simply send an email to [email protected]. This will subscribe you and send you information about your subscription, including unsubscribe information. The archive can be found at. Meta - Code: git clone git://github.com/defunkt/mustache.git - Bugs: - List: [email protected] - Gems: You can also find us in #{ on irc.freenode.net.
https://devhub.io/repos/jdberry-mustache
CC-MAIN-2019-51
refinedweb
1,076
69.07
Although FME offers close to 500 different transformers, in some cases a user may wish to apply a specific ArcGIS geoprocessing tool to their data. Using ArcPy within FME's PythonCaller transformer, a workspace author may incorporate such a tool directly into their FME workflow, effectively extending FME's geoprocessing capabilities. However, ArcPy features and FME features have very different structures, and applying ArcPy tools to data on FME's feature-by-feature basis is quite a complex task. So, instead of trying to convert data from one structure to another, the method we recommend is to write the features out to a temporary Geodatabase, run the ArcGIS tool on that dataset using ArcPy, and then read the results back into the FME workspace. In this example we are going to use the ArcGIS Dissolve tool on a set of input polygons. Note: Functionality mentioned in this article requires that a licensed version of ArcGIS be installed on the same computer as FME. Example workspace: arcpy-example.fmw The source dataset is a set of zoning polygons that we want to dissolve on the basis of a common Zoning Category. Input Output To do this we will use the ArcPy Dissolve tool (documentation). The completed workspace wraps the entire process into an FME custom transformer: This creates a generic solution where, for example, it is easier for the user to choose attributes to group features to be dissolved. Inside the custom transformer, a number of steps must be carried out to prepare the data for geoprocessing. The first step is to change the feature type (layer/table name) to something consistent. In this case we use an AttributeCreator to tag each feature with a feature type value called ‘pre-dissolve’: This will be the name of the table created in the temporary geodatabase. The next step is to create a temporary file location for this data. A special transformer called the TempPathnameCreator is used to create the temporary location. However, because we only wish to generate a single location, we first separate a single feature from the rest with a Sampler and send that one feature to the TempPathnameCreator. A VariableSetter and VariableRetriever are then used to copy the temporary location to all the other features. NB: The temporary folder created by the TempPathnameCreator will be automatically cleaned up when the workspace finishes, removing all the files you have written there. Since we are going to put an entire Geodatabase folder in there, this is very useful.
Finally, a ParameterFetcher transformer is used to retrieve the list of attributes to dissolve by, extracting it from the User Parameter 'Group By'. A PythonCaller transformer carries out the ArcPy processing. The code is as follows:

import fme
import fmeobjects
import arcpy

def processFeature(feature):
    # Get dissolve FC and settings from feature attributes
    dataset = feature.getAttribute('_dataset')
    dissolveFields = feature.getAttribute('_Group_Attrs').split(",")
    arcpy.env.workspace = dataset

    # Set local variables
    inFeatures = "pre_dissolve"
    outFeatureClass = dataset + "/dissolved"

    # Execute Dissolve using group by attributes as Dissolve Fields
    arcpy.Dissolve_management(inFeatures, outFeatureClass, dissolveFields, "", "MULTI_PART", "DISSOLVE_LINES")

This code fetches information about the dataset from FME attributes (the feature.getAttribute function) and then executes the ArcPy Dissolve tool using that information. Once the Python script's work is complete and the dissolve is done, getting the features back into the FME workspace requires a FeatureReader transformer. The summary feature that triggered the Python script is used as input, providing the source Geodatabase location through an attribute. We specify the dissolved feature class in the Feature Types to Read so we don’t read the original feature class as well. After cleaning up the temporary attributes, the dissolved features are output from the transformer. This same process can be used for any ArcPy feature processing that works on an entire feature class at a time. If ArcGIS is available on the same computer as FME, its geoprocessing tools can be used to extend FME's functionality through the use of the PythonCaller, FeatureWriter and FeatureReader.
https://knowledge.safe.com/articles/47216/using-arcpy-for-fme-feature-processing.html?smartspace=fme-desktop-apiscripting
CC-MAIN-2020-05
refinedweb
1,059
53.21
Odds and LogOdds Odds The probability of an event occurring is a simple ratio of “instances where it happens” divided by “all possibilities”, or $\frac{ObservedTrue}{AllObservations}$ For example, rolling a 1 on a 6-sided die is $\frac{1}{6} = .166667$ By contrast, the odds of an event is a flat look at “instances that it happens” against “instances that it doesn’t happen”. In our dice example, we’d simply have $1:5$ Odds Ratio Alternatively, we could express the odds of an event as a ratio of $\frac{Pr(Occurring)}{Pr(NotOccurring)}$ This gives us a single number for ease of interpretation: think 199:301 vs 0.661 Log Odds However, it should be immediately obvious that interpretation of the scale of the odds ratio leaves something to be desired. For instance, if something happens with 1:6 odds, the odds ratio is .166. Conversely, if we were looking at 6:1 odds, the odds ratio would be 6.0. Indeed, if something is more likely to happen, the odds ratio will be some value between 1 and infinity. On the other hand, if it’s less likely, it will simply be bounded between 0 and 1. This is where the log function proves to be particularly useful, as it gives a symmetric interpretation of two numbers in odds, symmetric around 0. Taking the log of the above, we’ve got: 1:6 -> 0.166 -> log(0.166) -> -0.77 6:1 -> 6.0 -> log(6.0) -> 0.77 Log Odds and Logistic Regression Another useful application of the log odds is in expressing the effect of one unit change in a variable. Because the logit function has a non-linear shape

%pylab inline
X = np.linspace(-3, 3, 100)
y = 1 / (1 + np.exp(-X))
plt.plot(X, y);

Populating the interactive namespace from numpy and matplotlib

one step in the X direction will yield a variable change in y, depending on where you started. This short video does a great job running through the math of it, but the log odds can be expressed linearly, with respect to X, as $\ln(\frac{p}{1-p}) = \beta_0 + \beta_1 X$ Exponentiating both sides, we can see that a unit increase in X is equivalent to multiplying the odds by $e^{\beta_1}$
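To spell that last step out with the same notation, the model says the odds at a given $X$ are $\frac{p}{1-p} = e^{\beta_0 + \beta_1 X}$, so comparing $X+1$ against $X$ gives $\frac{e^{\beta_0 + \beta_1 (X+1)}}{e^{\beta_0 + \beta_1 X}} = e^{\beta_1}$. In other words, each one-unit step in $X$ multiplies the odds by the same factor $e^{\beta_1}$, no matter where on the curve you start, which is exactly the linearity the log-odds scale buys you.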
https://napsterinblue.github.io/notes/stats/basics/odds_log_odds/
CC-MAIN-2021-04
refinedweb
397
57.81
Laziness: Clojure vs Haskell Last week I punted on randomness, and just made my genetic_search function take a [Double]. While that was convenient, it is unfortunately not as general as I thought at the time. I'm still in the process of learning Haskell, and I got confused between laziness in Clojure and laziness in Haskell. So how do they differ? The one-word answer is "purity", but let me try to expand on that a little bit. Clojure has the "seq" abstraction, which is defined by the two functions first and next, where first gives you an element and next gives you another seq if there are more elements to be had, or nil if there are none. When I think of a lazy list, I think in terms of Clojure seqs, even though Clojure lists are actually not lazy. How is that different from Haskell? In Clojure, a lazy seq is one where the elements are explicitly produced on demand, and then cached. This sounds a lot like laziness in Haskell, except for one crucial difference: Clojure does not mind how these elements are produced, and in particular, whether that involves any kind of side effects. In Haskell, on the other hand, laziness means that the elements of a list will be computed on demand, but that only applies to pure computation. There is no lazy side effect in Haskell. Here is a short Clojure program to illustrate this notion further. This program will ask the user for positive integers and then print a running total. First, we start with a seemingly pure function which, given a possibly lazy seq of numbers, returns its sum so far: (defn sum-so-far [ls] (reductions + 0 ls)) Testing this out yields the expected behaviour: t.core=> (sum-so-far [1 2 3 4 5]) (0 1 3 6 10 15) t.core=> The range function, with no argument, returns an infinite lazy seq of integers, starting with 0. We can use our sum-so-far function on it: t.core=> (take 10 (sum-so-far (range))) (0 0 1 3 6 10 15 21 28 36) t.core=> We can also construct a lazy seq by asking the user for input: (defn read-nums [] (print "Please enter a number: ") (flush) (cons (Integer/parseInt (read-line)) (lazy-seq (read-nums)))) Testing it works as expected: t.core=> (take 5 (read-nums)) Please enter a number: 1 Please enter a number: 2 Please enter a number: 3 Please enter a number: 4 Please enter a number: 5 (1 2 3 4 5) t.core=> Similarly, we can easily compute the running sum: t.core=> (take 5 (sum-so-far (read-nums))) Please enter a number: 1 Please enter a number: 2 Please enter a number: 3 Please enter a number: 4 (0 1 3 6 10) t.core=> So we have this one function, sum-so-far, that can handle any Clojure seq, regardless of how it is processed, and produces a new seq itself. Such a function is better thought of as a filter acting on a stream than as a function taking an argument and returning a result. Let's look at the Haskell equivalent. The sum-so-far function seems easy enough: sum_so_far :: [Int] -> [Int] sum_so_far is = loop 0 is where loop :: Int -> [Int] -> [Int] loop sum [] = [sum] loop sum (h:t) = sum : loop (sum + h) t I don't know if Haskell has a direct equivalent to Clojure's reductions, but that's not the point here and it's easy enough to code our own. This obviously works as intended on both finite and infinite lists: *Main> sum_so_far [1, 2, 3, 4, 5] [0,1,3,6,10,15] *Main> let ints = 1:map (+1) ints *Main> take 10 $ sum_so_far ints [0,1,3,6,10,15,21,28,36,45] *Main> But what about read-nums? It's not too hard to replicate the beginning of the function: read_nums :: IO [Int] read_nums = do int <- read <$> getLine return int : ???
How can we replace those ???? Well, the : ("cons") function needs a list as its second argument, so we could try constructing that: read_nums :: IO [Int] read_nums = do int <- read <$> getLine tl <- read_nums return $ int : tl That typechecks. But the whole point of monads is to sequence computation: there is no way we can reach the return line without having first produced the entire tl, and thus we're not lazy anymore and can't pile on more processing on top of this. What other option do we have? Perhaps Hoogle knows of a function that would help here? The type we'd need would look something like: Int -> IO [Int] -> IO [Int] so we could replace : and use that instead. Hoogle does find a result for that, but it's not quite what we need here. We could of course write a function with that signature easily enough: ignore_tl :: Int -> IO [Int] -> IO [Int] ignore_tl i _ = pure [i] but that's obviously not what we want. Are we stuck? Let's take a step back. The solution here is to realize that Clojure seqs are not the same as Haskell lists. Instead of thinking of sum-so-far as a function, let's go back to the idea of thinking of it as a filter between two streams. What would it take to construct such a filter in Haskell? We'd need a type with the following operations: - Produce an element in the "output" stream. - Request an element from the "input" stream. - Let my consumer know that I will not be producing further elements. The decoupling Clojure gives us is to be completely independent of how the input elements are produced and to produce the output elements on-demand. Let's model this a bit more precisely. We need a data definition OnDemand that represents a filter between two streams of elements. The filter could change the type, so we'll make it take two type parameters: an input one and an output one. We start with: data OnDemand input output Next, we need to be able to express "here is an element" and "there are no more elements". We can take a page from the List book and express those exactly like Nil and Cons: = Halt | Out output (OnDemand input output) Finally, we need to be able to ask for a new element, wait for it, and then keep going. This phrasing suggests we need to suspend the current computation to give our supplier the opportunity to manufacture an input, and keep going after that. A generally good way to model suspended computations is with a continuation: | In (Maybe input -> OnDemand input output) where the input parameter is wrapped in a Maybe because the input stream may have run out. Using this definition, we can rewrite our sum_so_far as a filter: {-# LANGUAGE LambdaCase #-} {- ... -} sum_so_far :: OnDemand Int Int sum_so_far = loop 0 where loop :: Int -> OnDemand Int Int loop sum = Out sum (In $ \case Nothing -> Halt Just n -> loop (sum + n)) We can make this work on lists again with a simple adapter: convert_list :: OnDemand input out -> [input] -> [out] convert_list = \case Halt -> \_ -> [] Out out cont -> \ls -> out : convert_list cont ls In f -> \case [] -> convert_list (f Nothing) [] hd:tl -> convert_list (f $ Just hd) tl and we can use (convert_list sum_so_far) as we did before, with both finite and infinite lists: *Main> (convert_list sum_so_far) [1, 2, 3, 4, 5] [0,1,3,6,10,15] *Main> let ints = 1:map (+1) ints *Main> take 10 $ (convert_list sum_so_far) ints [0,1,3,6,10,15,21,28,36,45] *Main> But let's stay in the realm of streams for a bit.
First, let's define a simple function to produce a stream from a list: out_list :: [a] -> OnDemand () a out_list [] = Halt out_list (h:t) = Out h (out_list t) Then, let's define a function to drain a stream into a list: drain :: OnDemand () b -> [b] drain = \case Halt -> [] Out b kont -> b : drain kont In f -> drain $ f Nothing Now, we can do fun stuff like *Main> drain $ out_list [1, 2, 3, 4] [1,2,3,4] *Main> Ok, so maybe that's not so much fun yet. We'd like to be able to express the equivalent of take 10 $ sum_so_far $ ints. Let's first work on each of these pieces. We can get a stream of naturals with ints :: OnDemand () Int ints = loop 1 where loop n = Out n (loop (n + 1)) and we can limit a stream to a given number of elements with: take_od :: Int -> OnDemand a a take_od 0 = Halt take_od n = In (\case Nothing -> Halt Just a -> Out a (take_od $ n - 1)) We now have all the pieces. What's the equivalent of $? We need to take two filters and return a filter that combines them. Here is the code: join :: OnDemand a b -> OnDemand b c -> OnDemand a c join inner outer = case outer of Halt -> Halt Out c kont -> Out c (join inner kont) In f -> case inner of Out b kont -> join kont (f (Just b)) Halt -> join Halt (f Nothing) In g -> In (\ma -> join (g ma) outer) We start from the outer filter. If that one says to stop, we don't need to look into any more input from the inner filter. If we have an output ready, we can just produce that. So far, so good. What happens if the outer filter needs an input? Well, in that case, we need to look at the inner one. Does it have an output ready? If so, we can just feed that into the outer filter. Is it halted? We can feed that information into the outer filter by calling its continuation with Nothing. Finally, if the inner filter itself is also waiting for an input, we have no other choice but to ask for more input from the context. We can now have a bit more fun: *Main> drain $ ints `join` sum_so_far `join` take_od 20 [0,1,3,6,10,15,21,28,36,45,55,66,78,91,105,120,136,153,171,190] *Main> This may look like the beginnings of a useful abstraction. But can we do IO with it? Let's try to write read_nums. We still cannot write an OnDemand () (IO Int) that would be useful, just like we could not write a useful IO [Int]. But the whole point of this OnDemand stuff is to do operations one at a time. So let's define a function that gets a single integer: import qualified Text.Read {- ... -} read_num :: IO (Maybe Int) read_num = Text.Read.readMaybe <$> getLine We cannot create an infinite stream of lazy IO actions. We've already gone through that rabbit hole.
But what we can do is define a function that will run a filter within the IO context and generate all the required elements on demand: process :: IO (Maybe a) -> OnDemand a b -> IO [b] process io = \case Halt -> return [] Out hd k -> do tl <- process io k return $ hd : tl In f -> do input <- io process io (f input) Now we can use the exact same sum_so_far filter with values coming from pure and impure contexts: *Main> drain $ ints `join` sum_so_far `join` take_od 10 [0,1,3,6,10,15,21,28,36,45] *Main> process read_num $ sum_so_far `join` take_od 5 1 2 3 4 [0,1,3,6,10] *Main> Here is the full code for reference: {-# LANGUAGE LambdaCase #-} module Main where import qualified Text.Read data OnDemand a b = Halt | Out b (OnDemand a b) | In (Maybe a -> OnDemand a b) sum_so_far :: OnDemand Int Int sum_so_far = loop 0 where loop :: Int -> OnDemand Int Int loop sum = Out sum (In $ \case Nothing -> Halt Just n -> loop (sum + n)) convert_list :: OnDemand input out -> [input] -> [out] convert_list = \case Halt -> \_ -> [] Out out cont -> \ls -> out : convert_list cont ls In f -> \case [] -> convert_list (f Nothing) [] hd:tl -> convert_list (f $ Just hd) tl out_list :: [a] -> OnDemand () a out_list [] = Halt out_list (h:t) = Out h (out_list t) drain :: OnDemand () b -> [b] drain = \case Halt -> [] Out b kont -> b : drain kont In f -> drain $ f Nothing ints :: OnDemand () Int ints = loop 1 where loop n = Out n (loop (n + 1)) take_od :: Int -> OnDemand a a take_od 0 = Halt take_od n = In (\case Nothing -> Halt Just a -> Out a (take_od $ n - 1)) join :: OnDemand a b -> OnDemand b c -> OnDemand a c join inner outer = case outer of Halt -> Halt Out c kont -> Out c (join inner kont) In f -> case inner of Out b kont -> join kont (f (Just b)) Halt -> join Halt (f Nothing) In g -> In (\ma -> join (g ma) outer) print_od :: Show b => OnDemand a b -> IO () print_od = \case Halt -> return () In _ -> print "Error: missing input" Out b k -> do print b print_od k read_num :: IO (Maybe Int) read_num = Text.Read.readMaybe <$> getLine process :: IO (Maybe a) -> OnDemand a b -> IO [b] process io = \case Halt -> return [] Out hd k -> do tl <- process io k return $ hd : tl In f -> do input <- io process io (f input) main :: IO () main = do let same_filter = sum_so_far `join` take_od 10 let pure_call = drain $ ints `join` same_filter sidef_call <- process read_num $ same_filter print pure_call print sidef_call And here is a sample invocation: $ stack run <<< $(seq 1 9) [0,1,3,6,10,15,21,28,36,45] [0,1,3,6,10,15,21,28,36,45] $ So, how hard is it to map all of these learnings to our little genetic algorithm from last week? A lot easier than it may seem, actually. First, we need to add the OnDemand data definition: +data OnDemand a b + = Halt + | Out b (OnDemand a b) + | In (a -> OnDemand a b) + Next, we need to change the exec_random function: since we're going to ask for random values from our caller explicitly, we don't need to carry around a list anymore. In fact, we don't need to carry any state around anymore, which makes this monad look almost unnecessary. Still, it offers a slightly nicer syntax for client functions (GetRand instead of explicit continuations). It's also quite nice that almost none of the functions that use the monad need to change here.
-exec_random :: WithRandom a -> [Double] -> ([Double] -> a -> b) -> b -exec_random m s cont = case m of - Bind ma f -> exec_random ma s (\s a -> exec_random (f a) s cont) - Return a -> cont s a - GetRand -> cont (tail s) (head s) +exec_random :: WithRandom a -> (a -> OnDemand Double b) -> OnDemand Double b +exec_random m cont = case m of + Bind ma f -> exec_random ma (\a -> exec_random (f a) cont) + Return a -> cont a + GetRand -> In (\r -> cont r) The biggest change is the signature of the main genetic_search function: instead of getting a [Double] as the last input and returning a [(solution, Double)], we now just return a OnDemand Double (solution, Double). - -> [Double] - -> [(solution, Double)] -genetic_search fitness mutate crossover make_solution rnd = - map head $ exec_random init - rnd - (\rnd prev -> loop prev rnd) + -> OnDemand Double (solution, Double) +genetic_search fitness mutate crossover make_solution = + exec_random init (\prev -> Out (head prev) (loop prev)) where - loop :: [(solution, Double)] -> [Double] -> [[(solution, Double)]] - loop prev rnd = prev : exec_random (step prev) - rnd - (\rnd next -> loop next rnd) + loop :: [(solution, Double)] -> OnDemand Double (solution, Double) + loop prev = exec_random (step prev) (\next -> Out (head next) (loop next)) The changes here are mostly trivial: we just remove the manual threading of the random list, and add one explicit Out to the core loop. Finally, we of course need to change the call in the main function to actually drive the new version and provide random numbers on demand. This is a fairly trivial loop: - print $ map snd - $ take 40 - $ genetic_search fitness mutate crossover mk_sol rands + loop rands 40 $ genetic_search fitness mutate crossover mk_sol + where + loop :: [Double] -> Int -> OnDemand Double ((Double, Double), Double) -> IO () + loop rs n od = + if n == 0 + then return () + else case od of + Halt -> return () + In f -> do + next_rand <- pure (head rs) + loop (tail rs) n (f next_rand) + Out v k -> do + print v + loop rs (n - 1) k Obviously in an ideal scenario the next_rand <- pure (head rs) could be more complex; the point here is just to illustrate that we can do any IO we want to produce the next random element. The full, updated code can be found here (diff).
https://cuddly-octo-palm-tree.com/posts/2021-03-28-lazy-io/
CC-MAIN-2022-40
refinedweb
2,615
61.9
Mingw MinGW (historically, MinGW32) is a way to cross-compile Windows binaries on Linux or any other OS. It can also be used natively on Windows to avoid using Visual Studio, etc. More information on the MinGW project can be found at MinGW.org. Contents MinGW32 Toolchain Start with emerging the crossdev tool: root # emerge --ask sys-devel/crossdev This article assumes you want to build a 32Bit toolchain. If you want to compile for a 64Bit target instead, replace the crossdev target i686-pc-mingw32 with x86_64-w64-mingw32. Now with this tool, emerge the mingw32 toolchain: root # crossdev -t i686-pc-mingw32 You may try adding --ex-insight and/or --ex-gcc. These have not been known to build. --ex-gdb will give you GDB and likely will work, but it is not very useful on Linux because MinGW GCC by default makes PE's (EXE files), not ELF files, and gdb has no idea how to run an EXE file on Linux. A remote debugger (with a Windows target machine) is a possibility but a long shot. Notes about the toolchain: - GCJ sources will not compile due to missing makespec files that do not get installed (copying from MinGW from Windows does not work either) - OpenMP is forcefully disabled in the ebuild for the time being even if you enable it in your USE flags Uninstallation root # crossdev -C i686-pc-mingw32 If files are left over (such as libraries and things you have added), you will be prompted to remove the /usr/i686-pc-mingw32 directory recursively. Using Portage Some things work. Most things do not. Try with USE="-*" after a failed build, then selectively add USE flags you need. If that does not work, then you probably cannot use Portage to install the package desired for use with MinGW. Using Portage, you may run into problems such as the following: - Application wants GDBM (see below) - Application wants to link with ALSA/OSS/Mesa/other library only useful to X or Linux Emerging sys-libs/zlib: root # i686-pc-mingw32-emerge sys-libs/zlib GDBM These are "Standard GNU database libraries" according to Portage. Many libraries and applications depend on this. Successfully compiled before, but the current version in Portage does not compile. A patch is very much needed. build.logexcerpt i686-pc-mingw32-gcc -c -I. -I. -march=k8 -msse3 -O2 -pipe gdbmfetch.c -DDLL_EXPORT -DPIC -o .libs/gdbmfetch.lo gdbmopen.c: In function 'gdbm_open': gdbmopen.c:171: error: storage size of 'flock' isn't known gdbmopen.c:171: error: 'F_RDLCK' undeclared (first use in this function) gdbmopen.c:171: error: (Each undeclared identifier is reported only once gdbmopen.c:171: error: for each function it appears in.) gdbmopen.c:171: error: 'F_SETLK' undeclared (first use in this function) gdbmopen.c:177: error: storage size of 'flock' isn't known gdbmopen.c:177: error: 'F_WRLCK' undeclared (first use in this function) To get around this problem for the moment, try building with USE="-*". Libraries OpenSSL SDL Example Emerge SDL: root # i686-pc-mingw32-emerge media-libs/libsdl Try compiling this source code (save to test.c). 
test.c #include <SDL/SDL.h> #include <windows.h> void cool_wrapper(SDL_Surface **s, int flags) { *s = SDL_SetVideoMode(640, 480, 32, flags); return; } int main(int argc, char *argv[]) { int flags; SDL_Surface *s; SDL_Init(SDL_INIT_VIDEO); flags = SDL_OPENGL; /* Enable OpenGL */ flags |= SDL_GL_DOUBLEBUFFER; /* Enable double-buffering */ flags |= SDL_HWPALETTE; /* Enable storing palettes in hardware */ flags |= SDL_RESIZABLE; /* Enable window resizing */ cool_wrapper(&s, flags); Sleep(5000); SDL_FreeSurface(s); SDL_Quit(); return 0; } Use the following command to build: user $ i686-pc-mingw32-gcc -o test.exe test.c `/usr/i686-pc-mingw32/usr/bin/sdl-config --libs` Test with Wine (requires SDL.dll to be somewhere in Wine's %PATH%, which includes the same directory as the EXE): user $ cp /usr/i686-pc-mingw32/usr/bin/SDL.dll . user $ wine test.exe If you get a window named SDL_app, then it worked. The window will automatically exit after about 5 seconds (the Windows Sleep() function halts execution for 5000 milliseconds). Porting POSIX Threads to Windows Windows thread functions seem to work fine with MinGW. The following example code will compile without error: win32_threads.c #include <windows.h> #include <stdio.h> #include <stdlib.h> #define NUM_THREADS 5 DWORD print_hello(LPVOID lpdwThreadParam); int main(int argc, char *argv[]) { int i; DWORD dw_thread_id; for (i = 0; i < NUM_THREADS; i++) { if (CreateThread(NULL, /* Default security level */ 0, /* Default stack size */ (LPTHREAD_START_ROUTINE)&print_hello, /* Routine to execute */ (LPVOID)&i, /* Thread paramater */ 0, /* Run immediately */ &dw_thread_id /* Thread ID */ ) != NULL) { printf("In main: Creating thread %d\n", i); Sleep(1000); } else { printf("Error: Failed to create the %d\n", i); exit(EXIT_FAILURE); } } exit(EXIT_SUCCESS); } /* Thread routine */ DWORD print_hello(LPVOID lpdwThreadParam) { printf("Thread #%d responding\n", *(int*)lpdwThreadParam); return 0; } Compile with: user $ i686-pc-mingw32-gcc -o win32_threads.exe win32_threads.c (The call to Sleep() will make the thread creation a little more closer to POSIX, more in order, and there will not be duplicate runs.) However, many applications rely upon POSIX threads and do not have code for Windows thread functionality. The POSIX Threads for Win32 project provides a library for using POSIX thread-like features on Windows (rather than relying upon Cygwin). It basically wraps POSIX thread functions to Win32 threading functions ( pthread_create()-> CreateThread() for example). Be aware that not everything is implemented on either end (however do note that Chrome uses this library for threading on Windows). Regardless, many ported applications to Windows end up using POSIX Threads for Win32 because of convenience. With this library you can get the best of both worlds as Windows thread functions work fine as show above. To get Pthreads for Win32: - Go to the Sourceware FTP and download the header files to your includes directory for MinGW (for me this is /usr/i686-pc-mingw32/usr/include). - Go to the Sourceware FTP and download only the .a files to your lib directory for MinGW (for me this is /usr/i686-pc-mingw32/usr/lib).' - At the same directory, get the DLL files (only pthreadGC2.dll and pthreadGCE2.dll; others are for Visual Studio) and place them in the bin directory of your MinGW root (for me this is /usr/i686-pc-mingw32/usr/bin). 
Example POSIX threads code: win32_posix_threads.c #include <pthread.h> #include <stdio.h> #include <stdlib.h> #define NUM_THREADS 5 void *print_hello(void *thread_id) { long tid; tid = (long)thread_id; printf("Thread #%ld responding.\n", tid); pthread_exit(NULL); return NULL; } int main(int argc, char *argv[]) { pthread_t threads[NUM_THREADS]; pthread_attr_t attr; int rc, status; long i; for (i = 0; i < NUM_THREADS; i++) { printf("In main: creating thread %ld\n", i); rc = pthread_create(&threads[i], NULL, print_hello, (void *)i); if (rc) { printf("Error: return code from pthread_create() is %d\n", rc); exit(EXIT_FAILURE); } } pthread_attr_destroy(&attr); for (i = 0; i < NUM_THREADS; i++) { rc = pthread_join(threads[i], (void **)&status); if (rc) { printf("Error: return code from pthread_join() is %d\n", rc); exit(EXIT_FAILURE); } printf("Completed join with thread %ld, status = %d\n", i, status); } pthread_exit(NULL); exit(EXIT_SUCCESS); } Compile with: user $ i686-pc-mingw32-gcc -o posix_threads.exe -mthreads posix_threads.c -lpthreadGC2 It is VERY important that -lpthreadGC2 or -lpthreadGCE2 is at the END of the command. With i686-pc-mingw32-objdump -p posix_threads.exe we can see that we need pthreadGC2.dll. If you linked with -lpthreadGCE2 (exception handling POSIX threads), you will need mingwm10.dll, pthreadGCE2.dll, and possibly libgcc_s_sjlj-1.dll (the last one only if you do not compile with the CFLAG -static-libgcc with g++). Copy the DLL file(s) required to the directory and test with Wine. For example: user $ cp /usr/i686-pc-mingw32/usr/bin/pthreadGC2.dll . user $ wine posix_threads.exe If all goes well, you should see output similar to the following: In main: creating thread 0 In main: creating thread 1 Thread #0 responding. In main: creating thread 2 Thread #1 responding. In main: creating thread 3 Thread #2 responding. In main: creating thread 4 Thread #3 responding. Thread #4 responding. Completed join with thread 0, status = 0 Completed join with thread 1, status = 0 Completed join with thread 2, status = 0 Completed join with thread 3, status = 0 Completed join with thread 4, status = 0 You will probably always want to include -mthreads in your CFLAGS for any code that relies on thread-safe exception handling. From the manpage: -mthreads - Support thread-safe exception handling on MinGW 32. Code that relies on thread-safe exception handling must compile and link all code with the -mthreads option. When compiling, -mthreads defines -D_MT; when linking, it links in a special thread helper library -lmingwthrd which cleans up per thread exception handling data. Wine and %PATH% Like Windows, Wine supports environment variables. You may specify the path of your DLLs (for example, the MinGW bin directory) in the registry at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment (for me this value would be Z:\usr\i686-pc-mingw32\usr\bin). I recommend against this as you might forget to distribute DLLs with your application binaries. No need for -lm If you #include <math.h> and make use of any of its functions, there is no need to link with the standard C math library using the -lm switch with gcc or g++. DirectX DirectX 9 headers and libs are included. Link with -ldx9. For the math functions (such as MatrixMult), unlike Windows, you need to dynamically link with -ld3dx9d and then include d3dx9d.dll (where you get this file SHOULD be from Microsoft's SDK). This is the same for DirectX 8. There is no support for DirectX 10 or 11 yet.
Minimal support for Direct2D has been implemented via a patch (search the official mailing list of MinGW).
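As a final sanity check of the cross-toolchain itself, independent of the libraries discussed above, it can help to build and run a plain hello-world program the same way as the earlier examples. This is only an illustrative sketch; the file names are arbitrary:

hello.c
#include <stdio.h>

int main(void) {
    /* If this builds and runs under Wine, the basic toolchain works. */
    printf("Hello from a MinGW cross-build\n");
    return 0;
}

user $ i686-pc-mingw32-gcc -o hello.exe hello.c
user $ wine hello.exe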
https://wiki.gentoo.org/wiki/Mingw
CC-MAIN-2015-22
refinedweb
1,615
64.91
Let’s talk about the comma operator , in C. You’ve seen it plenty of times already: In variable initialization int a = 5, b = 6; and in for loops. for (int i = 0, j = 3; i < 5 && j < 5; i++, j++) But do you really know what’s going on? Haven’t you been the least bit curious? This class is a “death to abstraction” after all, so let’s clear up this bit of haze surrounding this comma operator as well. While there is no overloading in C, the comma is actually overloaded to be both an operator and a separator. In the above two examples, it is serving as a separator between multiple expressions. The case of being a separator is not so interesting. Let’s look at the operator example instead. What is the output of the following program comma.c? #include <stdio.h> int main() { int a = 1, 2, 3; printf("a : %d\n", a); return 0; } Trick question! Doesn’t compile. Oops. What about this? Here is comma1.c: #include <stdio.h> int main() { int a; a = 1, 2, 3; printf("a : %d\n", a); return 0; } And comma2.c: #include <stdio.h> int main() { int a; a = (1, 2, 3); printf("a : %d\n", a); return 0; } So what’s the output? Go on, take a guess – don’t just lazily scroll down. Guess! The answer is that comma1.c outputs 1 and comma2.c outputs 3. So what’s going on? As it turns out, the comma ( , ) has lower precedence than the assignment operator ( = ). This means that in comma1.c, a gets assigned 1, and only afterwards the other two expressions are evaluated. If you compile this code with -Wall, it will show you the following warning: warning: expression result unused [-Wunused-value] The other two expressions in the line simply return 2 and 3, but since the return values are not assigned to anything, the compiler just disregards them. In comma2.c, however, the ( ) operator has higher precedence than the assignment operator. So in this case, the final return value is the result of the grouped expression; thus, 3 is returned to be assigned to a, and the return values 1 and 2 are ignored. Remember that expressions are evaluated left to right and the return value is that of the rightmost expression. With this in mind, the following statement makes more sense all of a sudden. int array[] = {23,34,12,17,204,99,16}; The braces { } group statements together. The comma operator returns a value, and the braces group these comma statements together, and the result is a new statement that groups the statements within together; we call such a grouping an array. Let’s return to the second case, where the comma can be used as a separator for expressions as well. This can be a particularly useful way of grouping together multiple short, concise statements. int j = (func1(), func2()); Or for a more context-filled example: int main() { int x = 10, y; // The following is equivalent to y = x++ y = (x++, printf("x = %d\n", x), ++x, printf("x = %d\n", x), x++); // Note that last expression is evaluated // but side effect is not updated to y printf("y = %d\n", y); printf("x = %d\n", x); return 0; } I hope this puzzle clears up some of the mystery with the comma operator! Here are some further resources to understand commas.
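One more place the operator form shows up in practice is in loop conditions, where you want a side effect to run before every test. The following is a small illustrative program (not from the original write-up) that echoes input until end of file:

#include <stdio.h>

int main() {
    int c;
    /* The comma operator evaluates getchar() first; the loop condition is
       the value of the rightmost expression, c != EOF. */
    while (c = getchar(), c != EOF) {
        putchar(c);
    }
    return 0;
}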
https://cs50.notablog.xyz/puzzle/Puzzle5.html
CC-MAIN-2018-43
refinedweb
584
65.12
Before we dive into the actual topic, Comments in Python, let’s understand why we need to add comments to our code.

Purpose of adding Comments in Python
In a programming language, we can define comments as an explanation of the source code of a program. In general, comments are added to make the source code easier for programmers to understand. Comments should explain the programmer’s intent: the logic behind the code, rather than the code itself. Comments are generally ignored by compilers and interpreters. It’s always good programming practice to use comments, because if someone wants to modify the code in the future, it’s easier for them to understand the source code with the help of comments.

Uses of adding Comments in Python
Debugging
Sometimes you find it difficult to debug a large piece of code. In those cases, you can comment out a block of code, which will be ignored during program execution, making it easier to find the source of the error.
Metadata
Comments in Python are often used to store metadata about a program file. The metadata contains information like
- the name of the person who created the program file.
- the date when it was created.
- the names of other people who have edited the code so far.

Syntax of Python comments
The syntax for comments varies according to the programming language. In C++, we use // for single-line comments and /* ... */ for multi-line comments. Similarly, Python has different syntax for single-line and multi-line comments.

Single-line comment in Python
In Python, we use # to create a single-line comment. Whatever we add after the # will be ignored by the interpreter.

# assigning string to a variable
var='Hello world'
#printing a variable
print(var)

Multi-line comment in Python
Using #
We can use # at the beginning of every line for a multi-line comment.

#It is an
#example of
#multi-line
#comment

Using string literals
We can also use string literals as comments. String literals that are not assigned to any variable are ignored by the interpreter.

'this is a comment'
print('Hello World!')

Hello World!

When we run the program, the interpreter ignores the string literal and executes only the print statement. We can use single quotes, double quotes, or triple quotes for a string literal. In general, we use triple quotes for a multi-line string literal, hence we can use triple quotes for a multi-line comment as well.

"""
Example of
multi-line comment
"""
print('Hello World!')

Docstrings
Docstrings are documentation strings in Python. The triple-quoted string that appears right after the definition of a function, class, or method is its docstring.

def func(a):
    """Accepts a variable and prints it"""
    print(a)

func("Hello World")

We can check the docstring of a function, class, or method using the __doc__ attribute.

def func(a):
    """Accepts a variable and prints it"""
    print(a)

print(func.__doc__)

Accepts a variable and prints it
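The Debugging use mentioned above is easiest to see with a concrete example. The snippet below is an illustration added here, not part of the original tutorial: a suspect line is commented out so the rest of the program can be tested in isolation.

def report(values):
    total = sum(values)
    # Commented out while debugging a ZeroDivisionError seen with empty lists;
    # the interpreter skips this line entirely.
    # print("average:", total / len(values))
    print("total:", total)

report([10, 20, 30])   # total: 60
report([])             # total: 0 (no crash, since the risky line is commented out)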
https://www.tutorialcup.com/python/comments-in-python.htm
CC-MAIN-2021-31
refinedweb
507
65.12
Cheat Sheet VSTO For Dummies Cheat Sheet Visual Studio Tools for Office (VSTO) extends some of the existing objects in the Office object model. You can use the classes in VSTO namespaces for the Word and Excel application to add or extend the functionality. Excel Namespace Classes for Extending the Office Object Model in VSTO. Word Namespace Classes for Extending the Office Object Model with VSTO The Microsoft.Office.Tools.Word namespace contains classes that represent objects in the Word object model. Here is a list of the class names, along with their descriptions, that you can use in VSTO to extend the Office Object Model: Document: Document class represents the document in Word 2010. Bookmark: Bookmark control represents the bookmark class from the Word object model. The bookmark control has a unique name, exposes events, and can be bound to data as well. Windows Forms Controls in VSTO VSTO contains two namespaces that hold classes that contain Windows Forms controls, which you can use on the Word documents or Excel worksheets. All controls in the Microsoft.Office.Tools.Word.Controls and Microsoft.Office.Tools.Excel.Controls namespaces are derived from the Windows Forms base classes and function in the same way as regular Windows Forms controls. You can find the following controls in the Microsoft.Office.Tools.Word.Controls namespace:
https://www.dummies.com/programming/visual-basic/vsto-for-dummies-cheat-sheet/
CC-MAIN-2019-22
refinedweb
221
55.44
Already read it all? use: Update History You'll need a background of computers, Networks, Socket Programming, VC++(MFC) before reading this article. The problem is to find a way to turn on other machines in a local area network, from our machine which might or might not be the Server. The Solution is known as Wake On LAN. WakeOnLan (or for short just WOL) is a mechanism with which a network Interface Card (NIC) could turn a machine on by receiving a special packet through the LAN. While your computer is Turned Off, The Network Interface Card remains on and looks forward to hearing a message! More accurately, a packet does, which is called a magic packet. Whenever the card receives information, a magic packet tries to switch on the computer. This packet must contain a certain byte-sequence, but can be encapsulated in any kind of packet (IPX, IP, anything). So the mechanism relies on the Hardware's ability, and that's why we need some ingredients! Both motherboard and NIC must support WOL. If you have a built-in NIC with WOL support, it's almost done! But many PCI NICs come with a connector and a wire, which has to be connected to the motherboard's WOL connector. After these, your power supply and OS must support WOL. And finally, you should enable Wake-On-LAN in your systems BIOS (or whatever called). That's it. I'm not sure, but it seems that the name comes from AMD. I did not do a search but José Pedro Oliveira (second link above) says that there is more documentation about the protocol in their web site. As I mentioned above, you need to send a special packet in your LAN network, so that the remote computer will receive it and wake up. This so-called Magic packet consists of the following parts: (This part added on an update on: 2005/09/03. Note that 'PowerOn' sample has been updated in all downloads either. [as well as executables!]) Some clients require a password in the packet to be turned on, otherwise they simply won't! This password is also known as SecureOn, and will be attached at the end of the packet. In this case, a packet will look something like this: So we have six cells, each capable of saving an integer number between 0 and 255(Just like MAC bytes). MAC stands for Media Access Control, and is a unique 48-bit (6 Bytes) hexadecimal number assigned to a NIC when it is manufactured. And it is unique; you'll not find two NICs with the same MAC Address in the world. The first three octets (24 bits) are known as the Organizationally Unique Identifier (OUI) and identifies its manufacturer. It means the MAC address of all cards of a company that say X start with a particular fixed number at the beginning. I'm not sure if this address is just assigned to Ethernet Cards or any other types of network Interface or equipment. This address is typically written as six colon/dash-separated hexadecimal numbers. In Microsoft Windows (98 SE and above, perhaps winipcfg.exe ) there's a tool called ipconfig.exe which could be used to obtain a MAC address. Ipconfig refers to this address as Physical Address. In order to see the MAC address of your NIC, just type this in the command line: ipconfig /all. There are several alternative ways of finding a MAC address, like looking for an ARP table (if any exists! or it has any entries), ifconfig (just in Linux), netstat, GetMAC (XP), all of which are enough to be a topic for a new article! So let's come back to our subject! 
For example, and according to the above figure, you should send a packet like : ff-ff-ff-ff-ff-ff 02-00-4c-4f-4f-50 02-00-4c-4f-4f-50 ... 02-00-4c-4f-4f-50 to the computer with the above MAC address to turn it on. But the question is: "How to obtain this address in your app". To be honest, before Googling the web I tried to do a disassembly on ipconfig.exe and I found GetAdaptersAddresses and some other functions between the functions it uses which comes from iphlpapi.dll. I guess this is the key to our problem! Then I searched the web and found an article on CodeGuru. You could find a link to this article at the top of this page. I just selected his third way which is: GetAdaptersInfo but since I was unable to compile it (it needed some header files and a lib (iphlpapi.lib) which could only be found in MS VC++ .NET), I changed it in order to be able to compile it with my compiler (VC++ 6.0). GetAdaptersAddresses GetAdaptersInfo iphlpapi.lib // Allocate information for NIC IP_ADAPTER_INFO AdapterInfo; // Save memory size of buffer DWORD dwBufLen = sizeof(AdapterInfo); // Call GetAdapterInfo DWORD dwStatus = GetAdaptersInfo(&AdapterInfo,&dwBufLen); // Verify return value is valid ASSERT(dwStatus == ERROR_SUCCESS); // Contains pointer to current adapter info PIP_ADAPTER_INFO pAdapterInfo = (&AdapterInfo); pAdapterInfo->Address;//Is the MAC Address of our NIC //for a more acurate implementation and my modifications, //Please see the demo project. We have the address, but still have the problem of sending data to a switched off computer! The solution is to broadcast a UDP packet. Did this already? Add a new feature to your local network project: Turn On All Network Machines. Did not do this already? Take a look at the demo project -PowerOn. After publishing this article on 2005/08/29, I found out that most readers wanted two major features: Thanks to Mr. Tupack Mansur, and other readers who created this temptation in my heart! (See discussion and comments below) Based on the above demands, I started another project: Remote MAC finder (third project to download), which is designed to find the MAC of a computer which we have it's IP or Host Name. Why I didn't upgrade old PowerOn project? Just to keep it simple. I could do something with the second request if I was IEEE! The problem is that to send a packet over the internet, routers need an IP address, but a turned off machine has not need one, so there is not any way to address a remote machine. As far as I know, routers will remove broadcast packets from the Internet, so don't think about it. One way would be to send a WOL packet to your LAN router which might or might not support this. An internally broadcast address like 192.168.1.255 must forwarded. Two other methods might be using telnet (wol up) directly or indirectly by 'Remote Desktop' to an always on machine and turning on others from there(Thanks to an anonymous reader who described his solution.). Remote MAC Finder works based on ARP. If you already know ARP protocol, you can ignore this next part, but since I'm still a beginner in Networks, Programming ,and even worse, self-educated (my university did not teach me these things), there might be mistakes and it will be greatly appreciated if you help to correct me. Thank you. ARP(RFC 826) stands for Address Resolution Protocol. Although all host machines and network tools use unique IP addresses, but any IP packet will cross the first layer of TCP/IP model before going to channel. 
This first layer is called the Network Interface Layer. The Network Interface layer will work just with a physical address known as MAC. Any IP packet containing an IP address will be placed in a Data Field(Payload) of the first layer frames and a header containing a MAC address of the destination will be added later. In other words, each machine is a network which has a packet to send to another, and should know both destination's MAC and IP addresses. Network interfaces will also recognize packets on the net, containing their MAC address and will take them for further processing. But what if a computer in a network doesn't have its peer MAC address? This is what ARP was created for. The duty of ARP is to broadcast a packet over a network. This packet actually asks: "Any one whose IP address is (say) '192.168.1.5', What is it's MAC address?". Broadcasted packets will be received by all Network machines, and as soon as a machine sees its own IP in the packet will send a reply to the requester and places its MAC address in the reply packet. In order to increase ARP speed, each time a machine finds a MAC address corresponding to an IP, the protocol will save these numbers in a table in main memory. The table is called ARP cache. So one way of finding the MAC of a remote computer is by enumerating IP's from this table and when a match is found, taking it's MAC. But the problem is that the life of this table (ARP Cache) is too small (we should exclude routers, I guess). (My test on Windows XP SP1 was about 2 Minutes!) Some commands will update entries of this table. I found that the function gethostbyaddr does that. So I first call this function both to retrieve HostName, and IP of remote machine, and update the ARP table. Then I use GetIpNetTable to enumerate IPs and find an appropriate MAC. gethostbyaddr GetIpNetTable But this process is not always reliable! The reason is that with this mechanism, an attack could be shaped :ARP Spoofing! I don't know much a lot about how it works, but it might be a good idea for an administrator to think about it! That's why I say it's not always reliable. (added upon update: I did not have the chance to test the new uploaded 'PowerOn' to see how it works with a password. It would be appreciated if anyone does can tell if his/her test was successful. Thank you. I did not provide any C++ classes, because I didn't want to hide such easy stuff from readers and write a class for one function! There are two demos included in this article. In the PowerOn demo I tried to turn on a computer in our LAN network, using the mechanism I described above. The job is completed in two functions OnPowerOn() and HexStrToInt(CString hexStr). The second function's job is to convert a hex string say "E9" to an integer value, 233 in our example. It uses the strtol function to do the duty. OnPowerOn() HexStrToInt(CString hexStr) strtol The OnPowerOn does the following: OnPowerOn //Socket to send magic packet CAsyncSocket s; //Buffer for packet BYTE magicP[102]; ... //Fill in magic packet with 102 Bytes of data //Header //fill 6 Bytes with 0xFF for (int i=0;i<6;i++) magicP[i] = 0xff; //First 6 bytes (these must be repeated!!) //fill bytes 6-12 for (i=0;i<6;i++) { //Get 2 charachters from mac address and convert it to int to fill //magic packet magicP[i+6] = HexStrToInt(macAddr.Mid(i*2,2)); } //fill remaining 90 bytes (15 time repeat) for (i=0;i<15;i++) memcpy(&magicP[(i+2)*6],&magicP[6],6); ... 
//Broadcast Magic Packet, Hope appropriate NIC will take it ;)
s.SendTo(magicP,102,atol(m_port));

At first I thought I couldn't add the following project, but thought more and decided to do it, because you might want to find a MAC automatically, and probably give the found address to your peer (TCP socket) application in a LAN and use it later! Whatever you want to do, this might be helpful. I used Mr. Khalid Shaikh's code, but his code was not compileable in my VC++ 6.0, so I found the appropriate DLL where the function is located, used MSDN to reproduce the needed structures (hope this is not a copyright problem), and then used a pointer to the function and LoadLibrary to get the function. It seems that the function is added as a resource to the DLL (it's not documented), so we can't use GetModuleHandle. I tested the result on MS Windows 98, Me, 2K and XP (SP1). It worked fine! All the processing is done in a function named GetMacAddress(). This function is called in OnInitDialog of MAC Finder.

There was a very interesting point I encountered while working with MACs! Look at the above MAC address (in the figure). It's the MAC of a Microsoft loopback adapter. Since I do not have 2 (or more) computers, I tested my network applications (both PowerOn and MAC Finder) using this NIC. Look carefully at its MAC address, 02-00-4c-4f-4f-50, and try converting the numbers to characters using the ASCII standard. What do you see?

02 = (a non-printable control character), 00 = blank, 4C = L, 4F = O, 4F = O, 50 = P => the Microsoft Loopback Adapter's MAC address spells LOOP.

Who decided on these numbers, that's the mystery!

First published 2005/08/29
Update #1 2005/09/02
Update #2 2005/09/20

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
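One loose end from the 2005/09/03 update: the PowerOn sample is said to support a SecureOn password, but that part of the code is not reproduced here, and the author notes it is untested. Purely as an illustration of how the password bytes could be appended (the variable name password and its 12-character hex format are assumptions, not part of the original source), the end of OnPowerOn() might change along these lines:

//SecureOn: the packet grows from 102 to 108 bytes
BYTE magicP[108];

// ... fill the first 102 bytes exactly as shown above ...

//Append the 6 password bytes, given as a 12-character hex string
//such as "1A2B3C4D5E6F" in a CString named 'password'
for (int i = 0; i < 6; i++)
    magicP[102 + i] = HexStrToInt(password.Mid(i * 2, 2));

//Broadcast 108 bytes instead of 102
s.SendTo(magicP, 108, atol(m_port));

If the target NIC has no SecureOn password configured, sending the plain 102-byte packet as in the original code remains the right thing to do.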
http://www.codeproject.com/Articles/11469/Wake-On-LAN-WOL?fid=211282&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&fr=11
CC-MAIN-2016-07
refinedweb
2,324
71.75
may or may not be writeups above which attempt to discuss this data structure. Obviously, I wasn't satisfied with them, and decided to present my own.) NIST1 gives the following defintion for a trie: A tree for storing strings in which there is one node for every common prefix. The strings are stored in extra leaf nodes. Very succinctly put, but perhaps this leaves (pardon the pun) you a little cold. It's also more limiting than it needs to be. A trie (also known as a "radix tree") is a type of search tree; that is, it maps values of a "key" type (in this case, strings) to values of another type, and it has a tree structure which increases the speed of searches at the cost of increased memory usage. Other types of search trees include btrees (and related structures like ISAM trees) and quadtrees, and the red-black tree structure which is the most common implementation of the C++ standard library collection std::map. Tries are different from other search trees, in that the "key" type must be a string type of some sort. We mean a "string type" in a much more general sense than "character string" types such as char[] and std::string (although those are the most common key types used). In order to be considered a "string" type, values are always ordered sequence of some smaller "atom" type. In addition, the sequences are usually allowed to reach an arbitrary length. Thus, any sequence type (such as C++'s std::vector, std::list or std::deque) may be considered a "string". Other search trees can use strings for their key types; however, tries exploit the fact of having strings for keys to radically (cough) speed up searches to O(1) complexity, more specifically O (keylength). They do this by unstringing the elements of the key sequence and storing later elements further down in the trie. Each node of a trie represents a particular key string; all of the node's child nodes represent longer key strings, all of which contain with the parent node's key string as an initial section (or "prefix"). For example, consider the words Aardvark Armadillo Armero Arquebus Astronaut Bagel Bakelite Baker Bastard Bastion Camembert One logical "trie" structure for these words would look like this: * | ABC ||`--------------------------------------------------. |`----------------------------------. | | | | ARS A (AMEMBERT) ||`----------------------. | |`--------. | | | | | | (RDVARK) MQ (TRONAUT) GKS |`--------. ||`-----------. | | |`---. | | | | | | AE (UEBUS) (EL) (ELITE) T |`------. | | | | (DILLO) (RO) AI |`----. | | (RD) (ON) This particular trie would store the unique terminal sections of strings in special leaf nodes (represented by strings in parentheses), but that's just a detail I added to make the diagram more readable. The important thing to notice is that you read a trie's key strings "down" the tree, as opposed to a btree, which would store entire key values in each node. So, what to we give up, in order to achieve faster searches? The tree's branching factor is much higher, resulting in a higher memory or disk space cost. In a simple trie implementation, each node would have as many branches as there are possible values of the atom type! In addition, tries are much deeper than btrees, and can be highly unbalanced, resulting in inconsistent search times. This means we should limit the use of tries to applications where This pretty much restricts the use of tries to dictionaries and word lists. 
Although the method of storing keys is the trie's key (ahem) feature, a trie is an associative array, and we should spend a little time discussing the value-type side of the array. Since each node represents one possible string in the key space, each node can associate with at most one possible value. This is probably where NIST's definition is too restrictive: In a tree that stores only strings, the "value type" is boolean (in the same manner that a C++ std::set<T> is like a std::map<T, bool>). But we can associate each string with any type we want. A VERY simplistic trie class implementation follows, with most of the details and algorithms left to your imagination:

//
// Specialize several versions of TrieNode
// to take advantage of particular types'
// values meaning "no, nothing here"
//
template<class V> struct TrieNode;
template<> struct TrieNode<bool>;
template<> struct TrieNode<int>;
// similarly, for long int, etc.

template<class V>
struct TrieNode
{
    V *terminal;
    TrieNode<V> *branch[256];

    bool isvalue () { return (terminal != 0); }
    V const &value() const { return *terminal; }
    V &value ()
    {
        if (!terminal) terminal = new V;
        return *terminal;
    }
};

template<>
struct TrieNode<bool>
{
    bool terminal;
    TrieNode<bool> *branch[256];

    bool isvalue () { return terminal; }
    bool value() const { return terminal; }
    bool &value() { return terminal; }
};

template<>
struct TrieNode<int>
{
    int terminal;
    TrieNode<int> *branch[256];

    bool isvalue () { return (terminal!=0); }
    int value() const { return terminal; }
    int &value() { return terminal; }
};

//
// The actual trie
// Notice my pathetic attempt to make it look
// like a standard C++ Container
//
template<class V>
class Trie
{
    typedef TrieNode<V> *iterator;
    typedef TrieNode<V> const *const_iterator;
    typedef char const *key_type;
    typedef V value_type;

    TrieNode<V> *root;

    const_iterator seek (key_type key) const;

    //
    // Non-const seek must "create nodes as it goes"
    //
    iterator seek (key_type key);

    V const &operator[](key_type key) const
    {
        static V dummy;
        const_iterator it = seek (key);
        if (!it) return dummy;
        return it->value();
    }

    V &operator[](key_type key)
    {
        return seek (key) -> value();
    }
};

1 National Institute of Standards and Technology: Dictionary of Algorithms and Data Structures
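To make the "left to your imagination" part slightly more concrete, here is one way the const seek() could be written. This is an illustrative addition, not part of the original writeup, and it assumes that root is null for an empty trie and that unused branch pointers are null:

// Walk one node per character of the key; a missing branch means
// the key (and any extension of it) is not stored in the trie.
template<class V>
TrieNode<V> const *Trie<V>::seek (key_type key) const
{
    TrieNode<V> const *node = root;
    while (node && *key)
        node = node->branch[(unsigned char)*key++];
    return node;
}

The non-const overload would follow the same walk but allocate a fresh TrieNode wherever a branch pointer is null, which is what "create nodes as it goes" refers to.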
https://everything2.com/title/Trie
CC-MAIN-2018-34
refinedweb
910
56.79
hypex alternatives and similar packages Based on the "Algorithms and Data structures" category. Alternatively, view hypex alternatives based on common mentions on social networks and blogs. flow9.6 3.9 hypex VS flowComputational parallel flows on top of GenStage witchcraft9.4 3.8 hypex VS witchcraftMonads and other dark magic for Elixir fuse8.9 7.7 hypex VS fuseA Circuit Breaker for Erlang matrex8.8 0.0 hypex VS matrexA blazing fast matrix library for Elixir/Erlang with C implementation using CBLAS. simple_bayes8.6 0.0 hypex VS simple_bayesA Naive Bayes machine learning implementation in Elixir. fsm8.3 0.0 hypex VS fsmFinite State Machine data structure monadex8.1 0.0 hypex VS monadexUpgrade your pipelines with monads. exconstructor8.1 4.3 hypex VS exconstructorAn Elixir library for generating struct constructors that handle external data with ease. loom7.8 0.0 hypex VS loomA CRDT library with δ-CRDT support. datastructures7.8 0.0 hypex VS datastructuresDatastructures for Elixir. erlang-algorithms7.7 0.0 hypex VS erlang-algorithmsImplementations of popular data structures and algorithms monad7.5 0.0 hypex VS monadMonads and do-syntax for Elixir trie7.4 0.3 hypex VS trieErlang Trie Implementation remodel7.0 0.0 hypex VS remodel:necktie: An Elixir presenter package used to transform map structures. "ActiveModel::Serializer for Elixir" lz46.9 0.0 L1 hypex VS lz4LZ4 bindings for Erlang parallel_stream6.5 0.0 hypex VS parallel_streamA parallelized stream implementation for Elixir merkle_tree6.4 0.0 hypex VS merkle_tree:evergreen_tree: Merkle Tree implementation in pure Elixir Exads6.3 0.0 hypex VS ExadsAlgorithms and Data Structures collection in Elixir bloomex6.3 1.7 hypex VS bloomex:hibiscus: A pure Elixir implementation of Scalable Bloom Filters sfmt6.3 5.2 hypex VS sfmtsfmt-erlang: SIMD-oriented Fast Mersenne Twister (SFMT) for Erlang graphmath6.1 5.7 hypex VS graphmathAn Elixir library for performing 2D and 3D mathematics. MapDiff6.1 0.8 hypex VS MapDiffCalculates the difference between two (nested) maps, and returns a map representing the patch of changes. exmatrix5.9 0.0 hypex VS exmatrixElixir library implementing a parallel matrix multiplication algorithm and other utilities for working with matrices. Used for benchmarking computationally intensive concurrent code. the_fuzz5.9 0.0 hype hypex VS DeepMergeDeep (recursive) merge for maps, keywords and others in Elixir dataframe5.6 0.0 hypex VS dataframePackage providing functionality similar to Python's Pandas or R's data.frame() ecto_materialized_path5.4 0.0 hypex VS ecto_materialized_pathTree structure & hierarchy for ecto models parex5.3 0.0 hypex VS parexAn elixir module for parallel execution of functions/processes blocking_queue5.2 0.0 hypex VS blocking_queueA blocking queue written in Elixir. red_black_tree5.1 0.0 hypex VS red_black_treeRed-black tree implementation for Elixir. hash_ring_ex4.9 0.0 hypex VS hash_ring_exA consistent hash ring implemention for Elixir sleeplocks4.9 1.2 hypex VS sleeplocksBEAM friendly spinlocks for Elixir/Erlang simhash4.8 0.0 hypex VS simhashElixir implementation of Simhash array4.4 0.0 hypex VS arrayAn Elixir wrapper library for Erlang's array cuid4.4 0.0 hypex VS cuidCollision-resistant ids, in Elixir murmur4.4 3.2 hypex VS murmur:speech_balloon: An implementation of the non-cryptographic hash Murmur3 gen_fsm4.3 0.0 hypex VS gen_fsmElixir wrapper around OTP's gen_fsm memoize4.2 0.0 hypex VS memoizeDefMemo - Ryuk's little puppy! Bring apples. 
bitmap4.2 0.0 hypex VS bitmapBitmap implementation in Elixir using binaries and integers. Fast space efficient data structure for lookups ratio4.2 0.8 hypex VS ratioRational number library for Elixir. cuckoo4.0 1.7 hypex VS cuckoo:bird: Cuckoo Filters in Elixir sorted_set3.9 0.0 hypex VS sorted_setSorted Set library for Elixir aruspex3.9 0.0 hypex VS aruspexA configurable constraint solver tinymt3.7 0.0 hypex VS tinymtTiny Mersenne Twister (TinyMT) for Erlang. paratize3.6 0.0 hypex VS paratizeElixir library providing some handy parallel processing facilities that supports configuring number of workers and timeout. eastar3.6 0.0 hypex VS eastarA* graph pathfinding in pure Elixir combination3.4 0.0 hypex VS combinationA simple combinatorics library providing combination and permutation. luhn3.4 0.7 hypex VS luhnLuhn algorithm in Elixir exor_filter3.4 2.5 hypex VS exor_filterErlang nif for xor_filter. 'Faster and Smaller Than Bloom and Cuckoo Filters'. Mappable3.3 0.0 hypex VS MappableSimple module that provides unified, simple interface for converting between different dictionary-like data types hypex or a related project? Popular Comparisons README Hypex Hypex is a fast HyperLogLog implementation in Elixir which provides an easy way to count unique values with a small memory footprint. This library is based on the paper documenting the algorithm written by Philippe Flajolet et al. Installation Hypex is available on Hex. You can install the package via: - Add hypex to your list of dependencies in mix.exs: def deps do [{ :hypex, "~> 1.1" }] end - Ensure hypex is started before your application: def application do [applications: [:hypex]] end Usage Hypex is extremely straightforward to use, you simply create a new Hypex instance and start adding values to it: iex> hypex = Hypex.new(4) {Hypex.Array, 4, {:array, 16, 0, 0, 100}} iex> hypex = Hypex.update(hypex, "my term") {Hypex.Array, 4, {:array, 16, 0, 0, {10, {0, 2, 0, 0, 0, 0, 0, 0, 0, 0}, 10, 10, 10, 10, 10, 10, 10, 10, 10}}} iex> hypex |> Hypex.cardinality |> round 1 The 4 being passed to Hypex.new/1 is the width which determines the underlying memory structure of a Hypex instance. This value can be within the range 4 <= width <= 16, per the HyperLogLog algorithm. If you don't provide a width, it defaults to 16. Be aware that you should typically scale this number higher based upon the more unique values you expect to see. For any other examples of how to use Hypex, please read the documentation. Memory Optimization As of v1.1.0, the default implementation has moved from a Bitstring to an Erlang Array. This is mainly due to Arrays performing faster on all operations when compared with Bitstrings. However in the case that you're operating in a low-memory environment (or simply want predictable memory usage), you might still wish to use the Bitstring implementation. You can do this by simply using Hypex.new(4, Bitstring) when creating a Hypex. A rough memory estimate (in bytes) for a Bitstring Hypex can be calculated using the formula ((2 ^ width) * width) / 8 - although this will only include the memory of the registers and not the rest of the tuple structure (which should be minimal). This means that using the highest width available of 16, your memory usage will still only be 131,072 bytes. At this point I don't know of a good way to measure the size of the Array implementation, but a rough estimate would suggest that it's probably within the range of 6-8 times more memory (if anyone can help measure, I'd appreciate it). 
Still, this amount of memory shouldn't pose an issue for most systems, and the throughput likely matters more to most users. Rough Benchmarks Below are some rough benchmarks for Hypex instances with the different underlying structures. Note that the update/2 tests are inserting a unique value - in the case a duplicate value is inserted, the operation is typically constant across widths at under 0.5 µs/op. These tests use a width of 4, so it should be noted that larger widths will have slower performance. However, these benchmarks are for reference only and you should gauge which widths work best for the data you're operating with, rather than the performance shown below. ## Array Hypex Array Hypex.new/1 0.53 µs/op Array Hypex.update/2 2.13 µs/op Array Hypex.cardinality/1 6.87 µs/op Array Hypex.merge/2 16.61 µs/op ## Bitstring Hypex Bitstring Hypex.new/1 0.46 µs/op Bitstring Hypex.update/2 2.13 µs/op Bitstring Hypex.cardinality/1 6.70 µs/op Bitstring Hypex.merge/2 8.69 µs/op Contributions If you feel something can be improved, or have any questions about certain behaviours or pieces of implementation, please feel free to file an issue. Proposed changes should be taken to issues before any PRs to avoid wasting time on code which might not be merged upstream. If you do make changes to the codebase, please make sure you test your changes thoroughly, and include any unit tests alongside new or changed behaviours. Hypex currently uses the excellent excoveralls to track code coverage. $ mix test $ mix coveralls $ mix coveralls.html && open cover/excoveralls.html
https://elixir.libhunt.com/hypex-alternatives
CC-MAIN-2021-43
refinedweb
1,431
50.12
There are many situations when writing React where you’ll want to pass a function to a prop. Usually it’s to pass a callback down to a child component so that the child can notify the parent of some event. It’s important to keep in mind the binding of the function – what its this object will point to when it’s called. There are a few ways to make sure the binding is correct, some better than others. This post will go over the options. Way #1: Autobinding (good, only with React.createClass) If you’re using React.createClass, the member functions in your component are automatically bound to the component instance. You can freely pass them around without calling bind, and you’re always passing the same exact same function. var Button = React.createClass({ handleClick: function() { console.log('clickity'); }, render: function() { return ( <button onClick={this.handleClick}/> ); } }); Way #2: Calling .bind Within render (bad, ES6) When using ES6 classes, React does not automatically bind the member functions inside the component. Binding at the last second like this is one way to make it work correctly, but it will hurt performance slightly because a new function is being created every time it re-renders (which could be pretty often). The trouble isn’t really that creating a function is an expensive operation. It’s that by creating a new function every time, the component you’re passing it to will see a new value for that prop every time. When it comes time to tune performance by implementing shouldComponentUpdate, that constantly-changing prop will make it look like something changed when really it’s the same as before. class Button extends React.Component { handleClick() { console.log('clickity'); } render() { return ( <button onClick={this.handleClick.bind(this)}/> ); } } Here’s another variant that is doing the same thing, creating a function every time render is called: class Button extends React.Component { handleClick() { console.log('clickity'); } render() { var handleClick = this.handleClick.bind(this); return ( <button onClick={handleClick}/> ); } } Way #3: Arrow Function in render (bad, ES6) Similar to the above example, except this one uses an arrow function instead of calling bind. It looks nicer, but it still creates a function every time render is called! No good. class Button extends React.Component { handleClick() { console.log('clickity'); } render() { return ( <button onClick={() => this.handleClick()}/> ); } } Way #4: Property Initializers (good, ESnext) This method works by setting handleClick to an arrow function one time when the component is created. Inside render and in other functions, this.handleClick can be passed along without fear because the arrow function preserves the this binding. This one is labelled “ESnext” because it’s not technically part of ES6, ES7, or ES8. ES2016 and ES2017 have been finalized already, so if and when this makes it into the spec, it’ll likely be ES2018 or beyond. Even though this is supported by Babel, there’s a (small) risk that this feature could be taken out of the spec and require some refactoring, but a lot of people are using it so it seems likely that it’ll stay put. class Button extends React.Component { // Use an arrow function here: handleClick = () => { console.log('clickity'); } render() { return ( <button onClick={this.handleClick}/> ); } } Way #5: Binding in the Constructor (good, ES6) You can set up the bindings once in the constructor, and then use them forevermore! Just don’t forget to call super. 
class Button extends React.Component { constructor(props) { super(props); this.handleClick = this.handleClick.bind(this); } handleClick() { console.log('clickity'); } render() { return ( <button onClick={this.handleClick}/> ); } } Way #6: Using Decorators (good, ES8+) There’s a nice library called autobind-decorator which makes it possible to do this: import autobind from 'autobind-decorator'; class Button extends React.Component { @autobind handleClick() { console.log('clickity'); } render() { return ( <button onClick={this.handleClick}/> ); } } The @autobind decorator binds the handleClick method and you’re all set. You can even use it on the entire class, if you’re lazy: import autobind from 'autobind-decorator'; @autobind class Button extends React.Component { handleClick() { console.log('clickity'); } handleOtherStuff() { console.log('also bound'); } render() { return ( <button onClick={this.handleClick}/> ); } } Once again, ES2016/ES7 doesn’t include this feature so you’re accepting a bit of risk by using it in your code, even though Babel does support it. Bonus: Passing Arguments Without Bind As Marc mentioned in the comments, it’s pretty common to use .bind to preset the arguments for a function call, especially in lists, like this: var List = React.createClass({ render() { let { handleClick } = this.props; return ( <ul> {this.props.items.map(item => <li key={item.id} onClick={handleClick.bind(this, item.id)}> {item.name} </li> )} </ul> ); } }); As explained here, one way to fix this and avoid the bind is to extract the <li> into its own component that’ll call the click handler you pass in, with its id: var List = React.createClass({ render() { let { handleClick } = this.props; // handleClick still expects an id, but we don't need to worry // about that here. Just pass the function itself and ListItem // will call it with the id. return ( <ul> {this.props.items.map(item => <ListItem key={item.id} item={item} onItemClick={handleClick} /> )} </ul> ); } }); var ListItem = React.createClass({ render() { // Don't need a bind here, since it's just calling // our own click handler return ( <li onClick={this.handleClick}> {this.props.item.name} </li> ); }, handleClick() { // Our click handler knows the item's id, so it // can just pass it along. this.props.onItemClick(this.props.item.id); } }); A Note on Performance There’s a tradeoff with most of these methods: more (and more complex) code in exchange for some theoretical performance benefit. “Premature optimization is the root of all evil,” said Donald Knuth. So… before you split up or complicate your code to save a few cycles, actually measure the impact: pop open the dev tools and profile the code and use the React performance tools. Wrap Up That about covers the ways to bind the functions you’re passing to props. Know of any other ways? Got a favorite one? Let us know in the comments. For a step-by-step approach to learning React,
https://daveceddia.com/avoid-bind-when-passing-props/
CC-MAIN-2017-51
refinedweb
1,023
59.3
02 October 2011 15:00 [Source: ICIS news] By John Richardson PERTH, Australia (ICIS)--The statistics speak for themselves. For example: [1–7 October] will bring some stability to the market and give people time to reflect,” said a Shanghai-based marketing manager with a major Asian polyolefins producer. Polypropylene (PP) prices fell by a further $40–50/tonne (€30–38/tonne) in the space of one week because of the latest credit tightening initiative, he added. “We understand that lenders in Zhejiang province were told not to issue 60–90 day letters of credit to importers from 20–30 September. added. The negative mood is such that no great comfort is being taken from large increases in polyolefin imports in August. High density polyethylene (HDPE) imports, for example, rose by 27% month on month and 10% year on year to 320,517 tonnes, according to China Customs. Low density polyethylene (LDPE) imports were up by 43% month on month and a huge 113% year on year, at 169,520 tonnes. Explanations for the strong numbers include international traders liquidating inventory in bonded warehouses in ?xml:namespace> hard-pressed small- and medium-sized enterprises (SMEs). The SMEs have suffered the most from the restrictions on credit that began late last year. “What we need is a solid sign of improvement – an indication that the end-users are coming back to the markets in a big way,” said a Singapore-based polyolefins trader. “If the biaxially oriented PP (BOPP) film buyers came back in a big way, for example, that would be very positive news. A typical BOPP converter buys 2,000–3,000 tonnes.” One shred of current comfort is reports of operating rate cuts at several South Korean crackers. European cracker-to-polyethylene (PE) producers have also lowered rates, and further reductions are expected, according to It is, however, going to take a lot more than a few operating rate cuts to rescue what looks likely to be a very difficult fourth quarter in China. ($1 = €0.75)
http://www.icis.com/Articles/2011/10/02/9496830/insight-polymer-players-hope-for-china-holiday-stability.html
CC-MAIN-2015-22
refinedweb
339
50.06
In the previous lessons in this class, you learned how to create a sync adapter component that encapsulates data transfer code, and how to add the additional components that allow you to plug the sync adapter into the system. You now have everything you need to install an app that includes a sync adapter, but none of the code you've seen actually runs the sync adapter. You should try to run your sync adapter based on a schedule or as the indirect result of some event. For example, you may want your sync adapter to run on a regular schedule, either after a certain period of time or at a particular time of the day. You may also want to run your sync adapter when there are changes to data stored on the device. You should avoid running your sync adapter as the direct result of a user action, because by doing this you don't get the full benefit of the sync adapter framework's scheduling ability. For example, you should avoid providing a refresh button in your user interface. You have the following options for running your sync adapter: - When server data changes - Run the sync adapter in response to a message from a server, indicating that server-based data has changed. This option allows you to refresh data from the server to the device without degrading performance or wasting battery life by polling the server. - When device data changes - Run a sync adapter when data changes on the device. This option allows you to send modified data from the device to a server, and is especially useful if you need to ensure that the server always has the latest device data. This option is straightforward to implement if you actually store data in your content provider. If you're using a stub content provider, detecting data changes may be more difficult. - At regular intervals - Run a sync adapter after the expiration of an interval you choose, or run it at a certain time every day. - On demand - Run the sync adapter in response to a user action. However, to provide the best user experience you should rely primarily on one of the more automated options. By using automated options, you conserve battery and network resources. The rest of this lesson describes each of the options in more detail. Run the sync adapter when server data changes If your app transfers data from a server and the server data changes frequently, you can use a sync adapter to do downloads in response to data changes. To run the sync adapter, have the server send a special message to a BroadcastReceiver in your app. In response to this message, call ContentResolver.requestSync() to signal the sync adapter framework to run your sync adapter. Google Cloud Messaging (GCM) provides both the server and device components you need to make this messaging system work. Using GCM to trigger transfers is more reliable and more efficient than polling servers for status. While polling requires a Service that is always active, GCM uses a BroadcastReceiver that's activated when a message arrives. While polling at regular intervals uses battery power even if no updates are available, GCM only sends messages when needed. Note: If you use GCM to trigger your sync adapter via a broadcast to all devices where your app is installed, remember that they receive your message at roughly the same time. This situation can cause multiple instance of your sync adapter to run at the same time, causing server and network overload. 
To avoid this situation for a broadcast to all devices, you should consider deferring the start of the sync adapter for a period that's unique for each device. The following code snippet shows you how to run requestSync() in response to an incoming GCM message: public class GcmBroadcastReceiver extends BroadcastReceiver { ... // Constants // Content provider authority public static final String AUTHORITY = "com.example.android.datasync.provider" // Account type public static final String ACCOUNT_TYPE = "com.example.android.datasync"; // Account public static final String ACCOUNT = "default_account"; // Incoming Intent key for extended data public static final String KEY_SYNC_REQUEST = "com.example.android.datasync.KEY_SYNC_REQUEST"; ... @Override public void onReceive(Context context, Intent intent) { // Get a GCM object instance GoogleCloudMessaging gcm = GoogleCloudMessaging.getInstance(context); // Get the type of GCM message String messageType = gcm.getMessageType(intent); /* * Test the message type and examine the message contents. * Since GCM is a general-purpose messaging system, you * may receive normal messages that don't require a sync * adapter run. * The following code tests for a a boolean flag indicating * that the message is requesting a transfer from the device. */ if (GoogleCloudMessaging.MESSAGE_TYPE_MESSAGE.equals(messageType) && intent.getBooleanExtra(KEY_SYNC_REQUEST)) { /* * Signal the framework to run your sync adapter. Assume that * app initialization has already created the account. */ ContentResolver.requestSync(ACCOUNT, AUTHORITY, null); ... } ... } ... } Run the sync adapter when content provider data changes If your app collects data in a content provider, and you want to update the server whenever you update the provider, you can set up your app to run your sync adapter automatically. To do this, you register an observer for the content provider. When data in your content provider changes, the content provider framework calls the observer. In the observer, call requestSync() to tell the framework to run your sync adapter. Note: If you're using a stub content provider, you don't have any data in the content provider and onChange() is never called. In this case, you have to provide your own mechanism for detecting changes to device data. This mechanism is also responsible for calling requestSync() when the data changes. To create an observer for your content provider, extend the class ContentObserver and implement both forms of its onChange() method. In onChange(), call requestSync() to start the sync adapter. To register the observer, pass it as an argument in a call to registerContentObserver(). In this call, you also have to pass in a content URI for the data you want to watch. The content provider framework compares this watch URI to content URIs passed in as arguments to ContentResolver methods that modify your provider, such as ContentResolver.insert(). If there's a match, your implementation of ContentObserver.onChange() is called. The following code snippet shows you how to define a ContentObserver that calls requestSync() when a table changes: public class MainActivity extends FragmentActivity { ... 
// Constants // Content provider scheme public static final String SCHEME = "content://"; // Content provider authority public static final String AUTHORITY = "com.example.android.datasync.provider"; // Path for the content provider table public static final String TABLE_PATH = "data_table"; // Account public static final String ACCOUNT = "default_account"; // Global variables // A content URI for the content provider's data table Uri mUri; // A content resolver for accessing the provider ContentResolver mResolver; ... public class TableObserver extends ContentObserver { /* * Define a method that's called when data in the * observed content provider changes. * This method signature is provided for compatibility with * older platforms. */ @Override public void onChange(boolean selfChange) { /* * Invoke the method signature available as of * Android platform version 4.1, with a null URI. */ onChange(selfChange, null); } /* * Define a method that's called when data in the * observed content provider changes. */ @Override public void onChange(boolean selfChange, Uri changeUri) { /* * Ask the framework to run your sync adapter. * To maintain backward compatibility, assume that * changeUri is null. */ ContentResolver.requestSync(ACCOUNT, AUTHORITY, null); } ... } ... @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); ... // Get the content resolver object for your app mResolver = getContentResolver(); // Construct a URI that points to the content provider data table mUri = new Uri.Builder() .scheme(SCHEME) .authority(AUTHORITY) .path(TABLE_PATH) .build(); /* * Create a content observer object. * Its code does not mutate the provider, so set * selfChange to "false" */ TableObserver observer = new TableObserver(false); /* * Register the observer for the data table. The table's path * and any of its subpaths trigger the observer. */ mResolver.registerContentObserver(mUri, true, observer); ... } ... } Run the sync adapter periodically You can run your sync adapter periodically by setting a period of time to wait between runs, or by running it at certain times of the day, or both. Running your sync adapter periodically allows you to roughly match the update interval of your server. Similarly, you can upload data from the device when your server is relatively idle, by scheduling your sync adapter to run at night. Most users leave their powered on and plugged in at night, so this time is usually available. Moreover, the device is not running other tasks at the same time as your sync adapter. If you take this approach, however, you need to ensure that each device triggers a data transfer at a slightly different time. If all devices run your sync adapter at the same time, you are likely to overload your server and cell provider data networks. In general, periodic runs make sense if your users don't need instant updates, but expect to have regular updates. Periodic runs also make sense if you want to balance the availability of up-to-date data with the efficiency of smaller sync adapter runs that don't over-use device resources. To run your sync adapter at regular intervals, call addPeriodicSync(). This schedules your sync adapter to run after a certain amount of time has elapsed. Since the sync adapter framework has to account for other sync adapter executions and tries to maximize battery efficiency, the elapsed time may vary by a few seconds. Also, the framework won't run your sync adapter if the network is not available. 
Notice that addPeriodicSync() doesn't run the sync adapter at a particular time of day. To run your sync adapter at roughly the same time every day, use a repeating alarm as a trigger. Repeating alarms are described in more detail in the reference documentation for AlarmManager. If you use the method setInexactRepeating() to set time-of-day triggers that have some variation, you should still randomize the start time to ensure that sync adapter runs from different devices are staggered. The method addPeriodicSync() doesn't disable setSyncAutomatically(), so you may get multiple sync runs in a relatively short period of time. Also, only a few sync adapter control flags are allowed in a call to addPeriodicSync(); the flags that are not allowed are described in the referenced documentation for addPeriodicSync(). The following code snippet shows you how to schedule periodic sync adapter runs: public class MainActivity extends FragmentActivity { ... // Constants // Content provider authority public static final String AUTHORITY = "com.example.android.datasync.provider"; // Account public static final String ACCOUNT = "default_account"; // Sync interval constants public static final long SECONDS_PER_MINUTE = 60L; public static final long SYNC_INTERVAL_IN_MINUTES = 60L; public static final long SYNC_INTERVAL = SYNC_INTERVAL_IN_MINUTES * SECONDS_PER_MINUTE; // Global variables // A content resolver for accessing the provider ContentResolver mResolver; ... @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); ... // Get the content resolver for your app mResolver = getContentResolver(); /* * Turn on periodic syncing */ ContentResolver.addPeriodicSync( ACCOUNT, AUTHORITY, Bundle.EMPTY, SYNC_INTERVAL); ... } ... } Run the sync adapter on demand Running your sync adapter in response to a user request is the least preferable strategy for running a sync adapter. The framework is specifically designed to conserve battery power when it runs sync adapters according to a schedule. Options that run a sync in response to data changes use battery power effectively, since the power is used to provide new data. In comparison, allowing users to run a sync on demand means that the sync runs by itself, which is inefficient use of network and power resources. Also, providing sync on demand leads users to request a sync even if there's no evidence that the data has changed, and running a sync that doesn't refresh data is an ineffective use of battery power. In general, your app should either use other signals to trigger a sync or schedule them at regular intervals, without user input. However, if you still want to run the sync adapter on demand, set the sync adapter flags for a manual sync adapter run, then call ContentResolver.requestSync(). Run on demand transfers with the following flags: SYNC_EXTRAS_MANUAL - Forces a manual sync. The sync adapter framework ignores the existing settings, such as the flag set by setSyncAutomatically(). SYNC_EXTRAS_EXPEDITED - Forces the sync to start immediately. If you don't set this, the system may wait several seconds before running the sync request, because it tries to optimize battery use by scheduling many requests in a short period of time. The following code snippet shows you how to call requestSync() in response to a button click: public class MainActivity extends FragmentActivity { ... 
// Constants // Content provider authority public static final String AUTHORITY = "com.example.android.datasync.provider" // Account type public static final String ACCOUNT_TYPE = "com.example.android.datasync"; // Account public static final String ACCOUNT = "default_account"; // Instance fields Account mAccount; ... @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); ... /* * Create the dummy account. The code for CreateSyncAccount * is listed in the lesson Creating a Sync Adapter */ mAccount = CreateSyncAccount(this); ... } /** * Respond to a button click by calling requestSync(). This is an * asynchronous operation. * * This method is attached to the refresh button in the layout * XML file * * @param v The View associated with the method call, * in this case a Button */ public void onRefreshButtonClick(View v) { ... // Pass the settings flags by inserting them in a bundle Bundle settingsBundle = new Bundle(); settingsBundle.putBoolean( ContentResolver.SYNC_EXTRAS_MANUAL, true); settingsBundle.putBoolean( ContentResolver.SYNC_EXTRAS_EXPEDITED, true); /* * Request the sync for the default account, authority, and * manual sync settings */ ContentResolver.requestSync(mAccount, AUTHORITY, settingsBundle); }
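As a footnote to the earlier advice about randomizing the start time of repeating alarms, here is an illustrative sketch (not taken from the platform documentation) of giving each device its own start time for a daily sync trigger. SyncAlarmReceiver is an assumed BroadcastReceiver whose only job is to call ContentResolver.requestSync(), and the usual android.app and java.util imports are assumed:

// Illustrative only: stagger the daily sync trigger per device so that all
// installations do not contact the server at the same moment.
AlarmManager alarmManager =
        (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
PendingIntent alarmIntent = PendingIntent.getBroadcast(
        context, 0, new Intent(context, SyncAlarmReceiver.class), 0);

// Pick a start time somewhere inside a one-hour window beginning at 02:00.
Calendar start = Calendar.getInstance();
start.set(Calendar.HOUR_OF_DAY, 2);
start.set(Calendar.MINUTE, 0);
start.set(Calendar.SECOND, 0);
start.add(Calendar.MILLISECOND, new Random().nextInt(60 * 60 * 1000));

alarmManager.setInexactRepeating(
        AlarmManager.RTC_WAKEUP,
        start.getTimeInMillis(),
        AlarmManager.INTERVAL_DAY,
        alarmIntent);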
https://developer.android.com/training/sync-adapters/running-sync-adapter
CC-MAIN-2018-39
refinedweb
2,231
53.81
10 May 2010 12:00 [Source: ICIS news] LONDON (ICIS news)--UK base interest rates were held at 0.5% by the Bank of England on Monday as widely expected, given an as-yet undecided national election result and continuing financial uncertainty within the EU. The bank also decided not to pump more money into the economy. Its primary rate was unchanged for the fourteenth consecutive month. The monthly announcement had been delayed from the prior week because of the election. The bank's decision was reached also following the agreement struck by The Uncertainty over what eurozone members and other EU states ($1 = €0.78. €1 = £0.86)
http://www.icis.com/Articles/2010/05/10/9357657/bank-of-england-holds-interest-rates-at-0.5.html
CC-MAIN-2014-35
refinedweb
110
54.22
Sometimes you may need to invoke a Linux command from your C++ program and collect the output of the command. The next step is then often to parse that output and do something interesting with it. To help us with this task we have a very handy Linux function called popen. Today I am going to dig into popen and see how we can use it to achieve the task at hand, which is to invoke a Linux command from a C++ program, collect the output, and perhaps print it on the console.

Object modelling
Because it is C++, I will take an object-oriented approach, which is to model the program around C++ classes and structures. To start with, let's call our C++ program SystemAnalyser. Our SystemAnalyser class will have the following functionality:
- Able to launch a Linux command
- Able to capture the output of the command
- Able to display the output on the console.

The above functionality is quite simple and a good place to start; we will implement it with simple C++ class methods. So let's design the class.

#ifndef __SYSTEM_ANALYSER_H__
#define __SYSTEM_ANALYSER_H__

#include <string>

class SystemAnalyser
{
public:
    SystemAnalyser();
    virtual void RunCommand(const char * command);
    virtual void StoreOutput(std::string result);
    virtual void DisplayOutput();
    virtual ~SystemAnalyser();
};

#endif

The above is our blueprint for the SystemAnalyser class from the information we have right now. I have declared this in a .h file and the plan is to have the implementation in a .cpp file later. Our skeleton cpp file looks like this at this point:

#include "SystemAnalyser.h"

SystemAnalyser::SystemAnalyser()
{
}

void SystemAnalyser::RunCommand(const char * command)
{
}

void SystemAnalyser::StoreOutput(std::string result)
{
}

void SystemAnalyser::DisplayOutput()
{
}

SystemAnalyser::~SystemAnalyser()
{
}

Now we are going to write a short main.cpp and try to compile the code at a Linux prompt.

#include <iostream>
#include <memory>
#include "SystemAnalyser.h"

int main()
{
    std::cout << "Hello World" << std::endl;
    auto system = std::make_unique<SystemAnalyser>();
    system->RunCommand("ls -la");
    system->DisplayOutput();
    return 0;
}

Compile the skeleton
I am not going to write a makefile at this point as it would be a bit of overkill. So let's try to compile the above code from the command line:

~/system_analysier/SystemAnalyser$ g++ --std=c++17 main.cpp SystemAnalyser.cpp -o system
~/system_analysier/SystemAnalyser$ ./system
Hello World

So at this point the program is not doing much except printing hello world, and reasonably so, because we have not really implemented anything. Now let's extend the RunCommand method to get it doing the real work. We are going to use the Linux function popen for launching our command. You can read about what it does in its man page; I will briefly discuss it in the next couple of paragraphs.

Popen()
In a nutshell, popen creates a pipe and runs the given command in a new shell process, connecting that pipe to the command. The pipe popen creates is unidirectional, which essentially means you can either read from it or write into it, but not both. You can imagine a pipe as a new stream of data which either you generate by writing into the pipe, or you receive by reading from the pipe. Popen takes a command string, which is what we want to invoke in the shell. We need to be clear here that with popen we are actually invoking a new shell and then executing our command in it.
Implementation

Now with that brief background on popen(), let's try to implement our RunCommand() method. I will first show you a sample implementation and then try to explain it step by step. Here is the implementation:

std::string SystemAnalyser::RunCommand(const char * command)
{
    std::array<char, 128> buffer;
    std::string result;
    std::unique_ptr<FILE, decltype(&pclose)> pipe(popen(command, "r"), pclose);
    if (!pipe) {
        throw std::runtime_error("popen() failed!");
    }
    while (fgets(buffer.data(), buffer.size(), pipe.get()) != nullptr) {
        result += buffer.data();
    }
    std::cout << result << std::endl;
    return result;
}

In the above code snippet we are opening a pipe with the popen() function and collecting the pipe descriptor which popen returns in a pointer named pipe. This is just a name; instead of pipe you could name the pipe descriptor after your first name or anything you like. Let's focus on the line below and try to understand what it is doing:

std::unique_ptr<FILE, decltype(&pclose)> pipe(popen(command, "r"), pclose);

In the above line of code we are using a C++ smart pointer, unique_ptr, to hold the pipe descriptor, as the pipe descriptor is nothing but a pointer itself. The interesting part of creating this unique_ptr is that we are passing a deleter as well as its type. A deleter is what the unique_ptr calls to release the resource created by popen when the pointer goes out of scope. There are really two parts to understand in how the unique_ptr is being created:

- specifying the type of the deleter
- specifying the deleter

The deleter in this case is nothing but a function called pclose(). Pclose is a function provided by the system. What we are telling the C++ compiler in the code below is to use the type of the deleter exactly as it is declared in the system headers:

std::unique_ptr<FILE, decltype(&pclose)>

That's precisely what decltype does: it tells the compiler to take the type of the given expression, in this case the type of a pointer to the system function pclose(), which is known to the compiler because its declaration has been included from the system headers.

Now coming to specifying the deleter function itself. We are passing the library function pclose() as the second constructor argument here:

pipe(popen(command, "r"), pclose)

I am not going to explain the rest of the smart pointer machinery as that is not the focus here. So let's move on to the next part, which is how to capture the output from the pipe. To capture the output from the pipe we simply need to read it using the pipe descriptor. So the first thing we need to check is whether we have a valid pipe descriptor, and we do that as below:

if (!pipe) {
    throw std::runtime_error("popen() failed!");
}

As you can see, we are checking whether the pipe pointer holds a non-zero value; if not, it is not a valid pointer and we throw an exception with an error message.

We need a buffer to hold our output, so we declare one using the C++ array type:

std::array<char, 128> buffer;

It doesn't strictly have to be a C++ std::array, it could be a plain C array for this purpose. But this is a C++ implementation, so I used C++ types rather than C ones.

To read from the pipe we use the C library function fgets(). This is how we read the data from the pipe stream:

while (fgets(buffer.data(), buffer.size(), pipe.get()) != nullptr) {
    result += buffer.data();
}

What we tell fgets() is where to put the data it reads: buffer.data() gives the raw pointer to the first element of the array (remember our array is named 'buffer'). Then we tell it how many bytes it may use, buffer.size(), which is 128; fgets() keeps one byte for the terminating null, so a single fgets() call reads at most 127 characters.

Now, let's look at the line below:

pipe.get()

If you are familiar with smart pointers in C++, you know that unique_ptr is a managed pointer, hence it is not the actual raw pointer.
So to get the actual pointer from the unique_ptr we need to use an accessor function called get(). What the above line does is return the underlying pointer, which is our pipe descriptor, required for reading from the pipe.

We are using a while loop because there may be more data in the pipe than just 128 bytes: fgets() will keep returning data and the loop will keep running and reading more of it until the stream is exhausted, at which point fgets() encounters EOF, returns a null pointer, and the loop stops. In each iteration we append the string that was read to a std::string named 'result' in our implementation. At the end of the while loop the RunCommand function returns the complete string read from the pipe.

We can store this string in a class member of string type; let's call it outputStore and declare it in the private section of the SystemAnalyser class as below:

private:
    std::string outputStore;

Now our StoreOutput() function can contribute something by taking the result string as input and assigning it to the class member string outputStore. Obviously not very path-breaking, but useful for explaining the concept here. So our StoreOutput() function looks like this now:

void SystemAnalyser::StoreOutput(std::string result)
{
    outputStore = result;
}

StoreOutput() is called within the RunCommand() member function and hence it doesn't have to be public. I would ideally move it to the private section of the class, because I would not like to give the end user access to any unnecessary member functions; my intention is to make as many functions and variables private as possible. That is an OOP approach you may want to consider in your own programs as well.

Now, coming to displaying the result. It is very simple: now that our StoreOutput() function has stored the output in a class member string, the display function just has to pass it to the cout object and that should be it. This is what we can do:

void SystemAnalyser::DisplayOutput()
{
    std::cout << outputStore << std::endl;
}

And finally this is what our friends RunCommand() and StoreOutput() look like:

void SystemAnalyser::RunCommand(const char * command)
{
    std::array<char, 128> buffer;
    std::string result;
    std::unique_ptr<FILE, decltype(&pclose)> pipe(popen(command, "r"), pclose);
    if (!pipe) {
        throw std::runtime_error("popen() failed!");
    }
    while (fgets(buffer.data(), buffer.size(), pipe.get()) != nullptr) {
        result += buffer.data();
    }
    StoreOutput(result);
}

void SystemAnalyser::StoreOutput(std::string result)
{
    outputStore = result;
}

If you notice, I have changed the return type of the RunCommand function from std::string to void, because after storing the output as a string in the class member outputStore there is no need to return it. So that is pretty much it. The source code can be downloaded from here. Finally, a sample output of the above program should look like below:

VBR09@mercury:~/system_analysier/SystemAnalyser$ ./system
Hello World
total 64
drwxr-xr-x 2 VBR09 tpl_sky_pdd_jira_user 4096 Dec 23 18:51 .
drwxr-xr-x 4 VBR09 tpl_sky_pdd_jira_user 4096 Dec 23 14:20 ..
-rw-r--r-- 1 VBR09 tpl_sky_pdd_jira_user 683 Dec 23 18:51 SystemAnalyser.cpp
-rw-r--r-- 1 VBR09 tpl_sky_pdd_jira_user 329 Dec 23 18:49 SystemAnalyser.h
-rw-r--r-- 1 VBR09 tpl_sky_pdd_jira_user 244 Dec 23 14:22 main.cpp
-rwxr-xr-x 1 VBR09 tpl_sky_pdd_jira_user 44608 Dec 23 18:51 system

In the next blog we will expand the functionality to include an output parser to have a better look at the interesting bits of the output. Till then, bye!
https://vivekbhadra.wordpress.com/2020/12/23/how-to-invoke-linux-command-in-c-program-and-gather-data/
CC-MAIN-2022-27
refinedweb
1,766
59.94
. As you probably know, Elixir is a functional language used to build fault-tolerant, concurrent systems that handle lots of simultaneous requests. BEAM (Erlang virtual machine) uses processes to perform various tasks concurrently, which means, for example, that serving one request does not block another one. Processes are lightweight and isolated, which means that they do not share any memory and even if one process crashes, others can continue running. BEAM processes are very different from the OS processes. Basically, BEAM runs in one OS process and uses its own schedulers. Each scheduler occupies one CPU core, runs in a separate thread, and may handle thousands of processes simultaneously (that take turns to execute). You can read a bit more about BEAM and multithreading on StackOverflow. Open Telecom Platform (OTP) framework to do server behavior called GenServer that is provided by OTP.! It All Starts With Spawn If you asked me how to create a process in Elixir, I'd answer: spawn it! spawn/1 is a function defined inside the Kernel module that returns a new process. This function accepts a lambda that will be executed in the created process. As soon as the execution has finished, the process exits as well: spawn(fn -> IO.puts("hi") end) |> IO.inspect # => hi # => #PID<0.72.0> So, here spawn returned a new process id. If you add a delay to the lambda, the string "hi" will be printed out after some time: spawn(fn -> :timer.sleep(5000) IO.puts("hi") end) |> IO.inspect # => #PID<0.82.0> # => (after 5 seconds) "hi" Now we can spawn as many processes as we want, and they will be run concurrently: spawn_it = fn(num) -> spawn(fn -> :timer.sleep(5000) IO.puts("hi #{num}") end) end Enum.each( 1..10, fn(_) -> spawn_it.(:rand.uniform(100)) end ) # => (all printed out at the same time, after 5 seconds) # => hi 5 # => hi 10 etc... Here we are spawning ten processes and printing out a test string with a random number. :rand is a module provided by Erlang, so its name is an atom. What's cool is that all the messages will be printed out at the same time, after five seconds. It happens because all ten processes are being executed concurrently. Compare it to the following example that performs the same task but without using spawn/1: dont_spawn_it = fn(num) -> :timer.sleep(5000) IO.puts("hi #{num}") end Enum.each( 1..10, fn(_) -> dont_spawn_it.(:rand.uniform(100)) end ) # => (after 5 seconds) hi 70 # => (after another 5 seconds) hi 45 # => etc... While this code is running, you may go to the kitchen and make another cup of coffee as it will take nearly a minute to complete. Each message is displayed sequentially, which is of course not optimal! You might ask: "How much memory does a process consume?" Well, it depends, but initially it occupies a couple of kilobytes, which is a very small number (even my old laptop has 8GB of memory, not to mention cool modern servers). So far, so good. Before we start working with GenServer, however, let's discuss yet another important thing: passing and receiving messages. Working With Messages It's no surprise that processes (which are isolated, as you remember) need to communicate in some way, especially when it comes to building more or less complex systems. To achieve this, we can use messages. A message can be sent using a function with quite an obvious name: send/2. It accepts a destination (port, process id or a process name) and the actual message. After the message is sent, it appears in the mailbox of a process and can be processed. 
As you see, the general idea is very similar to our everyday activity of exchanging emails. A mailbox is basically a "first in first out" (FIFO) queue. After the message is processed, it is removed from the queue. To start receiving messages, you need—guess what!—a receive macro. This macro contains one or more clauses, and a message is matched against them. If a match is found, the message is processed. Otherwise, the message is put back into the mailbox. On top of that, you can set an optional after clause that runs if a message was not received in the given time. You can read more about send/2 and receive in the official docs. Okay, enough with the theory—let's try to work with the messages. First of all, send something to the current process: send(self(), "hello!") The self/0 macro returns a pid of the calling process, which is exactly what we need. Do not omit round brackets after the function as you'll get a warning regarding the ambiguity match. Now receive the message while setting the after clause: receive do msg -> IO.puts "Yay, a message: #{msg}" msg after 1000 -> IO.puts :stderr, "I want messages!" end |> IO.puts # => Yay, a message: hello! # => hello! Note that the clause returns the result of evaluating the last line, so we get the "hello!" string. Remember that you may introduce as many clauses as needed: Here we have four clauses: one to handle a success message, another to handle errors, and then a "fallback" clause and a timeout.. Now that you know how to spawn processes, send and receive messages, let's take a look at a slightly more complex example that involves creating a simple server responding to various messages. Working With Server Process. Start by creating the server and the looping function: defmodule MathServer do def start do spawn &listen/0 end defp listen do receive do {:sqrt, caller, arg} -> IO.puts arg _ -> IO.puts :stderr, "Not implemented." end listen() end end So we spawn a process that keeps listening to the incoming messages. After the message is received, the listen/0 function gets called again, thus creating an endless loop. Inside the listen/0 function, we add support for the :sqrt message, which will calculate the square root of a number. The arg will contain the actual number to perform the operation against. Also, we are defining a fallback clause. You may now start the server and assign its process id to a variable: math_server = MathServer.start IO.inspect math_server # => #PID<0.85.0> Brilliant! Now let's add an implementation function to actually perform the calculation: defmodule MathServer do # ... def sqrt(server, arg) do send(:some_name, {:sqrt, self(), arg}) end end Use this function now: MathServer.sqrt(math_server, 3) # => 3 For now, it simply prints out the passed argument, so tweak your code like this to perform the mathematical operation: defmodule MathServer do # ... defp listen do receive do {:sqrt, caller, arg} -> send(:some_name, {:result, do_sqrt(arg)}) _ -> IO.puts :stderr, "Not implemented." end listen() end defp do_sqrt(arg) do :math.sqrt(arg) end end Now, yet another message is sent to the server containing the result of the computation. What's interesting is that the sqrt/2 function simply sends a message to the server asking to perform an operation without waiting for the result. So, basically, it performs an asynchronous call. 
Obviously, we do want to grab the result at some point in time, so code another public function: def grab_result do receive do {:result, result} -> result after 5000 -> IO.puts :stderr, "Timeout" end end Now utilize it: math_server = MathServer.start MathServer.sqrt(math_server, 3) MathServer.grab_result |> IO.puts # => 1.7320508075688772 It works! Of course, you can even create a pool of servers and distribute tasks between them, achieving concurrency. It is convenient when the requests do not relate to each other. Meet GenServer All right, we have covered a handful of functions allowing us to create long-running server processes and send and receive messages. This is great, but we have to write too much boilerplate code that starts a server loop ( start/0), responds to messages ( listen/0 private function), and returns a result ( grab_result/0). In more complex situations, we might also need to main a shared state or handle the errors. As I said at the beginning of the article, there is no need to reinvent a bicycle. Instead, we can utilize GenServer behavior that already provides all the boilerplate code for us and has great support for server processes (as we saw in the previous section). Behaviour in Elixir is a code that implements a common pattern. To use GenServer, you need to define a special callback module that satisfies the contract as dictated by the behaviour. Specifically, it should implement some callback functions, and the actual implementation is up to you. After the callbacks are written, the behavior module may utilize them. As stated by the docs, GenServer requires six callbacks to be implemented, though they have a default implementation as well. It means that you can redefine only those that require some custom logic. First things first: we need to start the server before doing anything else, so proceed to the next section! Starting the Server To demonstrate the usage of GenServer, let's write a CalcServer that will allow users to apply various operations to an argument. The result of the operation will be stored in a server state, and then another operation may be applied to it as well. Or a user may get a final result of the computations. First of all, employ the use macro to plug in GenServer: defmodule CalcServer do use GenServer end Now we will need to redefine some callbacks. The first is init/1, which is invoked when a server is started. The passed argument is used to set an initial server's state. In the simplest case, this callback should return the {:ok, initial_state} tuple, though there are other possible return values like {:stop, reason}, which causes the server to immediately stop. I think we can allow users to define the initial state for our server. However, we must check that the passed argument is a number. So use a guard clause for that: defmodule CalcServer do use GenServer def init(initial_value) when is_number(initial_value) do {:ok, initial_value} end def init(_) do {:stop, "The value must be an integer!"} end end Now, simply start the server by using the start/3 function, and provide your CalcServer as a callback module (the first argument). The second argument will be the initial state: GenServer.start(CalcServer, 5.1) |> IO.inspect # => {:ok, #PID<0.85.0>} If you try to pass a non-number as a second argument, the server won't be started, which is exactly what we need. Great! Now that our server is running, we can start coding mathematical operations. Handling Asynchronous Requests Asynchronous requests are called casts in GenServer's terms. 
To perform such a request, use the cast/2 function, which accepts a server and the actual request. It is similar to the sqrt/2 function that we coded when talking about server processes. It also uses the "fire and forget" approach, meaning that we are not waiting for the request to finish. To handle the asynchronous messages, a handle_cast/2 callback is used. It accepts a request and a state and should respond with a tuple {:noreply, new_state} in the simplest case (or {:stop, reason, new_state} to stop the server loop). For instance, let's handle an asynchronous :sqrt cast: def handle_cast(:sqrt, state) do {:noreply, :math.sqrt(state)} end That's how we maintain the state of our server. Initially the number (passed when the server was started) was 5.1. Now we update the state and set it to :math.sqrt(5.1). Code the interface function that utilizes cast/2: def sqrt(pid) do GenServer.cast(pid, :sqrt) end To me, this resembles an evil wizard who casts a spell but doesn't care about the impact it causes. Note that we require a process id to perform the cast. Remember that when a server is successfully started, a tuple {:ok, pid} is returned. Therefore, let's use pattern matching to extract the process id: {:ok, pid} = GenServer.start(CalcServer, 5.1) CalcServer.sqrt(pid) Nice! The same approach can be used to implement, say, multiplication. The code will be a bit more complex as we'll need to pass the second argument, a multiplier: def multiply(pid, multiplier) do GenServer.cast(pid, {:multiply, multiplier}) end The cast function supports only two arguments, so I need to construct a tuple and pass an additional argument there. Now the callback: def handle_cast({:multiply, multiplier}, state) do {:noreply, state * multiplier} end We can also write a single handle_cast callback that supports operation as well as stopping the server if the operation is unknown: def handle_cast(operation, state) do case operation do :sqrt -> {:noreply, :math.sqrt(state)} {:multiply, multiplier} -> {:noreply, state * multiplier} _ -> {:stop, "Not implemented", state} end end Now use the new interface function: CalcServer.multiply(pid, 2) Great, but currently there is no way to get a result of the computations. Therefore, it is time to define yet another callback. Handling Synchronous Requests If asynchronous requests are casts, then synchronous ones are named calls. To run such requests, utilize the call/3 function, which accepts a server, request, and an optional timeout which equals five seconds by default. Synchronous requests are used when we want to wait until the response actually arrives from the server. The typical use case is getting some information like a result of computations, as in today's example (remember the grab_result/0 function from one of the previous sections). To process synchronous requests, a handle_call/3 callback is utilized. It accepts a request, a tuple containing the server's pid, and a term identifying the call, as well as the current state. In the simplest case, it should respond with a tuple {:reply, reply, new_state}. Code this callback now: def handle_call(:result, _, state) do {:reply, state, state} end As you see, nothing complex. The reply and the new state equal the current state as I don't want to change anything after the result was returned. Now the interface result/1 function: def result(pid) do GenServer.call(pid, :result) end This is it! 
The final usage of the CalcServer is demonstrated below: {:ok, pid} = GenServer.start(CalcServer, 5.1) CalcServer.sqrt(pid) CalcServer.multiply(pid, 2) CalcServer.result(pid) |> IO.puts # => 4.516635916254486 Aliasing It becomes somewhat tedious to always provide a process id when calling the interface functions. Luckily, it is possible to give your process a name, or an alias. This is done upon the starting of the server by setting name: GenServer.start(CalcServer, 5.1, name: :calc) CalcServer.sqrt CalcServer.multiply(2) CalcServer.result |> IO.puts Note that I am not storing pid now, though you may want to do pattern matching to make sure that the server was actually started. Now the interface functions become a bit simpler: def sqrt do GenServer.cast(:calc, :sqrt) end def multiply(multiplier) do GenServer.cast(:calc, {:multiply, multiplier}) end def result do GenServer.call(:calc, :result) end Just don't forget that you can't start two servers with the same alias. Alternatively, you may introduce yet another interface function start/1 inside your module and take advantage of the __MODULE__/0 macro, which returns the current module's name as an atom: Termination Another callback that can be redefined in your module is called terminate/2. It accepts a reason and the current state, and it's called when a server is about to exit. This may happen when, for example, you pass an incorrect argument to the multiply/1 interface function: # ... CalcServer.multiply(2) The callback may look something like this: def terminate(_reason, _state) do IO.puts "The server terminated" end Conclusion In this article we have covered the basics of concurrency in Elixir and discussed functions and macros like spawn, receive, and send. You have learned what processes are, how to create them, and how to send and receive messages. Also, we've seen how to build a simple long-running server process that responds to both synchronous and asynchronous messages. On top of that, we have discussed GenServer behavior and have seen how it simplifies the code by introducing various callbacks. We have worked with the init, terminate, handle_call and handle_cast callbacks and created a simple calculating server. If something seemed unclear to you, don't hesitate to post your questions! There is more to GenServer, and of course it's impossible to cover everything in one article. In my next post, I will explain what supervisors are and how you can use them to monitor your processes and recover them from errors. Until then, happy coding!
https://code.tutsplus.com/articles/what-is-genserver-and-why-should-you-care--cms-29143?ec_unit=translation-info-language
CC-MAIN-2022-33
refinedweb
2,775
64.61
I want my program to have the street's and building's data in xml format. But I have no idea how to start this project. I am new to using api's and reading about the api does not tell me how do I actually write api call in my c++ code and retrieve the data. What sources should I refer to write a simple program to say : "retrieve bounding box xml file in osm map" All queries are on wiki page, but where to write this queries and required setup for starting with this api usage is evasive. asked 24 Dec '12, 06:25 Anubha 31●3●3●6 accept rate: 0% edited 24 Dec '12, 06:27 Implementing the API call in your project is very programming/scripting language-specific and hasn't much to do with OSM. This is a minimal working example in plain C/C++ with only minimal error checking and no response processing: #include <iostream> #include <string> #include <vector> #include <cstdio> #include <arpa/inet.h> #include <sys/socket.h> const std::string host = "81.19.81.199"; // IP of overpass.osm.rambler.ru const int port = 80; const std::string query = "GET /cgi/interpreter?data=node%5Bname%3DGielgen%5D%3Bout%3B HTTP/1.1\r\n" "Host: overpass.osm.rambler.ru\r\n" "User-Agent: SteveC\r\n" "Accept: */*\r\n" "Connection: close\r\n" "\r\n"; int main(int argc, char* argv[]) { int sock = socket(AF_INET, SOCK_STREAM, 0); if (sock == -1) { perror("error opening socket"); return -1; } struct sockaddr_in sin; sin.sin_port = htons(port); sin.sin_addr.s_addr = inet_addr(host.c_str()); sin.sin_family = AF_INET; if (connect(sock, (struct sockaddr *)&sin, sizeof(sin)) == -1) { perror("error connecting to host"); return -1; } const int query_len = query.length() + 1; // trailing '\0' if (send(sock, query.c_str(), query_len, 0) != query_len) { perror("error sending query"); return -1; } const int buf_size = 1024 * 1024; while (true) { std::vector<char> buf(buf_size, '\0'); const int recv_len = recv(sock, &buf[0], buf_size - 1, 0); if (recv_len == -1) { perror("error receiving response"); return -1; } else if (recv_len == 0) { std::cout << std::endl; break; } else { std::cout << &buf[0]; } } return 0; } Example output: HTTP/1.1 200 OK Server: nginx/1.0.9 Date: Mon, 24 Dec 2012 11:45:34 GMT Content-Type: application/osm3s+xml Connection: close Content-Length: 1893 Access-Control-Allow-Origin: * <?xml version="1.0" encoding="UTF-8"?> <osm version="0.6" generator="Overpass API"> <note>The data included in this document is from. The data is made available under ODbL.</note> <meta osm_base="2012-12-24T11:44:01Z"/> <node id="371597317" lat="50.7412721" lon="7.1927120"> <tag k="is_in" v="Bonn,Regierungsbezirk Köln,Nordrhein-Westfalen,Bundesrepublik Deutschland,Europe"/> <tag k="name" v="Gielgen"/> <tag k="place" v="suburb"/> </node> [cropped] </osm> I strongly suggest using a third-party library for the network/HTTP code and you will probably also want to use a third-party library for parsing the XML/JSON/whatever response. A much simpler option is to use a scripting language like Python, Ruby, or - if you like camels - Perl. answered 24 Dec '12, 11:51 scai ♦ 32.2k●20●296●445 accept rate: 23% Thank you again (again as you replied in stackOverflow), I ran the code, and it worked, its first time I connected to an internet server. Will make project in which user will be able to ride a car in any part of world he/she wishes, buildings will be mostly random. Ah, I didn't remember your username :) An API call is done via HTTP, so your program needs to open a network connection. Libraries for C++ that do so are discussed for example here. 
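If you do go the library route, a rough, untested sketch of the same Overpass request using libcurl instead of raw sockets (link with -lcurl; the URL is just the example query from above) could look like this:

#include <curl/curl.h>
#include <iostream>
#include <string>

/* libcurl calls this for each chunk of the response body */
static size_t collect(char* data, size_t size, size_t nmemb, void* userp)
{
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return -1;

    std::string response;
    curl_easy_setopt(curl, CURLOPT_URL,
        "http://overpass.osm.rambler.ru/cgi/interpreter?data=node%5Bname%3DGielgen%5D%3Bout%3B");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        std::cerr << "request failed: " << curl_easy_strerror(res) << std::endl;
    else
        std::cout << response << std::endl;   /* the OSM XML response */

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}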
answered 24 Dec '12, 07:08 Roland Olbricht
Also checkout file handling in Osmium.
thanks, any code samples with simple osm api calls would be really helpful.
https://help.openstreetmap.org/questions/18684/i-want-to-use-mainoverpass-api-for-retrieving-data-in-c-program
CC-MAIN-2021-17
refinedweb
810
62.17
I'm looking at writing a function that when called will fetch an instance of a Google Maps object on the page if there is one present, using JQuery. For instance if there is indeed a Map on the page, it would look something like this: <div class="map">....</div> $('.map'); You can't, unless the JavaScript code that creates the Google Map object makes it very easy for you. The Google Maps object is completely independent of the DOM element used to display it. There is no way to access the JavaScript object in reverse given access to its DOM element or with jQuery. (Even if you could, looking for a div with a "map" class wouldn't necessarily be fruitful because the map's DOM element can have arbitrary class names and ID.) In order to access the Google Maps JavaScript object you would have to be able to get at the reference to the variable for the instantiated Google Maps object. And the only way that would be possible is if the object is somewhere in the global namespace that you can access it. And even then you would need to manually scan the entire global namespace to try and find a valid Map, or know ahead of time where the variable is located. Snazzymaps.com is an example I know of where the Google Map object has been added to the global namespace. In the map editor, it's accessible as window.editor.map. However, you absolutely cannot count on every Google Map in the wild being this accessible. They might be a private variable, inside a closure or anonymous function, etc. So don't bet on it.
https://codedump.io/share/EbWqladvif7g/1/get-instance-of-google-maps-object-using-jquery
CC-MAIN-2017-09
refinedweb
282
68.7
CGI::Application::Plugin::I18N - I18N and L10N methods for CGI::App Nothing is exported by default. You can specify a list of individual methods or use one of the groups :std, :max or :min. use CGI::Application::Plugin::I18N qw( :std ); Within your setup, cgiapp_init, cgiapp_prerun or specific runmode routine add the line $self->i18n_config(); Or $self->i18n_config( %options ); %options are the same as for Locale::Maketext::Simple. If none are passed the following default are used:- %DEFAULT_OPTIONS = ( Path => "$RealBin/I18N", Style => 'gettext', Export => '_maketext', Decode => 1, Encoding => '', ); $RealBin being the folder from which the executed cgi script is running. Note that Export must remain as _maketext for this module to function properly! For instance if you wanted to use maketext style markup in your lexicons you would use the line:- $self->i18n_config( Style => 'maketext' ); Then use the localtext method to localize text:- print $self->localtext( 'Hello World!' ); This module is a wrapper around Locale::Maketext::Simple by Audrey Tang. It extends the CGI::Application object with variour methods to control the localization of text. A "FAQ" is provided with the aim to fill in the gaps. Runs the initial configuration of Locale::Maketext::Simple and runs it's import within your calling objects namespace (Your CGI::App class) Sets the current language for localtext output. Usage:- $self->localtext_langs( LIST ); LIST must consist of valid language tags as defined in RFC3066. See I18N::LangTags for more details. If LIST is ommited then the method will attempt to figure out the users locale using I18N::LangTags::Detect. This method will also return the list of language tags as an array reference. my $langtags = $self->localtext_langs( LIST ); print @$langtags; This method returns the currently selected language. This is the tag that was actually available for use, after searching through the localtext_langs list. This is the name of the module used in your MyAPP::I18N::XXX namespace (where XXX is the name of the lexicon used) my $lexicon = $self->localtext_lang; This method returns the RFC3066 language tag for the currently selected language. This differs from the above method which would most likely return en_us for American English, whereas this method would return en-us. my $langtag = $self->localtext_lang_tag; This is the method that actually does the work. print $self->localtext( 'Hello World!' ); You can choose to import a shorter method called loc that works the same way as localtext. You need to specify this when you use the module:- use CGI::Application::Plugin::I18N qw( loc ); print $self->loc( 'Hello World!' ); :max exports:- i18n_config localtext_langs localtext_lang localtext_lang_tag localtext loc :std exports:- i18n_config localtext_langs localtext_lang localtext_lang_tag localtext :min exports:- i18n_config localtext I kept a blog on how I put this module together and all the material I looked through in order to understand internationalization. Think of it as a kind of hash. Where the text you use (usually english) has a corrosponding value in the local language. So the 'Hello world' under a German lexicon would have the value 'Hallo welt'. Yes I've written one. CGI::Application::Plugin::I18N::Guide See Guide.pod which is part of this distribution. It'll walk you through what you need to know, and how to make your lexicons. 
Catalyst::Plugin::I18N - The module this one was heavily based on Locale::Maketext::Simple - Making it possible Locate::Maketext - Doing all the hard work CGI::Application - Providing the framework And all others I haven't yet mentioned. Bristol and Bath Perl moungers is renound for being the friendliest Perl group in the world. You don't have to be from the UK to join, everyone is welcome on the list:- Lyle Hopkins ;)
http://search.cpan.org/dist/CGI-Application-Plugin-I18N/lib/CGI/Application/Plugin/I18N.pm
CC-MAIN-2016-50
refinedweb
604
54.42
Before we finish this chapter with a look at integrated development environments and in particular KDevelop, let's do some fun stuff: three-dimensional graphics programming using the OpenGL libraries! Of course, it would be far too ambitious to give proper coverage of OpenGL programming in this book, so we just concentrate on a simple example and show how to get started and how OpenGL integrates with two popular toolkits.

The GL Utility Toolkit was written by Mark Kilgard of SGI fame. It is not free software, but it comes with full source code and doesn't cost anything. The strength of GLUT is that it is tailored specifically for being very simple to get started with programming OpenGL. Mesa comes with a copy of GLUT included, and a free software reimplementation of GLUT is also available. Basically, GLUT helps with initial housekeeping, such as setting up a window and so on, so you can quickly get to the fun part, namely writing OpenGL code.

To use GLUT, you first need to access its definitions:

#include <GL/glut.h>

Next, call a couple of initialization functions in main( ): glutInit(&argc, argv) to initialize GLUT and allow it to parse command-line parameters, and then:

glutInitDisplayMode( unsigned int mode )

where mode is a bitwise OR of some constants from glut.h. We will use GLUT_RGBA|GLUT_SINGLE to get a true-color single-buffered window. The window size is set using:

glutInitWindowSize(500,500)

and finally the window is created using:

glutCreateWindow("Some title")

To be able to redraw the window when the window system requires it, we must register a callback function. We register the function disp( ) using:

glutDisplayFunc(disp)

The function disp( ) is where all the OpenGL calls happen. In it, we start by setting up the transformation for our object. OpenGL uses a number of transformation matrices, one of which can be made "current" with the glMatrixMode(GLenum mode) function. The initial matrix is GL_MODELVIEW, which is used to transform objects before they are projected from 3D space to the screen. In our example, an identity matrix is loaded and scaled and rotated a bit. Next the screen is cleared and a three-pixel-wide white pen is configured. Then the actual geometry calls happen. Drawing in OpenGL takes place between glBegin( ) and glEnd( ), with the parameter given to glBegin( ) controlling how the geometry is interpreted. We want to draw a simple box, so first we draw four line segments to form the long edges of the box, followed by two rectangles (with GL_LINE_LOOP) for the end caps of the box. When we are done we call glFlush( ) to flush the OpenGL pipeline and make sure the lines are drawn on the screen. To make the example slightly more interesting, we add a timer callback timeout( ) with the function glutTimerFunc( ) to change the model's rotation and redisplay it every 50 milliseconds. Here is the complete example:

#include <GL/glut.h>

static int glutwin;
static float rot = 0.;

static void disp(void)
{
    float scale=0.5;

    /* transform view */
    glLoadIdentity( );
    glScalef( scale, scale, scale );
    glRotatef( rot, 1.0, 0.0, 0.0 );
    glRotatef( rot, 0.0, 1.0, 0.0 );
    glRotatef( rot, 0.0, 0.0, 1.0 );

    /* do a clearscreen */
    glClear(GL_COLOR_BUFFER_BIT);

    /* draw something */
    glLineWidth( 3.0 );
    glColor3f( 1., 1., 1. );

    glBegin( GL_LINES );
    /* long edges of box */
    glVertex3f( 1.0, 0.6, -0.4 );
    glVertex3f( 1.0, 0.6, 0.4 );
    glVertex3f( 1.0, -0.6, -0.4 );
    glVertex3f( 1.0, -0.6, 0.4 );
    glVertex3f( -1.0, -0.6, -0.4 );
    glVertex3f( -1.0, -0.6, 0.4 );
    glVertex3f( -1.0, 0.6, -0.4 );
    glVertex3f( -1.0, 0.6, 0.4 );
    glEnd( );

    glBegin( GL_LINE_LOOP );
    /* end cap */
    glVertex3f( 1.0, 0.6, -0.4 );
    glVertex3f( 1.0, -0.6, -0.4 );
    glVertex3f( -1.0, -0.6, -0.4 );
    glVertex3f( -1.0, 0.6, -0.4 );
    glEnd( );

    glBegin( GL_LINE_LOOP );
    /* other end cap */
    glVertex3f( 1.0, 0.6, 0.4 );
    glVertex3f( 1.0, -0.6, 0.4 );
    glVertex3f( -1.0, -0.6, 0.4 );
    glVertex3f( -1.0, 0.6, 0.4 );
    glEnd( );

    glFlush( );
}

static void timeout( int value )
{
    rot++;
    if( rot >= 360. )
        rot = 0.;
    glutPostRedisplay( );
    glutTimerFunc( 50, timeout, 0 );
}

int main( int argc, char** argv )
{
    /* initialize glut */
    glutInit(&argc, argv);

    /* set display mode */
    glutInitDisplayMode(GLUT_RGBA | GLUT_SINGLE);

    /* output window size */
    glutInitWindowSize(500,500);
    glutwin = glutCreateWindow("Running Linux 3D Demo");
    glutDisplayFunc(disp);

    /* define the color we use to clearscreen */
    glClearColor(0.,0.,0.,0.);

    /* timer for animation */
    glutTimerFunc( 0, timeout, 0 );

    /* enter the main loop */
    glutMainLoop( );
    return 0;
}

As an example of how to do OpenGL programming with a more general-purpose GUI toolkit, we will redo the GLUT example from the previous section in C++ with the Qt toolkit. Qt is available under the GPL license and is used by large free software projects such as KDE. We start out by creating a subclass of QGLWidget, which is the central class in Qt's OpenGL support. QGLWidget works like any other QWidget, with the main difference being that you do the drawing with OpenGL instead of a QPainter. The callback function used for drawing with GLUT is now replaced with a reimplementation of the virtual method paintGL( ), but otherwise it works the same way. GLUT took care of adjusting the viewport when the window was resized, but with Qt, we need to handle this manually. This is done by overriding the virtual method resizeGL(int w, int h). In our example we simply call glViewport( ) with the new size. Animation is handled by a QTimer that we connect to a method timeout( ) to have it called every 50 milliseconds. The updateGL( ) method serves the same purpose as glutPostRedisplay( ) in GLUT: to make the application redraw the window. The actual OpenGL drawing commands have been omitted because they are exactly the same as in the previous example. Here is the full example:

#include <qapplication.h>
#include <qtimer.h>
#include <qgl.h>

class RLDemoGLWidget : public QGLWidget
{
    Q_OBJECT
public:
    RLDemoGLWidget(QWidget* parent, const char* name = 0);

public slots:
    void timeout( );

protected:
    virtual void resizeGL(int w, int h);
    virtual void paintGL( );

private:
    float rot;
};

RLDemoGLWidget::RLDemoGLWidget(QWidget* parent, const char* name)
    : QGLWidget(parent,name), rot(0.)
{
    QTimer* t = new QTimer( this );
    t->start( 50 );
    connect( t, SIGNAL( timeout( ) ), this, SLOT( timeout( ) ) );
}

void RLDemoGLWidget::resizeGL(int w, int h)
{
    /* adjust viewport to new size */
    glViewport(0, 0, (GLint)w, (GLint)h);
}

void RLDemoGLWidget::paintGL( )
{
    /* exact same code as disp( ) in GLUT example */
    ...
}

void RLDemoGLWidget::timeout( )
{
    rot++;
    if( rot >= 360. )
        rot = 0.;
    updateGL( );
}

int main( int argc, char** argv )
{
    /* initialize Qt */
    QApplication app(argc, argv);

    /* create gl widget */
    RLDemoGLWidget w(0);
    app.setMainWidget(&w);
    w.resize(500,500);
    w.show( );
    return app.exec( );
}

#include "main.moc"
CC-MAIN-2017-04
refinedweb
1,029
62.88
Custom Labels/Indicator with events from a strategy I developed a strategy, and i would like to visually view specific "events". The logic of determining the events can be found in the strategy. in other words: In the strategy i have all the if statements in order to determine the exact time that i want to see this event on the graph. Is it possible to add labels on bars, from the strategy? Alternatively, and this is not so pretty solution but it will work, i think about developing a custom indicator and change it from 0 to 1 for just 1 bar when this event occurs, but i'm not sure how to do it from the strategy itself (i don't want to duplicate the entire logic of the strategy to a new indicator) How can i do that? Do you have any code sample for that? Write an observer. Code samples - Thanks, i see that observers can display events like buys/sells. But how do i signal into an observer from the strategy itself? how can i code something like this? if (condition): myobserver.event1[0]=1 #change value in the observer somehow is it possible to do so from the strategy? I don't want to duplicate the entire if-logic to the observer just to display a value. I will also mention that i do see a specific case that it is possible: in the Observers example i see the OrderObserver that displays information about orders, but it is only because the strategy generates orders and the observer catches them using the next() function. but what about just any events, without generating orders? just generating the event from the strategy? OK i found a solution Observer can access the strategy using self._owner so we can do the following: in the Observer, we listen to anything inside an self._owner.events array (i just made up the name .events): def next(self): while len(self._owner.events)>0: event,value = self._owner.events.pop(0) getattr(self.lines,event)[0]=value and in the Strategy, we just send events: def __init__(self): self.events=[] if (CONDITION): self.events.append(["event_x",1]) Note that event_x must be the name of a line defined in the observer. if you define self.eventsin the strategy as btline instead of the list, than in the observer's next()its values can be accessed simply last_event_value = self._owner.events[0] Here is the link to observer I wrote to plot stop and take prices defined as self.sl_priceand self.tp_pricein the strategy -
https://community.backtrader.com/topic/3060/custom-labels-indicator-with-events-from-a-strategy/1
CC-MAIN-2021-25
refinedweb
432
65.73
See SONAR-1276 : The setup form is not available in the homepage anymore. Administrators must explicitly browse to http://<server>/setup. New rules (not activated in default Quality profiles) : If some of the following rules are activated in your quality profiles, then you have to manually deactivate them :. See SONAR-1327 : the complexity distribution by class was badly calculated. It now covers all classes, but not only public classes. The version of the Cobertura Maven plugin can be overridden. Default value is 2.3. The new version must be set in the definition of the cobertura plugin in pom.xml. See SONAR-1364 to log all HTTP requests to SonarQube server. This feature is deactivated by default and is ignored when SonarQube is deployed to a JEE server. The main functionality of version 2.0 is about byte code analysis to report on design and architecture. It is however possible to not activate this new feature by using the advanced parameter or through the UI.
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=139166505
CC-MAIN-2014-15
refinedweb
165
59.19
Hi, Im trying to write some code which adds the combined height and weight of two dogs, returns the result and then displays it..The problem is, the result returned is wrong (i think the operation is just adding two memory addresses)..The =operator code is returning rubbish as well, i think the problem is with the operator= function because when i remove it the other works fine.. Please see attached code Thanks //DOG: main.cpp #include "Dog.h" int main() { Alsation a(20, 50); Alsation b(30, 50); Alsation c; c = a + b; c.display(); d = a; d.display(); } // Dog.h #include "Animal.h" #ifndef Dog_h #define Dog_h using namespace std; class Dog{ protected: int ID; public: int getID(){return ID;} //17. Inline methods.. void setID(int newID){ID=newID;} }; class Retriever; class Alsation: public Dog{ private: int height, weight; public: Alsation(int aHeight, int aWeight); Alsation(){}; friend void mixBreed(Alsation a, Retriever b); Alsation operator+ (Alsation ); Alsation operator= (const Alsation &); // bool Alsation::operator == (Alsation x); void display(); }; #endif // Dog.cpp #include "Dog.h" using namespace std; Alsation::Alsation(int aHeight, int aWeight): height(aHeight), weight(aWeight){} Alsation Alsation::operator+ ( Alsation x){ int h = height + x.height; int w = weight + x.weight; return Alsation(h,w); } void Alsation::display(){ cout <<"Combined height = " << height << endl; cout <<"Combined weight = " << weight << endl; } Alsation Alsation::operator= (const Alsation &y){ int h = y.height; int w = y.weight; return *this;}
https://www.daniweb.com/programming/software-development/threads/325131/overloading-the-operator-and-operator
CC-MAIN-2017-17
refinedweb
236
50.02
Ariel is a research project to investigate the design of user interfaces that go beyond the standard mouse and keyboard input modalities. It aims to take advantage of natural means of communication such as speech, gestures, and facial expressions. The challenge is to understand the properties of these new modalities and to uncover the appropriate interface elements to use with them. Under any of the current window systems, a set of standard elements are used for building a user interface. This includes items like buttons, menus, scrollbars, and dialog boxes. These items have been stable for many years with a few changes like adding hyper-links and image-maps as new basic elements that most user understand. There are also a number of concepts common to these interfaces. Things like "cut and paste" and "drag and drop". Many people are attempting to take these same elements and implement them using spoken language systems. Ideas like "speakable links" where the user can say the words of a hyperlink instead of clicking on it, or voice menus where the user can speak the items in a traditional menu instead of using the mouse in the typical fashion, are the standard approach to this problem. These interfaces lead to praise such as that found in a recent Time Magazine review of IBM's latest VoiceType software. "During our demo the system was impressive, though not yet easier than using a mouse." Time, May 13, 1996 I feel that the great lesson to be drawn from all this is that speech recognition makes a lousy mouse. Speech (and the other elements of language based interfaces) differs from mice and keyboards in at least two fundamental ways. First, these interfaces are inherently noisy. It is almost never possible to guarantee any behavior of such a system with less than a 5% error rate. Second, these systems are extremely expressive. Humans have been using gestures and language to communicate with each other. These differing requirements insist that a new set of basic interface elements be discovered. Ariel is a project to discover these primitives by building a prototype system and working with it every day. The thoroughly overused personal information management task was chosen because of its reasonable size and complexity, as well as the fact that almost everyone uses some version of such a system every day. The project is currently 4 months old. It consists of minimal e-mail and web browsing capabilities. The basic user interface elements at the moment are the same ones as are found in other similar systems. (Well, you have to start somewhere). Building a system for research into user interfaces has given me the opportunity to investigate the great variety of graphical modules available for python. I currently have python binaries installed with the following GUI's. Almost all of these systems (with the exception of OpenGL) make it fairly easy to create a standard GUI under python. Many of them can even produce code that is portable between different operating systems, and some even run with the appropriate native look and feel. However, I'm not interested in the elements of a standard GUI. If I thought that these were the appropriate elements, then that's what I'd be using. What I wanted in a graphics system is as follows: The X11 and PythonWin systems are platform specific, so they fail the test immediately. TkInter, Rivet, and WPY are all based on Tk (at least under X). In order to get the sort of generality I desire under Tk, you must use the canvas object. 
This is unfortunately much to slow to do anything useful. WXWindows was the most promising of the candidates that I looked at (other than OpenGL); however, I found that its great pains to maintain native look-and-feel (which is a big plus to most people) were a big hinderance to my plans. "The OpenGL graphics system is a software interface to graphics hardware. It allows you to create interactive programs that produce color images of moving three-dimensional (and two-dimensional) objects." - OpenGL Programming Guide Open GL is a portable graphics standard, currently supported on all major computer platforms. Because the high-performance implementation of OpenGL can be expensive on some platforms, it is reassuring that a freely available implementation of the OpenGL standard exists (Mesa). This implementation compiles almost everywhere that python does (not including DOS) and is an efficient implementation of the API on top of the native window system. OpenGL is fast. It was designed to produce "color images of moving three-dimensional objects." This is an extremely computationally demanding operation. Speed of the basic drawing primitives was and continues to be a primary motivating factor in the system's design. With systems that have hardware graphics acceleration, this can produce performance that is even faster than raw XLib coding (the fastest (and ugliest) of the other systems considered here). The library's design is clean, easy to understand and natural to work with. It gives you the power to perform the finest grained manipulations on the graphical output without requiring the sorts of obscure technical details of working with something like Xlib. The great thing about building this project in python has been the phenomenal speed with which I was able to go from nothing to a working prototype. Now that I have something working, there are one or two chunks of code that are acting as bottle necks in the program. The standard python approach is to move these speed critical blocks of code down into C. Having translated a fairly large amount of simple python code to C recently, I've noticed that much of the translation is trivial if I include static types in the function specification. To use the most overused example of function definition, here it is folks, the venerable factorial function. def factorial(n): if n == 0: return 1 else: return n*factorial(n-1) Now, what does this look like in C? long py2c_Factorial_factorial(long n) { if (n == 0) { return 1; } else { return (n*py2c_Factorial_factorial( (n-1) ); } } The reason that some of the names are a little long, and that there are a few more parenthesis than you might expect in such a piece of code is that this was automatically generated from the python source. Of course, the key to making this work was the addition to the python source file of the following single line. #DECLARE factorial:(long, long) This tells the translation program the signature of the factorial function. Without this information, doing anything useful with this function is virtually impossible. Including static types in this manner is definately a sacrifice of python's outstanding dynamic properties, but it's exactly the sacrifice expected in order to translate to efficient C code. I've just recently discovered the MESS project, and it's interesting to note that that framework involves similar static type declarations for python classes. What it has that this work currently doesn't is the ability to declare a variable as a generic python object. 
This of course sacrifices much of the speed gains of translating to C, so its not a top priority for me. The final step in the translation to C is to produce a python interface to the newly generated C function so that it can be used as before. This is done fairly trivially with Bgen (since the function's signature is known). Obviously I haven't just created a general purpose tool for translating python code to C. If I'd done that there'd have been much more fanfare, and I anticipate that there'd be much rejoicing throughout the land. Instead, I've gone a long way towards automating the process of translating very simple python code to C. The key idea is that static typing has its uses, and that while all of us love the dynamic nature of python, it can be very useful to abandon it at particular moments. I've had many situations (particularly with my numeric code) where I've wanted to statically type a function, not for speed requirements, but for code reliability. I think that there are some interesting gains to be made by continuing to work in this direction.
http://www.python.org/workshops/1996-06/papers/hugunin.IPCIV.html
crawl-002
refinedweb
1,382
51.99
Enable apksigner for Windows am: cb5e16ea45 Original change: Change-Id: I95928fddc96f3a90e2ae935e9ccd6229801471d1 apksig is a project which aims to simplify APK signing and checking whether APK signatures are expected to verify on Android. apksig supports JAR signing (used by Android since day one) and APK Signature Scheme v2 (supported since Android Nougat, API Level 24). apksig is meant to be used outside of Android devices. The key feature of apksig is that it knows about differences in APK signature verification logic between different versions of the Android platform. apksig thus thoroughly checks whether an APK's signature is expected to verify on all Android platform versions supported by the APK. When signing an APK, apksig chooses the most appropriate cryptographic algorithms based on the Android platform versions supported by the APK being signed. The project consists of two subprojects: apksig library offers three primitives: ApkSignerwhich signs the provided APK so that it verifies on all Android platform versions supported by the APK. The range of platform versions can be customized. ApkVerifierwhich checks whether the provided APK is expected to verify on all Android platform versions supported by the APK. The range of platform versions can be customized. (Default)ApkSignerEnginewhich abstracts away signing APKs from parsing and building APKs. This is useful in optimized APK building pipelines, such as in Android Plugin for Gradle, which need to perform signing while building an APK, instead of after. For simpler use cases where the APK to be signed is available upfront, the ApkSignerabove is easier to use. NOTE: Some public classes of the library are in packages having the word “internal” in their name. These are not public API of the library. Do not use *.internal.* classes directly because these classes may change any time without regard to existing clients outside of apksig and apksigner. apksigner command-line tool offers two operations: apksigner signfor usage information. apksigner verifyfor usage information. The tool determines the range of Android platform versions (API Levels) supported by the APK by inspecting the APK's AndroidManifest.xml. This behavior can be overridden by specifying the range of platform versions on the command-line.
https://android.googlesource.com/platform/tools/apksig/+/e290dbccf564795bd8bf530a5bb2ddb60cc22664
CC-MAIN-2022-33
refinedweb
354
54.73
QtCreator - Compiling Issue - External source files

Blacktempel:
I have a Qt project, accessing another cross-platform (boost) project on my disc. Adding the header includes does not seem to cause any problem:

    #include "../../Visual Studio 2015/Projects/..." // header file down the road

Adding existing source files to the Sources folder in my Qt project also works without a problem; the files are found and I can open them. But these source files are not compiled, as a linker error clearly states (IMO):

    LNK1104: cannot open file 'debug\Error.obj'

Copying the content of the source file (here Error.cpp) into a new file in the Qt project directory and adding that solves the problem, though it would mean copying again each time I pull the other project's changes with git or make changes myself. That defeats the purpose of including external source files. I have already asked this question on Stack Overflow without much luck. One suggestion was to create a ".pri" file and include this file in the Qt project's ".pro" file. The best I got out of this was a change of the linker error from 1104 to 1181, both unable to open the ".obj" file. Has someone experienced this behaviour before, or has a hint towards a possible solution?

JKSH (Moderators):
Hi @Blacktempel, and welcome to the Qt Dev Net! The correct way to link to external libraries is described in the documentation. In a nutshell, you should set the INCLUDEPATH and LIBS variables in your *.pro file.

@Blacktempel said:

    #include "../../Visual Studio 2015/Projects/..." // header file down the road

After you set INCLUDEPATH, you don't need such a long #include line anymore.

Blacktempel:
Thank you for your answer. This is no library; these are just external source files which have to be compiled together with my Qt project. The boost headers and libs are already correctly included. Setting the INCLUDEPATH variable only changes the length of the header include and source file include in the *.pro file. Trying to set the correct path to the boost directory's root and adding the source files with a "short path" results in a failure to find the files at all, which is a little strange, but I was already there (the paths are correct). If I use the "long" path, the files are found, but not compiled.

    INCLUDEPATH += //all other include paths \
        "../../Visual Studio 2015/Projects/ProjectDir/"
    SOURCES += //all other sources \
        Shared/Net/Error.cpp

Using ProjectDir/Net/Error.cpp instead results in "Failure to find: %filenameWithShortPath%".

JKSH (Moderators):
Hi @Blacktempel, I see, I misread your original post.

@Blacktempel said: One suggestion was to create a ".pri" file and include this file in the Qt-project's ".pro" file. The best I got out of this was a change of the linker error from 1104 to 1181, both unable to open the ".obj" file.

What happens if you try this solution again, after moving/copying the external source files to a path that does not contain any spaces?

Blacktempel:
Changing the include in the *.pro file to a path without spaces seemed to work, thank you. As a test I copied the project to C:

    // This apparently works and source files are compiled.
    include(C:/ProjDir/ProjName.pri)
    // This works too, but the source files are not being compiled.
    include("../../Visual Studio 2015/Projects/ProjDir/ProjName.pri")

Is that an existing bug in "include"? Is there currently no way to include external source files where the path has space[s] in it?

JKSH (Moderators):
@Blacktempel said: Is that an existing bug in "include"? Is there currently no way to include external source files where the path has space[s] in it?

I'm not sure; I have never used the include() function with space-containing paths. You can ask the Qt engineers by subscribing to the Interest mailing list and posting there.
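A minimal sketch of the .pri arrangement that ended up working (file names, paths and the EXTERNAL_ROOT variable are placeholders, not taken from the thread): keep the .pri next to the external sources at a path without spaces, let it add those sources through qmake's $$PWD, and include it from the application's .pro. Whether quoting a space-containing path, for example with qmake's $$quote() function, would also make the sources compile was not confirmed in this thread.

    # ProjName.pri -- lives in the external project's directory (path without spaces)
    EXTERNAL_ROOT = $$PWD                  # directory that contains this .pri file
    INCLUDEPATH += $$EXTERNAL_ROOT
    HEADERS += $$EXTERNAL_ROOT/Net/Error.h
    SOURCES += $$EXTERNAL_ROOT/Net/Error.cpp

    # application .pro
    include(C:/ProjDir/ProjName.pri)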
https://forum.qt.io/topic/69108/qtcreator-compiling-issue-external-source-files/1
Interface and using Polymorphism (I think?) -- Game I'm making and need help. Hans Hovan Greenhorn Joined: Mar 03, 2013 Posts: 29 posted Aug 22, 2013 03:01:13 0 This post relates to this one: (I mostly figured out and took the suggestions there but now I have another issue that is a bit different) I've been making a game for fun/practice where the player moves around a 2-d array 'board' and encounters different spaces on that board. I have an interface: interface MapSpaces { Creature doAction(int row, int column); } Various creatures (and events) implement that interface: public class Creature_Goblin extends Creature implements MapSpaces { //Goblin (and other creatures) will have various other properties //specific to them. This is just a test. public Creature_Goblin() { super("Goblin", 7, 2, 4); } public Creature doAction(int row, int column) { System.out.println("A goblin attacks you!"); Creature_Goblin goblin = new Creature_Goblin(); return goblin; } } etc. I then fill my board with the creatures/events: MapSpaces[][]spaces; spaces[0][0] = new Terrain(); spaces[0][1] = new Creature_Goblin(); //etc. Each creature has a doAction method, so depending on what space the player is currently on (this is controlled by other classes/methods) something happens: whatHappens = spaces[row][column].doAction(row, column); I have my 'player' object contained in the same Class (encounter class) as where 'whatHappens = spaces[row][column].doAction(row, column);' takes place. To make the code cleaner I'd like to be able to doAction() and have it effect the player object in the encounter class. However, I'm not sure how to do this. Right now doAction in a particular creature class just returns that creature to the encounter class where combat or talking or whatever happens. This causes the encounter class to be really crammed full of code and is a list kind of like this: if (whatHappens.getClass().equals(terrainType.getClass())) { if (row == 0 && column == 0) { townStore(row, column); } else if (row == 3 && column == 2) { cityStore(row, column); } else if (row == 0 && column == 5) { if (player.equipmentCheck("Rope")) { //stuff } } //etc. } else if (whatHappens.getClass().equals(Creature_Goblin.getClass())) { //stuff } //etc. Is there a way to do something to the player object from within one of the creature classes? There is no player object present in the creature classes so I can do something like encounter.player.giveRope(); from there. Since I return a creature object when I doAction() I thought I might be able to get information about that object using an accessor. So, for example, if doAction(int row, int column) is in a creature class and row = something and column = something I set some attribute of that creature to lets say '1' whereas by default the vaule is 0. Then, in the encounter class I could do something like (remember whatHappens is the creature type) int thing = whatHappens.getAttribute(). Then use thing to go on and do stuff using other methods without having such a laundry list of if/else if statements. But I couldn't seem to get this to work? I was only able to whatHappens.getClass() and break it up that way. It seemed only attributes of the Super Class would be returned, not attributes of the specific Sub Class . I hope my question makes sense...I tried not to leave out too many details while still making is mildly succinct. Still new to Java and learning my way around, trying to get these polymorphism ideas down. I appreciate any help or let me know if more details are needed. 
TLDR: I have Class A. Class B holds an object of Class A. I have Classes that implement an interface. There is an array of interface objects in Class B. Can I (within the classes that implement the interface) do things to Class A? Piet Souris Ranch Hand Joined: Mar 08, 2009 Posts: 912 11 posted Aug 22, 2013 03:53:26 1 hi Hans, Most important thing to start with: do you know exactly WHAT your game is about? For instance, if your player lands on some space, and there is a Creature there, do you know exaclt what should happen? In all situations? Does what happens depend on say the state of the player, the specific Creature that is on that space, and does the action of that Creature depend on what space it is? Just a few questions... Maybe it's best to give you some examples of what is possible. Chances are your game will be completely different, but I hope you get some ideas. Suppose that we have a player, like: class Player ( double health, money; boolean isAlive; public Player() { health = 1; money = 1; isAlive = true; } public setHealth(double health) { this.health = health; if (this.health > 1) this.health = 1; else if (this.health <= 0) isAlive = false; } // et cetera } Now suppose we have the Creature interface, with this doAction. Let's create a Creature_Goblin: class Creature_Goblin implements Creature ( String name = "I'm a Goblin" public void doAction() { System.out.println(name + " Starting a fight"); if (Math.random() < 0.2) { System.out.println("And I won, damaging the health of the player"); player.setHealth(player.health - 0.1); } else { System.out.println("I lost, player is unharmed;); } } } and we have the class Bomb: class Bomb implements Creature ( public void doAction() { if (Math.random() < 0.1) ( System.out.println("Bomb exploded, player has deceased") player.isAlive = false; } else { // do something else ) } } Hope tihs helps. Hans Hovan Greenhorn Joined: Mar 03, 2013 Posts: 29 posted Aug 25, 2013 16:32:57 0 Thanks for the post Piet, Yeah I know what my game is about and what is supposed to happen on each space in all situations. I do have it set up a similar way to the examples you gave. My big problem is how I can have my creature interface do something to the player. For example, under your Creature_Goblin you have 'player.setHealth(player.health - 0.1);' but how can I do anything to the player object this way since there is no player object here? I'd get a compiler error that the variable player symbol cannot be found. As it is right now I have have an 'encounter' class that holds the player object as well as the array of creature interface spaces. I can do things to the player from the encounter class but is there a way to do things to the player from the creature classes, like player.setHealth(player.health - 0.1);. Thanks for the help. Piet Souris Ranch Hand Joined: Mar 08, 2009 Posts: 912 11 posted Aug 26, 2013 02:25:24 1 hi Hans, the easiest thing you could do is to incorporate a Player variable in the interface's doAction, like in: interface Creature { public void doAction(Player player); } This would have the advantage that it allows you to have more than one player acting. If you put your classes in the same package, there should be no problem. Another solution could be to define your Creatures as class Creature_XXX implements Creature { // there's an underscore in the class name, but it doesn't show up! Player player; public Creature_XXX(Player p) { player = p; } public void doAction() {...} } But probably many other solutions are possible. 
If the above isn't suitable, could you then give me an overview of your classes, what innerclasses they contain and whether they are in the same package? Just a description would do. Greetz, Piet Hans Hovan Greenhorn Joined: Mar 03, 2013 Posts: 29 posted Aug 26, 2013 16:22:51 0 Yes all of the classes are in the same package (assuming I correctly understand what a package is -- they are all in the same folder). A quick breakdown: The Play class contains the main method and handles the player's movement around the board (spaces). The play class contains an Encounter object and the space that the player is on is passed into the encounter object's room method (think of room as meaning a space on the board): Encounter encounter = new Encounter(); encounter.room(row, column); //row and column are controlled using other classes/methods In the encounter class is an instance of PlayerCharacter. The PlayerCharacter class has a bunch of attributes associated with it like health, damage, defense, etc. The encounter class also has the method room that takes in the row and column arguments and uses them to determine what space is encountered: public Encounter() { player = new PlayerCharacter("", 30, 30, 3, 1, 50); spaces = new MapSpaces[6][6]; } public int room(int row, int column) { Creature encounter; encounter = spaces[row][column].doAction(row, column); //The type of creature is returned and then depending on what type something happens. } As seen in my earlier post, this then creates a long if/else if chain in the room method if I want to do something specific to the player like say give the player a weapon that makes them do more damage if they are on row 3 and column 2. What I want to do is give the player a weapon FROM the .doAction method of one of the creature types. So, in the Creature_Goblin class I'd have: if (conditions are met -- like player wins the battle!) player.giveWeapon() or something like that. But Creature_Goblin has no access to the PlayerCharacter object that is in the encounter class. The PlayerCharacter has to be constant, I cannot just create new PlayerCharacter objects (or maybe I can and I just don't know what I'm doing?) because I have to do things like add or subtract health which doesn't make sense if my health keeps getting set to the constructor default. Maybe I'm just missing something conceptually. In your example of incorporating a PlayerCharacter variable, say if I had spaces[row][column].doAction(row, column, player) and in the interface had doAction(int row, int column, PlayerCharacter player), if I did something like player.giveWeapon() would it give the PlayerCharacter object stored back in the encounter class a sword (so in the encounter class I could have player.hasSword == true) or only give a sword to the player existing within the interface doAction? I hope all of that makes sense. Thanks for sticking with me. I'm still fairly novice and just trying to grasp how it all works. Thanks again for your help. Piet Souris Ranch Hand Joined: Mar 08, 2009 Posts: 912 11 posted Aug 27, 2013 14:16:27 1 hi Hans, I was asking about the structure, because I wanted to know the 'visability' of your Player class (or 'PlayerCharacter' as you call it). The easiest way to let all of your classes know about the existence of the 'PlayerCharacter' class is to put that class in a separate file inside your package, that is indeed the folder in which you have all your classes. In that case you can define the interface 'Creature' safely with a reference to a 'PlayerCharacter'. 
I understand this: you have a 'Play' class, and an 'Encounter' class, and by the look of it, it is an instance of the 'Encounter' class that does the actual game: public Encounter() { player = new PlayerCharacter("", 30, 30, 3, 1, 50); spaces = new MapSpaces[6][6]; } and where 'spaces' is the actual board in use. Please correct me if I'm wrong. My idea was to define the board as: Creature[][] spaces = new Creature[6][6]; And then populate this array in your initializer method, something like: for (int row = 0; row < 6; row++) { for (int column = 0; column < 6; column++) { spaces[row][column] = new Creature_XXX(); } } And what specific Creature this _XXX would be, well, that's up to you to decide that, maybe based on some formula that picks a specific Creature at random from the available Creatures that you have defined. Now, the whole idea is that if your player lands on some space, say spaces[2, 3], that this space, being some Creature, knows what to do! You do not need to know what specific Creature that might be, the Creature itself knows it, and that's enough! How does this work? Because each Creature, be it 'Creature_XXX' or 'Creature_YYY', has a method 'doAction(PlayerCharacter player) {...}' that exactly defines what should happen to the player in this method. So, suppose we have two players, 'Hans' and 'Piet', playing your game in a turn based fashion (yes, two players!). It's Hans'turn and he lands on spaces[2, 3]. Since spaces[2][3] is a Creature, it knows what to do, it has a method 'doAction', and so we call: spaces[2, 3].doAction(Hans); and since Hans is lucky, he gets a sword. You, see, there is no need to retrieve what kind of creature is located on spaces[2, 3]. Now it's Piets turn. The dice are cast, and he lands on spaces[4,5]. Here we go again: spaces[2, 3].doAction(Piet); You see, no long if's and else's. Now look at your lines 09 and 10 of the 'Encouter' class. Creature encounter; encounter = spaces[row][column].doAction(row, column); The right hand part is okay, you call the 'doAction' methode of whatever Creature populates that space. But there is no need to retrieve that Creature. Now, with this in mind, a possible set up of the game might be: public class Game_Of_Hans { public void main(String[] args) { PlayerCharacter hans = new PlayerCharacter("Hans", 10, 20, ...); PlayerCharacter pans = new PlayerCharacter("Piet", 5, Math.random(20), ...); Play game = new Play(hans, Piet); game.start(); } } public class Play() { PlayerCharacter firstPlayer, secondPlayer; public Play(PlayerCharacter first, PlayerCharacter second) { fiirstPlayer = first; // et cetera encounter = new Encounter(); encounter.fillSpacesWithCreatures(); dice = new PairOfDice(); } public start() { .... } } class PlayerCharacter { String name; boolean hasSword; double moneyInPocket; public PlayerCharacter(String name, double startAmount, ...) } Interface Creature { public void doAction(PlayerCharacter player); } class Creature_Goblin implements Creature ( } class Encounter { Creature[][] spaces; } // et cetera As said, this is just an example of what is possible. But I hope that the idea of using 'interfaced' Creatures is clear, and how you could act the 'doAction' on this player. Greetings, Piet Hans Hovan Greenhorn Joined: Mar 03, 2013 Posts: 29 posted Sep 02, 2013 17:04:24 0 Piet, thank you so much for all of your responses and examples. It has finally all clicked and that was extremely useful as far as my understanding of what is possible and how everything can work together. Again, thank you! 
Piet Souris Ranch Hand Joined: Mar 08, 2009 Posts: 912 posted Sep 07, 2013 14:10:30
You're welcome, Hans. Please show us your game when you're ready!
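A compilable sketch of the approach discussed in this thread: the board holds Creature references, and each creature acts directly on the PlayerCharacter passed into doAction. Class names follow the thread; the fields and the Treasure space are made up for the example.

    interface Creature {
        void doAction(PlayerCharacter player);
    }

    class PlayerCharacter {
        String name;
        int health = 30;
        boolean hasSword = false;
        PlayerCharacter(String name) { this.name = name; }
    }

    class Creature_Goblin implements Creature {
        public void doAction(PlayerCharacter player) {
            System.out.println("A goblin attacks " + player.name + "!");
            player.health -= 2;              // modifies the same object Encounter holds
        }
    }

    class Treasure implements Creature {
        public void doAction(PlayerCharacter player) {
            player.hasSword = true;          // reward the player, no if/else chain needed
        }
    }

    public class EncounterDemo {
        public static void main(String[] args) {
            PlayerCharacter hans = new PlayerCharacter("Hans");
            Creature[][] spaces = new Creature[2][2];
            spaces[0][0] = new Creature_Goblin();
            spaces[0][1] = new Treasure();

            spaces[0][0].doAction(hans);     // dynamic dispatch picks the goblin's behaviour
            spaces[0][1].doAction(hans);
            System.out.println(hans.health + " " + hans.hasSword);  // prints: 28 true
        }
    }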
http://www.coderanch.com/t/618464/java/java/Interface-Polymorphism-Game-making
Hi, I'm trying to make a simple C++ program in which the user must try to guess a number; if they guess too high it says "too high" and if they guess too low it says "too low". I also decided to add a feature which allows them to select how many tries they would like to have to guess the number. I tried to make "tries" an enum type so that the user could not pick an invalid number, but for some reason I cannot use it in an if statement. Here is the code; I am getting the first error on line 27. Could someone please explain to me what I am doing wrong here?

    #include <iostream>
    #include <ctime>
    #include <cstdlib>

    using namespace std;

    int guess;
    int i;
    int endProgram = 0;
    int win = 0;
    int j = 0;

    enum tries
    {
        five = 5,
        ten = 10,
        fifthteen = 15,
        twenty = 20
    };

    int getValue()
    {
        srand(time(NULL));
        i = rand() % 100 + 1;
    }

    int finish()
    {
        if (guess == i || j == tries)
        {
            endProgram = 1;
        }
        else if ( guess > (i + 10) && guess < (i + 50) && guess > (i - 50))
        {
            cout <<"\n\n\n\nTOO HIGH\n\n\n\n";
        }
        else if (guess < (i - 10) && guess < (i + 50) && guess > (i - 50))
        {
            cout <<"\n\n\n\nTOO LOW\n\n\n\n";
        }
        else if (guess >= (i - 10) && guess < i)
        {
            cout <<"\n\n\n\nA LITTLE TOO LOW\n\n\n\n";
        }
        else if (guess >= (i + 10) && guess > i)
        {
            cout <<"\n\n\n\nA LITTLE TOO HIGH\n\n\n\n";
        }
        else if (guess >= (i + 50))
        {
            cout <<"\n\n\n\nWAY TOO HIGH\n\n\n\n";
        }
        else if (guess <= (i - 50))
        {
            cout <<"\n\n\n\nWAY TOO LOW\n\n\n\n";
        }
    }

    int main()
    {
        char retry;
        getValue();
        cout <<"HOW MANY ATTEMPTS WOULD YOU LIKE (5, 10, 15, 20): ";
        cin >>tries;
        cout <<"\n\n\n\n";
        while (endProgram != 1)
        {
            cout <<"TAKE A GUESS: ";
            cin >>guess;
            finish();
            j++;
        }
        if (guess == i)
        {
            cout <<"CONGRATULATIONS YOU GUESSED THE NUMBER IN " <<j <<"/" <<tries <<" TRIES!";
        }
        else
        {
            cout <<"UNLUCKY, YOU FAILED TO GUESS THE NUMBER"
        }
        cout <<"\n\nPLAY AGAIN? (y/n)";
        cin >>retry;
        if (retry == 'y')
        {
            return main();
        }
    }

Thanks
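One way to restructure the attempts handling, sketched below: an enum type name such as tries cannot be read with cin or compared against an int variable, so read a plain int and validate it against the named constants. Everything else about the game is left out; this only shows the enum usage.

    #include <iostream>

    enum Tries { FIVE = 5, TEN = 10, FIFTEEN = 15, TWENTY = 20 };

    int main()
    {
        int attempts = 0;
        while (attempts != FIVE && attempts != TEN &&
               attempts != FIFTEEN && attempts != TWENTY)
        {
            std::cout << "HOW MANY ATTEMPTS WOULD YOU LIKE (5, 10, 15, 20): ";
            std::cin >> attempts;            // read an int; the enum only supplies named limits
        }

        for (int j = 0; j < attempts; ++j)   // compare against the value, not the type
        {
            // ... read a guess and compare it with the secret number ...
        }
        return 0;
    }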
https://cboard.cprogramming.com/cplusplus-programming/161348-getting-error-using-enum-if-statement-post1193037.html?s=cf504186a096d21bac6890ab19f71938
Enums in Java group constants of the same kind into a single unit. These constants are static and final. Enums were added as a language feature in JDK 1.5. Let's get a clear idea of enums from the examples below.

    class WithoutEnums {
        public static final String north = "NORTH";
        public static final String south = "SOUTH";
        public static final String east = "EAST";
        public static final String west = "WEST";
    }

    public class MainClass {
        public static void main(String[] args) {
            System.out.println(WithoutEnums.north);
            System.out.println(WithoutEnums.south);
            System.out.println(WithoutEnums.east);
            System.out.println(WithoutEnums.west);
        }
    }

The example above does not use enums at all. Let's see how to declare the same constants using an enum.

    enum POLES {
        NORTH, SOUTH, EAST, WEST;
    }

    public class EnumsExample {
        public static void main(String[] args) {
            POLES d1 = POLES.EAST;
            System.out.println(d1);
            POLES d2 = POLES.NORTH;
            System.out.println(d2);
            System.out.println(POLES.SOUTH);
            System.out.println(POLES.WEST);
        }
    }

Here we do not need to create a reference to access a constant; we can access it directly through the enum class. Below are a few points to remember while using enums in Java.

- Enums in Java are declared with the enum keyword, and it is good practice to declare the constants with UPPERCASE letters. Look at the sample example below:

    enum Foods {
        CHIPS, FRIES, BREAKFAST;
    }

- Duplicates are not allowed among enum constants; a duplicate would cause a compile-time error.
- By default, each constant declared in an enum is public static final.
- As every constant is static, it can be accessed directly using the enum name.
- The semicolon at the end of the constant list is not mandatory. Example:

    enum enums {
        A, B, C, D // semicolon at the end of this statement is not mandatory
    }

- An enum can have an abstract method declared in its body, provided each enum constant declared in the class implements it.

    enum enums {
        ONE {
            @Override
            void abstractMethod() {
                // abstract method implemented
            }
        },
        TWO {
            @Override
            void abstractMethod() {
                // abstract method implemented
            }
        },
        THREE {
            @Override
            void abstractMethod() {
                // abstract method implemented
            }
        };

        abstract void abstractMethod();
    }
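A further illustration, not part of the article above: enum constants can also carry fields and a constructor, and the built-in values() and valueOf() methods work on every enum. The planet figures are only sample data.

    enum Planet {
        MERCURY(3.30e23), EARTH(5.97e24);

        private final double mass;            // each constant stores its own value
        Planet(double mass) { this.mass = mass; }
        double getMass() { return mass; }
    }

    public class PlanetDemo {
        public static void main(String[] args) {
            for (Planet p : Planet.values()) {
                System.out.println(p + " has mass " + p.getMass() + " kg");
            }
            System.out.println(Planet.valueOf("EARTH"));   // look a constant up by name
        }
    }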
https://codingsmania.in/enums-in-java/
Hello, I'm trying to get this package compiled on my machine; it states the requirement "RTI Connext DDS >= 5.1.0 (Source install to /opt)". I've managed to get RTI installed on my computer, but then I ran into this bug. It seems like the problem is this package. I'm not too sure about the RTI library, but I've been attempting to use the IDL file to recreate the .cxx and .h files in the package by running `rtiddsgen DDSImage.idl -replace` and then moving the files to the correct folders. After doing that, the px-ros-pkg will build, but the vmav-ros-pkg will still not build, and the output looks strange in git diff. For example, the namespace was removed and there is a "#using <new>". Since I'm not familiar with this library, I am not sure how to update the package so that it can still be used. Would anyone be able to take a look at the output of the rtiddsgen command to see if there is anything obvious that I'm missing? Thanks in advance.

Reply:
Just a guess here, but -- can you determine what the original command-line options for rtiddsgen were when this was built for v5.1.0? I suspect that one or more options should be included when using rtiddsgen to generate the code, such as "-namespace", but there may be others needed. Try regenerating the code with this option, and see if it gets you closer to the original configuration. The command-line options for the 5.3.1 release are in the current rtiddsgen user's manual, and the ones for 5.1.0 are in the 5.1.0 Core Libs & Utils user's manual.
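A hypothetical re-run of the generator with the option suggested in the reply; the IDL file name comes from the question, and the exact flag spelling should be checked against the rtiddsgen user's manual for the installed version:

    rtiddsgen -language C++ -namespace -replace DDSImage.idl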
https://community.rti.com/forum-topic/updating-510-531
Groovy's 'as' operator can be used with closures in a neat way which is great for developer testing in simple scenarios. We haven't found this technique to be so powerful that we want to do away with dynamic mocking, but it can be very useful in simple cases none-the-less.

Suppose we are using Interface Oriented Design and, as sometimes advocated, we have defined a number of short interfaces as per below. (Note: we ignore the discussion about whether interfaces are as valuable a design approach when using dynamic languages that support duck-typing.)

    interface Logger {
        def log(message)
    }
    interface Helper {
        def doSomething(param)
    }
    interface Factory {
        Helper getInstance()
    }

Now, using a coding style typically used with dependency injection (as you might use with Spring), we might code up an application class as follows:

    class MyApp {
        private factory
        private logger
        MyApp(Factory factory, Logger logger) {
            this.logger = logger
            this.factory = factory
        }
        def doMyLogic(param) {
            factory.getInstance().doSomething(param)
            logger.log('Something done with: ' + param)
        }
    }

To test this, we could use Groovy's built-in mocking or some other Java-based dynamic mocking framework. Alternatively, we could write our own static mocks. But no one does that these days, I hear you say! Well, they never had the ease of using closures, which bring dynamic power to static mocks, so here we go:

    def param = 'DUMMY STRING'
    def logger = { message -> assert message == 'Something done with: ' + param}
    def helper = { assert it == param }
    def factory = { helper as Helper }
    def myApp = new MyApp(factory as Factory, logger as Logger)
    myApp.doMyLogic(param)

That was easy. Behind the scenes, Groovy creates a proxy object for us that implements the interface and is backed by the closure.

Easy, yes; however, the technique as described above assumes our interfaces all have one method. What about more complex examples? Well, the 'as' method works with Maps of closures too. Suppose our helper interface was defined as follows:

    interface Helper {
        def doSomething(param)
        def doSomethingElse(param)
    }

And our application modified to use both methods:

    ...
    def doMyLogic(param) {
        def helper = factory.getInstance()
        helper.doSomething(param)
        helper.doSomethingElse(param)
        logger.log('Something done with: ' + param)
    }
    ...

We simply use a map of closures with the key used being the same name as the methods of the interface, like so:

    ...
    def helperMethod = { assert it == param }
    def helper = [doSomething:helperMethod, doSomethingElse:helperMethod]
    // as before
    def factory = { helper as Helper }
    ...

Still easy!

For this simple example, where we wanted each method to be the same (i.e. implementing the same code) we could have done away with the map altogether, e.g. the following would work, making each method be backed by the closure:

    def factory = { helperMethod as Helper }
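A small extension of the page's example, assumed rather than taken from the original: when the two interface methods need different behaviour, each key in the map can point to its own closure. This reuses the Helper, Logger, Factory and MyApp definitions from above.

    def param = 'DUMMY STRING'
    def helper = [
        doSomething    : { assert it == param },
        doSomethingElse: { println "doSomethingElse called with $it" }
    ] as Helper
    def logger  = { message -> assert message == 'Something done with: ' + param } as Logger
    def factory = { helper } as Factory

    new MyApp(factory, logger).doMyLogic(param)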
http://docs.codehaus.org/exportword?pageId=66905
Provided by: liblfc-dev_1.8.10-1build3_amd64

NAME
    lfc_chdir - change LFC current directory used by the name server

SYNOPSIS
    #include <sys/types.h>
    #include "lfc_api.h"

    int lfc_chdir (const char *path)

DESCRIPTION
    lfc_chdir changes the LFC current directory used by the name server to expand LFC pathnames not beginning with /. This current working directory is stored in a thread-safe variable in the client.

    path specifies the logical pathname relative to the current LFC directory or the full LFC pathname.

RETURN VALUE
    This routine returns 0 if the operation was successful or -1 if the operation failed. In the latter case, serrno is set appropriately.

ERRORS

AUTHOR
    LCG Grid Deployment Team
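A minimal usage sketch, not part of the manual page: the LFC path is only an example, and the error reporting assumes the sstrerror()/serrno helpers shipped with the LFC client headers.

    #include <stdio.h>
    #include <sys/types.h>
    #include "lfc_api.h"
    #include "serrno.h"

    int main(void)
    {
        if (lfc_chdir("/grid/example.org/home") < 0) {
            fprintf(stderr, "lfc_chdir failed: %s\n", sstrerror(serrno));
            return 1;
        }
        /* subsequent relative LFC pathnames are now expanded against this directory */
        return 0;
    }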
http://manpages.ubuntu.com/manpages/xenial/man3/lfc_chdir.3.html
OK. So, even if a new query results in the individual object cache being flushed, we are still avoiding DB hits because we have the query result cache in place. Does the query result cache get individually flushed, or is the only way a group flush [1]?

Actually, I want to cache User objects (or query results) forever. A User object is retrieved when a user logs in, using an expression like "user_name=$name and password=$pwd". I want to make sure that whenever a user updates his information, the object and query result for that particular user get flushed, not all the users that were retrieved using the "user_name=$name and password=$pwd" expression. Please suggest.

[1]

Regards,
Nishant

--- On Thu, 28/1/10, Andrus Adamchik <[email protected]> wrote:

From: Andrus Adamchik <[email protected]>
Subject: Re: Individual Object Caching
To: [email protected]
Date: Thursday, 28 January, 2010, 3:36 PM

Yes, a query refreshes cached objects (unless the query itself is served from cache).

Andrus

On Jan 28, 2010, at 11:59 AM, Nishant Neeraj wrote:

> Hi,
>
> I am a bit confused about the way Cayenne handles the object cache. It sounds to me that the object cache gets invalidated [1] whenever a call like this takes place:
>
> loadMenu(){...
>     SelectQuery query = new SelectQuery(PaintingType.class, ...);
>     return (List<PaintingType>)context.performQuery(query);
> }
>
> Now, there are multiple users accessing the same page that calls loadMenu(). As per the document [1], for each user the object cache for this would be flushed and restored.
>
> [1]
>
> Regards,
> Nishant
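A sketch of what a per-user cache group could look like; this is an assumption about the Cayenne 3.x query-cache API (method names such as setCacheStrategy, setCacheGroups and removeGroup may differ between Cayenne versions) rather than something stated in the thread.

    // tag the login query with a cache group specific to this user
    SelectQuery query = new SelectQuery(User.class,
            ExpressionFactory.matchExp("userName", name));
    query.setCacheStrategy(QueryCacheStrategy.SHARED_CACHE);
    query.setCacheGroups("user-" + name);          // one group per user
    List users = context.performQuery(query);

    // later, when that particular user updates his information, drop only his group
    context.getParentDataDomain().getQueryCache().removeGroup("user-" + name);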
http://mail-archives.apache.org/mod_mbox/cayenne-user/201001.mbox/%[email protected]%3E
#20020 closed Bug (invalid) -- ValueError for FileField if the file is closed in clean method
Opened 16 months ago. Closed 16 months ago. Last modified 16 months ago.

Description

Hi,

Say you have a Model with a FileField, and you need to override clean to do some validation on the file before it is saved. If the file is closed within clean and the file is over 2.5MB (or larger than FILE_UPLOAD_MAX_MEMORY_SIZE), a "ValueError: I/O operation on closed file" will result. If you attempt to re-open the file in clean to avoid this error, a different ValueError will result stating "The file cannot be reopened." This is particularly a problem if the verification of a file uses some other library that may open and close the file. I'm using Django 1.4.3, and have seen the issue under the Django web development server as well as under Apache w/mod_wsgi. Here's an example models.py to illustrate the issue:

    from string import join
    from django.db import models
    from FileUploadExample.settings import MEDIA_ROOT

    def some_file_operation(file):
        file.open()
        file.close()

    def upload_file_path(instance, filename):
        upload_dir = join([MEDIA_ROOT, str(filename)], '')
        return upload_dir

    class UploadedFile(models.Model):
        uploaded_file = models.FileField(upload_to=upload_file_path)

        def clean(self):
            # some call to somewhere to validate the file
            some_file_operation(self.uploaded_file)

Change History

Changed 16 months ago by whitews@…
Keywords FileField ValueError added; Type changed from Uncategorized to Bug.

comment:2 Changed 16 months ago by aaugustin
Resolution set to invalid; Status changed from new to closed.
TemporaryUploadedFile relies on Python's tempfile.NamedTemporaryFile to make sure the temporary file will be eventually deleted; in fact, it's deleted as soon as it's closed. That's why you cannot reopen it, and that isn't going to change -- unless you come up with another way to guarantee that the temporary file gets deleted eventually. At least on Unix it seems that the file can be re-opened by name.

comment:3 Changed 16 months ago by whitews@…
I'm not quite following; if it is deleted, how can the file be re-opened? This is the issue I am seeing: once the file is closed there is no way to proceed to save the model instance.

comment:4 Changed 16 months ago by aaugustin
Sorry I wasn't clear -- you can re-open it by name as long as you haven't closed the original. Use self.uploaded_file.temporary_file_path.

comment:5 Changed 16 months ago by whitews@…
Got it, this works. I just create another file handle to give to a routine that closes files. Thanks
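A sketch of the workaround that comments 4 and 5 above point at (hypothetical structure, building on the example models.py from the ticket): give the external routine its own handle, opened by name from the temporary upload, so the original handle is never the one that gets closed.

    from django.core.files import File

    def clean(self):
        upload = self.uploaded_file
        if hasattr(upload.file, 'temporary_file_path'):
            # large upload spooled to disk: validate a separate handle opened by name
            validation_copy = File(open(upload.file.temporary_file_path(), 'rb'))
            try:
                some_file_operation(validation_copy)
            finally:
                validation_copy.close()
        else:
            # small upload held in memory; closing it did not break saving in the report above
            some_file_operation(upload)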
https://code.djangoproject.com/ticket/20020
. This document was written with the intention of providing a precise semantic theory for RDF and RDFS, and to sharpen the notions of consequence and inference. It reflects the current understanding of the RDF Core working group at the time of writing. In some particulars this differs from the account given in Resource Description Framework (RDF) Model and Syntax Specification, and these exceptions are noted. This work is part of the W3C Semantic Web Activity. It has been produced mailto:[email protected], a mailing list with public archive). The editors and the Working Group plan to address feedback in future revisons of this document. 0. Introduction 0.1 Model-theoretic semantics 0.2 Graph Syntax 0.3 Definitions 1. Interpretations 1.1 Technical notes 1.2 Urirefs, resources and literals 1.3 Interpretations 1.4 Denotations of ground graphs 1.5 Unlabeled nodes as existential assertions 1.6 Comparison with formal logic 2. Simple Entailment C. Acknowledgements References Change Log.Model theory is usually most relevant to implementation via the notion of entailment, described later, and by making it possible to define valid inference rules. This document describes a model theory for RDF(S) which treats the language as simple assertional language, in which each triple makes a distinct assertion and the meaning of any triple is not changed by adding other triples. This imposes a fairly strict monotonic discipline on the language, so that it cannot express closed world assumptions, local default preferences, and several other commonly-used non-monotonic constructs. There are several aspects of meaning in RDF which are ignored by this semantics; in particular, it treats URIs as simple names, ignoring aspects of meaning encoded in particular URI forms [RFC 2396] and does not provide any analysis of time-varying data or of changes to URI denotations. Any semantic theory must be attached to a syntax. Of the several syntactic forms for RDF, we have chosen the RDF graph as introduced in [RDFMS] as the primary syntax, largely for its simplicity. We understand linear RDF notations such as N-Triples and rdf/xml [RDF/XML] as lexical notations for specifying RDF graphs. (There are well-formed graphs that cannot be described by these notations, however.) Two RDF documents, in whatever lexical form, are syntactically equivalent if and only if they map to the same RDF graph. The model theory assigns interpretations directly to the graph; we will refer to this as the 'graph syntax' to avoid ambiguity, since the bare term 'syntax' is often assumed to refer to a lexicalization..) The convention that relates such a set of triples to a picture of an RDF graph can then be stated as follows. Draw one oval for each blank node and uriref, and one rectangle for each literal, which occur in either the S or O position in any triple in the set, and write each uriref or literal as the label of its shape. Then for each triple <S,P,O>, draw an arrowed line from the shape produced from S to the shape produced from O, and label it with. Other RDF serializations may use other means of indicating the graph structure; for our purposes, the important syntactic property of RDF graphs is that each distinct item in an RDF graph is treated as a distinct referring entity in the graph syntax. Several definitions will be important in what follows. A subgraph of an RDF graph is simply a subset of the triples in the graph. Notice that each triple in a graph is considered to be a subgraph. 
The result of taking the set-union of two or more RDF graphs (i.e. sets of triples) is another graph, which we will call the merge of the graphs.; and arcs are of course never merged.In particular, this means that every blank node in a merged graph can be identified as coming from one particular graph in the original set of graphs.. An RDF graph will be said to be ground if it has no blank nodes. or blank nodes for blank nodes in the original; and that a graph is an instance of another just when every triple in the first graph is an instance of a triple in the second graph, and every triple in the second graph has an instance in the first graph. Note that any graph is an instance of itself. This allows blank nodes in the second graph to be replaced by drawn from V. Throughout this document, the fact that two sets are given different names should not be taken.. Notice that the question of whether or not a class contains itself as a member is quite different from the question of whether or not it is a subclass of itself. RDF uses two kinds of referring expression; urirefs and literals. We make very simple and basic assumptions about these. Urirefs are treated as logical constants, i.e. as names which denote. An interpretation assigns meanings to symbols in a particular vocabulary of urirefs. Some interpretations may assign special meanings to the symbols in a particular namespace, which we will call a reserved vocabulary.We will use two reserved vocabularies in this document, defined using the Qname syntax with the prefixes rdf: and rdfs: as follows: Interpretations which share the special meaning of a particular reserved vocabulary will be named for that vocabulary, so that we will speak of 'rdf-interpretations' and 'rdfs-interpretations', etc.. An interpretation with no reserved vocabulary will be called a simple interpretation, or simply an interpretation.. Together with our assumed fixed interpretation of literals, this is just enough to fix the truth-value of any ground triple, and hence any ground RDF graph.(We will show how to determine the truthvalues in the world being specified. Asserting an RDF graph amounts to claiming that it is true, which is another way of saying that the world it describes is, in fact, so arranged as to be an interpretation which makes it true. In other words, asserting a piece of RDF amounts to asserting a constraint on the possible ways the world might be. Notice that there is no presumption here that any RDF graph contains enough information to specify a single unique interpretation. It is very difficult, and usually impossible, to assert enough in any language to completely constrain the interpretations to a single possible world, so there is no such thing as 'the' unique RDF interpretation. In general, increasing the size of a graph - saying more about the world - decreases the set of interpretations that an assertion of the graph allows to be true - the number of ways the world could be, while making the asserted graph true of it. The use of 'public' URIs in an RDF graph is often taken to imply that an assertion of the graph implicitly assents to the truth of other RDF graphs that define the meaning of that URI. To apply the model theory to this kind of situation, one should think of a fully adequate account of what it means to make an assertion in a Web context is a research problem that is beyond the scope of this document. 
All interpretations will be relative to a set of urirefs, called the vocabulary of the interpretation; so that one has to speak, strictly, of an interpretation of an RDF vocabulary, rather than of RDF itself. A simple interpretation I of a vocabulary V is defined. Note that no particular relationship is assumed between IR and LV; an interpretation is allowed to treat them as disjoint or to include all or part of LV in IR. It is convenient to define IP to be the subset of IR with a nonempty extension. Intuitively, IP is the set of properties.: node labels denote things, and sets of triples denote truthvalues.. Notice that denote something in the world. Similarly, the first rule implies that the universe must contain every literal in the graph. As an illustrative example, the following is a small interpretation for the artificial vocabulary { ex:a, ex:b, ex:c}. We use integers to indicate the 'things' in the universe. This is not meant to imply that RDF interpretations should be interpreted as being about arithmetic, but more to emphasize that the exact nature of the things in the universe is irrelevant. IR= {1, 2}; IP = {1} IEXT: 1->{<1,2>,<2,1>}. To emphasize; this is only one possible interpretation of this vocabulary; there are (infinitely) many others. For example, if we modified this interpretation by attaching the property extension to 2 instead of 1, none of the above six triples would be true. Blank nodes are treated as simply indicating the existence of a thing, without. The discussion of skolemization in section 2.1 is also relevant.) We now show how an interpretation can specify the truth-value of a graph containing blank nodes. This will require some definitions, as the theory so far provides no meaning for unlabeled nodes. Suppose I is an interpretation and A is a mapping from some set of unlabeled nodes to the universe IR: Notice that we have not changed the definition of an interpretation; it still consists of the same values IR, IEXT and IS. We have simply extended the rules for defining denotations under an interpretation, so that the same interpretation that provides a truth-value for ground graphs also assigns truth-values to graphs with unlabeled nodes, even though it provides no denotation for the unlabeled nodes themselves. Notice also that the unlabeled nodes themselves are perfectly well-defined entities with a robust notion of identity;. Notice however that that since two unlabeled nodes cannot have the same label,-triple document corresponding to the graph.) For example, with this convention, the graph defined by the following triples is false in the interpretation shown in figure 1: _:xxx <ex:a> <ex:b> . <ex:c> <ex:b> _:xxx . since if A' maps the unlabeled node to 1 then the first triple is false in [I+A'], and if it maps it to 2 then the second triple is false. Note that each of these triples, when taken as a single graph, is true in I, but their conjunction is not; and that if a different node ID were used in the two triples, indicating that the RDF graph had two blank nodes instead of one, then A' could map one node to 2 and the other to 1, and the resulting graph would be true under the interpretation I., and.This requires us to introduce bound variables to correspond to the blank nodes of the graph, similarly to the use of node identifiers in the N-Triples syntax. 
For example, the graph defined in the above example translates to the logical expression (written in the extended KIF syntax defined in [Hayes&Menzel]) (exists (?y)(and (ex:a ?y ex:b)(ex:b ex:c ?y))) This translation maps the model theory exactly. Notice however that the resulting expression may contain the same symbol in both relation and object positions (e.g. ' ex:b' in this example), which is considered syntactically illegal in many versions of logic.To map to a more conventional logical syntax one can use a 'dummy' ternary relation symbol to assert that a binary relation holds between two arguments. For example,[Fikes&McGuinness] translate the RDF triple s p o into the KIF expression (PropertyValue p s o) The above example would then map to (exists (?y)(and (PropertyValue ex:a ?y ex:b)(PropertyValue ex:b ex:c ?y))) Under this translation, to obtain the appropriate KIF interpretation one has to interpret (PropertyValue x y z) to mean ((IEXT x) y z). Following conventional terminology, we say that I satisfies E if I(E)=true, and that a set S of expressions (simply) entails E if every interpretation which satisfies every member of S also satisfies E. If the singleton set {E} entails E' then we will simply say that E entails E'. In later sections these notions will be adapted to classes of interpretations with particular reserved vocabularies, but throughout this section entailment should be interpreted as simple RDF graph E from some other graphs proof step in RDF are, in logical terms, the inference from (P and Q) to P, and the inference from (foo baz) to (exists (?x) (foo ?x)). Note, these results apply only to simple entailment, not to the more subtle notions of entailment introduced in later sections. Proofs, all of which are straightforward, are given in appendix B, a single graph as far as the model theory is concerned.. Notice that unlabeled nodes are not identified with other nodes in a merge, and indeed this reflects a basic principle of RDF graph inference: in contrast to names, which have a global identity which carries across all graphs, blank nodes should not be identified with other nodes or re-labeled a proper.. Vocabulary entailment is more powerful than simple entailment, in the sense that a given set of premises entails more consequences. In general, as the reserved follow from some of the extra assumptions incorporated in the semantic conditions imposed on the reserved.) An rdf-interpretation of a vocabulary V is an interpretation I on (V union rdfV) which satisfies the following extra conditions: This forces every rdf interpretation to contain a thing which can be interpreted as the 'type' of properties. (The second condition could be regarded as defining IP to be the set of resources in the universe of the interpretation which have the value I( rdf:Property) of the property I( rdf:type). This way of construing subsets of the universe will be central in interpretations of RDFS.) For example, the following rdf-interpretation extends the simple interpretation in figure 1: IR = {1, 2, T }; IP = {1, T} IEXT: 1->{<1,2>,<2,1>}, T->{<1,P>,<T,P>} IS: ex:a -> 1, ex:b ->1, ex:c -> 2, rdf:type->T, rdf:Property-. It is important to note that every rdf-interpretation is also a simple interpretation. RDF Schema [RDFSchema] extends RDF to include a larger reserved vocabulary rdfsV with more complex semantic constraints: ( rdfs:seeAlso, rdfs:isDefinedBy, rdfs:comment and rdfs:label are omitted, as the model theory places no constraints on their meanings. 
The account given here of rdfs:domain and rdfs:range reflect our current understanding of multiple domain and range restrictions under which such assertions are understood conjunctively, and they allow cyclic assertions of rdfs:subClassOf and rdfs:subPropertyOf. These differ from earlier specifications of RDF and RDFS.) Although not strictly necessary, it is convenient to state the RDFS semantics in terms of a new semantic construct, a 'class', i.e. a resource which represents a set of things in the universe which have the same value of the rdf:type property. We will define a mapping ICEXT (for the Class Extension in I) from classes to their extensions, in terms of the relational extension of rdf:type, as follows: ICEXT(x) = {y | <y,x> is in IEXT(I( rdf:type)) } An rdfs-interpretation of V is a simple interpretation of (V union rdfsV) which satisfies the following semantic conditions. The first two of these are simply definitions of IC and ICEXT, which could be used to eliminate these concepts from the rest of the conditions; as noted earlier, the conditions on IR and IP could also be regarded as definitions. The IEXT condition on rdf:Property in an rdf-interpretation is equivalent to the ICEXT condition on rdf:Property above; so these clearly imply all the conditions on an rdf-interpretation. It follows that any rdfs-interpretation is also an rdf-interpretation of the same vocabulary. We will not attempt to give a pictorial diagram of an rdfs-interpretation. The semantic conditions on rdfs-interpretations do not include the condition that ICEXT(I( rdfs:Literal)) must be a subset of LV. While this would seem to be required for conformance with [RDFMS], the class rdfs:Literal, none of these can be validly transferred to literals themselves. For example, a triple of the form <ex:a> [rdf:type] [rdfs:Literal] . is legal even though ' ex:a' is a uriref rather than a literal. What it says is that I( ex:a) is a literal, ie that the uriref ' ex:a' denotes a literal. set, which immediately contradicts the interpolation lemma for rdf-entailment. Rather than develop a separate theory of the syntactic conditions for recognising entailment for each. Therdf-closure of an RDF graph E is the graph gotten by adding triples to E according to the following (very simple) rules: 1. Add the following triple (which is true in any rdf-interpretation): [rdf:type] [rdf:type] [rdf:Property] . 2. Apply the following rule recursively to generate all legal RDF triples (i.e. until none of the rules apply or the graph is unchanged.) Here xxx and yyy stand for any uriref, bNode or literal, aaa for any uriref. (This rule will generate the triple mentioned in 1 in two steps from any RDF triple; nevertheless, we mention the triple explicitly since it is required to be in the closure of even an empty set of graphs.).. The rdfs-closure of an RDF graph E is the graph gotten by adding triples to E according to the following rules: 1. Add the following triples, which are true in any rdfs-interpretation. These assign classes, domains and ranges to the properties in the rdfs vocabulary. (There are several other triples which are true in every rdfs-interpretation, but they will be generated from these by other rules.) [rdfs:Resource] [rdf:type] [rdfs:Class] . [rdfs:Literal] [rdf:type] [rdfs:Class] . [rdfs:Class] [rdf:type] [rdfs:Class] . [rdf:Property] [rdf:type] [rdfs:Class] . [rdf] . ...). Unlike the simpler rdf closure rules, the outputs of some of these rules may.They] . [rdf:Property] [rdfs:subClassOf] [rdfs:Resource] ... 
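As an illustration of how these closure rules operate (the ex: vocabulary is invented for this example and is not part of the specification), suppose a graph contains the triples

    <ex:Dog> [rdfs:subClassOf] <ex:Mammal> .
    <ex:Mammal> [rdfs:subClassOf] <ex:Animal> .
    <ex:fido> [rdf:type] <ex:Dog> .

The subClassOf rules add

    <ex:Dog> [rdfs:subClassOf] <ex:Animal> .

and rule rdfs9 then adds

    <ex:fido> [rdf:type] <ex:Mammal> .
    <ex:fido> [rdf:type] <ex:Animal> .

so that, for example, the original graph rdfs-entails <ex:fido> [rdf:type] <ex:Animal> .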
If an instance of a graph E' is a subgraph of another graph E then E entails E'; this follows from the subgraph and instance lemmas. As we show below, this is in fact a necessary as well as sufficient condition for entailment, so it is useful to give a name to the syntactic condition that captures non-entailment. Say that a graph E' is separable from a graph E if no instance of E' is a subgraph of E. In particular, a ground graph is separable from E just when it is not a subgraph of E, and a ground triple is separable just in case it isn't in the graph. Graphs which are not separable from E are entailed by E; but for all others, there is a way to arrange the world so that they are false and E true. For ground graphs, the subgraph lemma can be strengthened to provide simple necessary and sufficient conditions for entailment. Conjunction Lemma.If E is ground, then I satisfies E if and only if it satisfies every triple in E. Proof. Obvious, from definition of denotation for ground graphs. QED Plain Subgraph Lemma. If E and E' are ground, then E entails E' if and only if E' is a subgraph of E. Proof. 'If' follows directly from subgraph lemma; 'only if' follows from previous lemma and definition of entailment. QED Herbrand Lemma. Any RDF graph has a satisfying interpretation. Proof. We will construct the interpretation from the graph, by providing 'just enough' entities and extensions to make the graph true. Since the exact nature of the things in the universe is irrelevant, it is convenient to use the nodes of the graph themselves as their own denotations. (That was Herbrand's idea.) y . Define the mapping A to be the identity mapping on blank nodes of the graph. Clearly I satisfies all ground triples in the graph, and [I+A] satisfies the entire graph; so I satisfies the graph. QED An interpretation constructed in this way, so that the IS mapping is the identity mapping, is called a Herbrand interpretation. more information than is necessary to specify the truth of E; an interpretation - a world - can be larger than strictly needed to establish the truthvalues of a particular set of triples. It is therefore useful to define a notion of the minimal part of an interpretation which is just enough to make a given graph true. Say that I' is a subinterpretation of I when vocab(I') is a subset of vocab(I), IR'is a subset of IR, I'(x)=I(x) wherever I'(x) is defined, and IEXT'(x) is a subset of IEXT(x) wherever IEXT'(x) is defined. Intuitively, I' defines a 'part' of the world defined by I. If. It is clear that if I satisfies E, then a minimal satisfying interpretation exists with a vocabulary precisely the vocabulary of E. The minimal interpretations can be characterized by the following lemma. Minimality lemma. If I is a minimal satisfying interpretation of E, then I fails to satisfy every triple which has no instance in E. Proof. We will argue by reductio. Suppose I satisfies some such triple S P O, i.e.. IEXT(I(P)) contains <I(S),I(O)>, and consider the subinterpretation I' which is like I except that IEXT(I'(P)) does not contain that pair. Since S P O has no instances in E, [I'+A](x)=[I+A](x) for any mapping A from blank nodes and any triple x in E, and I satisfies E, so I' satisfies E; so I is not minimal. QED The property extensions in a minimal interpretation are 'shrink-wrapped' onto the assertions in the graph. 
Notice that every thing in the universe of a minimal interpretation of E must be the denotation of at least one node in E, and that every pair in any property extension must have at least one corresponding triple in E that it makes true; for if not, one could delete some of the interpretation and still satisfy E. We will make use of this property in later proofs. Strong Herbrand Lemma. Any RDF graph E has a satisfying interpretation which does not satisfy any graph which is separable from E. Proof.The construction in the proof of the Herbrand Lemma in fact establishes this result for arbitrary separable graphs. Consider the Herbrand interpretation I constructed in the proof of the Herbrand lemma, and let <S P O> be a triple which has no instances in E. Then either S is a name and there are no triples of the form S P O' in E, or O is a name and there are no triples of the form S' P O in E. Consider the first case (the other case is similar); then by the construction in the earlier proof, IEXT(I(P)) contains no pairs of the form <I(S), x>; so there is no mapping A from blank nodes to IR that could make the triple true in [I+A]; so the triple is false in I. Similarly for the other case. QED Merging lemma. The merge of a set S of RDF graphs is entailed by S, and entails every member of S. Proof. Obvious, from definitions of entailment and merge. All members of S are true iff all triples in the merge of S are true. QED. Anonymity lemma 1. Suppose E is a lean graph and E' is a proper instance of E. Then E does not entail E'. Proof. Since E' is a proper instance and E is lean, E' contains a triple which has no instances in E; otherwise the triple in E which it is a proper instance of would have had a proper instance in E. By the strong Herbrand lemma, there exists an interpretation which satisfies E but not E'. So E does not entail E'. QED Anonymity lemma 2. Suppose that E is a lean graph and that E' is like E except that two distinct unlabeled nodes in E have been identified in E'. Then E does not entail E'. Proof. First we assume that the blank nodes occur in two distinct triples in E. Suppose that E contains the triples S1 P1 _:x1 . S2 P2 _:x2 . where E' contains the triples S1 P1 _:x . S2 P2 _:x . (The arguments for the cases where the blank nodes occur in other positions in the triples are similar.) Since E is lean, it contains no other triples of the form S1 P1 O' or S2 P2 O'. Let I be a Herbrand interpretation of E; then I(S1) is distinct from I(S2) and IEXT(I(P1)) ={<I(S1), _:x1>}and IEXT(I(P2))={<I(S2), _:x2>}. Let A be any mapping from the blank nodes of E' to IR, then in order for both triples to be true in [I+A], [I+A](_:x) would have to equal both _:x1 and _:x2; but these are distinct; so I does not satisfy E'. The only remaining case is where E contains a single triple with two blank nodes which are identified in E': _:x1 P _:x2 . where E' contains _:x P _:x . The argument here is similar; the Herbrand interpretation I now has IEXT(I(P)) = {<_:x1,_:x2>} and there is no mapping from the second triple that could satisfy this, so again I satisfies E but not E'. QED. Note that the 'minimal' nature of the Herbrand construction provides an interpretation that is sufficient to make a graph true, but only just sufficient. This is a basic technique for showing that one graph does not entail another and for establishing a precise correspondence between syntactic relationships and entailment. Interpolation Lemma. S entails E if and only if a subgraph of the merge of S is an instance of E. 
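As a small illustration (with an invented ex: vocabulary), the graph

    <ex:a> <ex:p> <ex:b> .
    <ex:b> <ex:p> <ex:c> .

simply entails

    <ex:a> <ex:p> _:x .

since replacing the blank node _:x by <ex:b> yields an instance which is a subgraph of the graph; but it does not entail

    _:xxx <ex:p> _:xxx .

since no replacement of the single blank node yields a subgraph of the graph.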
Proof. 'if' is a direct consequence of the merging and instance lemmas. To prove 'only if' we will show the converse. This is just a re-statement of the strong Herbrand lemma. Assume that no subgraph of the merge of S is an instance of E, i.e. that all subgraphs of the merge of S fail to be instances of E; i.e., that E is separable from the merge of S. Then by the strong Herbrand lemma the merge of S does not entail E. So, by the merging lemma, S does not entail E. QED. Skolemization is a syntactic transformation routinely used in automatic inference systems in which existential variables are replaced by 'new' functions - function names not used elsewhere - applied to any enclosing' name, i.e. a uriref which is guaranteed to not occur anywhere else.(Using a literal would not do. Literals are never 'new' in the required sense, since their meaning is fixed.) To be precise, a skolemization of E (with respect to V) is a ground instance of E with respect to a vocabulary V which is disjoint from the vocabulary of E. The following lemma shows that skolemization has the same properties in RDF as it has in conventional logics. Intuitively, this lemma shows that asserting a skolemization expresses a similar content to asserting the original graph, in many respects. In effect, it simply gives 'arbitrary' names to the anonymous entities whose existence was asserted by the use of blank nodes. However, care is needed, since these 'arbitrary' names have the same status as any other urirefs once published. Also, skolemization would not be an appropriate operation when applied to anything other than the antecendent of an entailment. A skolemization of a query would represent a completely different query. interpolation. So E entails F. QED. RDF closure lemma. Any satisfying rdf-interpretation of E satisfies the rdf-closure of E; and any minimal simple satisfying interpretation of the rdf-closure of E is a satisfying rdf-interpretation of E. Proof. This follows from a comparison of the rdf closure rules with the semantic conditions on an rdf-interpretation. Although the argument is very simple in this case, we give it here in full to illustrate the general technique. The first part follows from the fact that the closure rules are all rdf-valid. To show this, suppose I is an rdf-interpretation; then for any aaa in the vocabulary of I, if a triple of the form xxx aaa yyy is true in I, then IEXT(I(aaa)) is nonempty then I(aaa) is in IP, so IEXT(I(rdf:type)) contains <I(aaa),I(rdf:Property)>, so the triple aaa rdf:type rdf:Property is true in I. Since I is an rdf-interpretation, its vocabulary contains rdf:type and IP contains I(rdf:type), so in particular the triple [rdf:type] [rdf:type] [rdf:Property] . is true in I. That establishes that the closure rules are rdf-valid. To prove the other part of the lemma we must show that the closure rules are together sufficient to force any minimal interpretation to be an rdf-interpretation of E. The simplest way to argue this is to show the converse, viz. that any minimal simple interpretation of the rdf-closure that violates one of the semantic conditions for an rdf-interpretation of E would thereby fail to satisfy the closure. Suppose therefore that I is a minimal simple interpretation of the rdf-closure of E. If I violates the first constraint then IP does not contain I(rdf:type); in that case, the added triple in the first closure rule is false in I. So assume that I violates the second constraint. 
Then there is some x in IP for which IEXT(I(rdf:type)) does not contain <x,I(rdf:Property)>. Since I is minimal, there is some node aaa in E with I(aaa)=x; and since I(aaa) is in IP, there is a pair <y,z> in IEXT(I(aaa)) and a triple bbb aaa ccc . in E with I(bbb)=y and I(ccc)=z. Then the closure of E contains the triple aaa [rdf:type] [rdf:Property] . which is false in I. So I fails to satisfy the rdf-closure. QED. Notice the need for the minimality assumption, which 'forces' the semantic violation to be made explicit in the syntax of the graph itself. The second part of the lemma could be false for an arbitrary simple interpretation of the closure, which might fail to meet the required semantic conditions on some part of the universe that was not referred to in the graph itself. In general, one cannot infer, from the lack of an assertion in a graph, that what that assertion would say if it were in the graph must be false in a satisfying interpretation of the graph. Minimal interpretations, however, embody a ' closed world assumption' which would sanction such an inference. To prove an entailment we need to prove something about all interpretations; but to prove the converse, it is enough to show that a single interpretation exists with the right properties, and this is where the special properties of minimal interpretations are useful. RDF entailment lemma. S rdf-entails E if and only if the rdf-closure of the merge of S simply entails E. Proof. Follows from the merging lemma, the RDF closure lemma and the definition of entailment. By the merging lemma, we can identify S with the merge of S, i.e. we can treat a set of graphs as a single graph MS. So suppose that MS rdf-entails E, and let I be a simple interpretation of the rdf-closure c(MS) of of MS. Then there is a minimal simple subinterpretation I' of I which satisfies c(MS); so, by the previous lemma, I' is a satisfying rdfs-interpretation of E. Therefore I satisfies E (since I' is a subinterpretation of I). Conversely, suppose that c(MS) simply entails E, and let I be an rdf-interpretation of MS; then by the previous lemma, I satisfies c(MS), so I satisfies E (since every rdf-interpretation is a simple interpretation). QED. RDFS Closure Lemma. Any satisfying rdfs-interpretation of E satisfies the rdfs-closure of E; and any minimal simple satisfying interpretation of the rdf-closure of E is a satisfying rdfs-interpretation of E. Proof.(Sketch) As in the proof of the RDF closure lemma, this follows from a point-by-point comparison of the rdfs closure rules with the semantic conditions on an rdfs-interpretation. A full proof would be long but tedious.We will illustrate the form of the argument by considering some typical cases in detail. The first part follows from the fact that the rdfs closure rules are all rdfs-valid, which can be checked case by case. For example, consider the closure rule rdfs5, and suppose I is an rdfs-interpretation which satisfies aaa [rdfs:subPropertyOf]bbb . bbb [rdfs:subPropertyOf]ccc . Then by the semantic conditions on an rdfs-interpretation, IEXT(I(aaa)) is a subset of IEXT(I(bbb)) and IEXT(I(bbb)) is a subset of IEXT(I(ccc)); so IEXT(I(aaa)) is a subset of IEXT(I(ccc)); so I satisfies aaa [rdfs:subPropertyOf]ccc . The other cases are similarly straightforward. To demonstrate the other part of the lemma we must show that the closure rules are together sufficient to restrict any minimal interpretation to be an rdfs-interpretation. 
The simplest way to argue this is to show the converse, by demonstrating that any minimal simple interpretation of the rdfs-closure c(E) of E that violates one of the semantic conditions for an rdfs-interpretation of E would thereby make some triple in c(E) false. Again, this can be checked by a detailed examination of the cases that arise. Suppose that I is a minimal simple interpretation of c(E). If I violates any of the conditions involving the rdfs-vocabulary then it is easy to check that one of the 'added' triples would be false, e.g. if IEXT(I(rdfs:domain)) does not contain <I(rdfs:domain),I(rdf:Property)> then the triple [rdfs:domain] [rdfs:domain] [rdf:Property] . is false in I. If I violates the condition on IEXT(I(rdfs:range)), then there exist x, y, u and v in IR with <x,y> in IEXT(I(rdfs:range)), <u,v> in IEXT(x) but v not in ICEXT(y). Since I is a minimal interpretation and satisfies c(E), the closure must contain two triples aaa [rdfs:range] bbb . ccc aaa ddd . where I(aaa)=x, I(bbb)=y, I(ccc)=u and I(ddd)=v; but I makes the triple ddd [rdf:type] bbb . false, since I(ddd) is not in ICEXT(I(bbb)); but by the closure rule rdfs3, this triple is in c(E); so I fails to satisfy c(E). The IEXT(I(rdfs:domain)) case is similar. Finally, suppose that I violates the condition on rdfs:subClassOf. Then for some x and y in IR, <x,y> is in IEXT(I(rdfs:subClassOf)) but there is some z in ICEXT(x) but not in ICEXT(y). Again, since I is minimal, these entities must occur in triples in c(E) of the form respectively aaa [rdfs:subClassOf] bbb . ccc [rdf:type] aaa . where I(aaa)=x, I(bbb)=y and I(ccc)=z, and where the triple ccc [rdf:type] bbb . is false in I; so I does not satisfy c(E) by a similar argument, using the closure rule rdfs9. Again, the case for a violation of the condition on rdfs:subPropertyOf is similar. QED.
Acknowledgements
Sergey Melnick suggested using a translation into logic. Jeremy Carroll noticed the need for the lean graph condition. Jos deRoo, Graham Klyne, Jeremy Carroll and Patrick Stickler found errors in earlier drafts and suggested many stylistic improvements. The use of an explicit extension mapping to allow self-application without violating the axiom of foundation was suggested by Chris Menzel. Peter Patel-Schneider found several major errors in an earlier draft, and suggested several important technical improvements. Changes from Working Draft March 2002
http://www.w3.org/TR/2002/WD-rdf-mt-20020429/
crawl-002
refinedweb
6,023
52.49
Knife Modeling Command
On 10/11/2014 at 07:58, xxxxxxxx wrote: I'm using SendModelingCommand() to apply the Knife Tool to some objects. But it does not seem like the modeling command supports all the options the actual Knife Tool does. For instance, I can not activate the generation of N-Gons, only cutting visible polygons, or any but the "Line" mode. Check out the scene here. You can move the "Cut_Plane" around to change the cut position, and at some positions it creates NGons. Even if you activate the "NGons" option, it still creates NGons. The code I use to cut the object is
def cut(obj, p1, p2, n1, n2, options={}):
    data = c4d.BaseContainer()
    data.SetVector(c4d.MDATA_KNIFE_P1, p1)
    data.SetVector(c4d.MDATA_KNIFE_P2, p2)
    data.SetVector(c4d.MDATA_KNIFE_V1, n1)
    data.SetVector(c4d.MDATA_KNIFE_V2, n2)
    data.SetBool(c4d.MDATA_KNIFE_NGONS, bool(options.get('ngons')))
    data.SetBool(c4d.MDATA_KNIFE_VISIBLEONLY, bool(options.get('visible_only')))
    c4d.utils.SendModelingCommand(
        c4d.MCOMMAND_KNIFE, [obj], bc=data, doc=obj.GetDocument())
Can I enable NGons and Visible Only somehow? Thanks Niklas
Edit: By the way, here's a nice post from Yannick with details about cutting.
On 11/11/2014 at 05:46, xxxxxxxx wrote: Hello, you are using the Knife command (MCOMMAND_KNIFE). The parameters you want to enable are parameters of the Knife tool (ID_MODELING_KNIFE_TOOL). Unfortunately this tool does not support the usage of SendModelingCommand. The docs will be updated. Best wishes, Sebastian
On 13/11/2014 at 00:33, xxxxxxxx wrote: Hi Sebastian, thanks for the reply. Since the prefix of the symbols that need to be used (P1, P2, V1, V2) is the same as the other Knife tool symbols, I thought it might work somehow. Unfortunately it does not :-( Best Niklas
https://plugincafe.maxon.net/topic/8292/10815_knife-modeling-command
CC-MAIN-2019-13
refinedweb
288
59.7
Making your components more flexible with render props
Render props are a great technique for making your components more flexible. React Hooks make them less critical but still a very useful technique.
Our scatterplot doesn't look quite as nice as the earlier screenshot. Regular SVG circles with no styling just can't match up. What if we wanted to render beautiful circles? Or stars? Or maybe something else entirely?
We can use render props to give users of our scatterplot component the power to define how they want datapoints to render. 😱
Think of it as a sort of inversion of control. Another common buzzword is "slots", or renderless components. The idea is that one of our props accepts a React component. We then use that prop to render our datapoints. It looks a little like this 👇
<Scatterplot
  x={10}
  y={10}
  data={data}
  datapoint={(props) => <Datapoint {...props} />}
/>
What's more, we can add interactions and other useful stuff to our <Datapoint> and <Scatterplot> doesn't have to know anything about it. All the scatterplot cares about is rendering two axes and a bunch of datapoints.
Let's use the render prop approach to make our scatterplot more reusable. Steps 👇
- pass in a render prop
- use it to render datapoints
- make datapoint component look nice
You can see my solution on CodeSandbox. I recommend you follow along with your existing code.
Pass in a render prop
React components are Just JavaScript. Either a JSX function call or a function that returns some JSX. That means we can pass them into components via props. Let's do that in App.js
// App.js
import Datapoint from "./Datapoint"

// ...
<svg width="800" height="800" onClick={this.onClick}>
  <Scatterplot
    x={50}
    y={50}
    width={width}
    height={height}
    data={data}
    datapoint={({ x, y }) => <Datapoint x={x} y={y} />}
  />
</svg>
For extra flexibility and readability we're wrapping our <Datapoint> component in another function that accepts x and y coordinates. This is a common pattern you'll see with render props 👉 it gives you the ability to pass in props from both the rendering component and the component that's setting the render prop.
Say we wanted Datapoint to know something about our App and our Scatterplot. The scatterplot calls this function with coordinates. We pass those into <Datapoint>. And because the method is defined inside App, we could pass-in anything that's defined in the App. Like perhaps data.
Your code will start throwing an error now. Datapoint isn't defined. Don't worry, it's coming soon.
Use render prop to render datapoints
To use our new datapoint render prop, we have to change how we render the scatterplot. Instead of returning a <circle> for each iteration of the dataset, we're calling a function passed in from props.
// Scatterplot.js
render() {
  const { x, y, data, height, datapoint } = this.props,
    { yScale, xScale } = this.state;

  return (
    <g transform={`translate(${x}, ${y})`}>
      {data.map(([x, y]) =>
        datapoint({
          x: xScale(x),
          y: yScale(y)
        })
      )}
We take the datapoint function from props and call it in data.map making sure to pass in x and y as an object. Calling functions with objects like this is a common JavaScript pattern to fake named arguments.
Make datapoint component look nice
We've got all the rendering, now all we need is the <Datapoint> component itself. That goes in a new Datapoint.js file.
import React from "react"
import styled from "styled-components"

const Circle = styled.circle`
  fill: steelblue;
  fill-opacity: 0.7;
  stroke: steelblue;
  stroke-width: 1.5px;
`

class Datapoint extends React.Component {
  render() {
    const { x, y } = this.props
    return <Circle cx={x} cy={y} r={3} />
  }
}

export default Datapoint
I'm using styled-components to define the CSS for my Datapoint. You can use whatever you prefer. I like styled components because they're a good balance between CSS-in-JS and normal CSS syntax.
The component itself renders a styled circle using props for positioning and a radius of 3 pixels.
For an extra challenge, try rendering circle radius from state and changing datapoint size on mouse over. Make the scatterplot interactive.
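One possible way to approach that extra challenge - this is only a sketch, with invented handler names and sizes - is to keep the radius in component state and change it in mouse events (Circle is the styled component defined above):
class Datapoint extends React.Component {
  // start with the same 3px radius the tutorial uses
  state = { r: 3 }

  grow = () => this.setState({ r: 6 })
  shrink = () => this.setState({ r: 3 })

  render() {
    const { x, y } = this.props
    return (
      <Circle
        cx={x}
        cy={y}
        r={this.state.r}
        onMouseOver={this.grow}
        onMouseOut={this.shrink}
      />
    )
  }
}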
https://reactfordataviz.com/building-blocks/4/
CC-MAIN-2022-40
refinedweb
681
60.21
This chapter describes the structure and provides an overview of the Domain Name System (DNS). One of the most common and important uses of DNS is connecting your network to the global Internet. To connect to the Internet, your network IP address must be registered with whomever is administering your parent domain. This chapter covers the following topics. Server Configuration and Data File Names The Domain Name System (DNS) is an application–layer protocol that is part of the standard TCP/IP protocol suite. This protocol implements the DNS naming service, which is the naming service used on the Internet. This section introduces the basic DNS concepts. It assumes that you have some familiarity with network administration, particularly TCP/IP, and some exposure to other naming services, such as NIS+ and NIS. Refer to Chapter 4, Administering DNS (Tasks), worldwide might. The following figure addresses. The following figure shows name-to-address resolution outside the local domain. From a DNS perspective, an administrative domain is a group of machines which are administered as a unit. Information about this domain is maintained by at least two name servers, which are “authoritative” for the domain. The DNS domain is a logical grouping of machines. The domain groupings could correspond to a physical grouping of machines, such as all machines attached to the Ethernet in a small business. Similarly, a local DNS domain. in.named is a public domain TCP/IP program and master server and should have at least one slave server to provide backup. Implementing DNS explains primary and secondary servers in detail. To function correctly, the in.named daemon requires a configuration file and four data files. The master server configuration file is /etc/named.conf. The configuration file contains a list of domain names and the file names containing host information. See The named.conf File for additional information on the named.conf file. If you are internally consistent, you can name the zone data files anything you want. This flexibility might lead to some confusion when working at different sites or referring to different DNS manuals and books. For example, the file names used in Sun manuals and at most many Solaris sites vary from those used in the book DNS and BIND published by O'Reilly & Associates and both of those nomenclatures have some differences from that used in the public-domain Name Server Operations Guide for BIND. In addition, this manual and other DNS documentation use generic names that identify a file's main purpose, and specific example names for that file in code samples. For example, this manual uses the generic name hosts when describing the function and role of that file, and the example names db.doc and db.sales in code samples. The required data files are the following. /var/named/named.ca See The named.ca File for additional information on the named.ca file. As long as you are internally consistent, you can name this file anything you want. /var/named/hosts See The hosts File for additional information on hosts files. The name hosts is a generic name indicating the file's purpose and content. But to avoid confusion with /etc/hosts, you should name this file something other than hosts. The most common naming convention is db.domainname. Thus, the hosts file for the doc.com domain would be called db.doc. db.sales. /var/named/hosts.rev See The hosts.rev File for additional information on the hosts.rev file.. 
/var/named/named.local See The named.local File and for additional information on the named.local file. As long as you are internally consistent, you can name this file anything you want. An include file is any file named in an $INCLUDE() statement in a DNS data file. $INCLUDE files can be used to separate different types of data into multiple files for your convenience. See $INCLUDE Files. For reference purposes, the following table compares BIND file names from the above mentioned sources.Table 3–1 File Name Examples A domain name is the name assigned to a group of systems on a local network that share DNS administrative files. A domain name is required for the network information service database to work properly. DNS obtains your default domain name from your resolv.conf file. If the resolv.conf file is not available, or does not identify a default domain, and if your enterprise-level naming service is either NIS+ or NIS, the Sun implementation of DNS obtains the default domain name from those services. If resolv.conf is not available or does not provide a domain name and you are not running either NIS+ or NIS, you must either provide a resolv.conf file on each machine that does specify the domain or set the LOCALDOMAIN environment variable. When working with DNS-related files, follow these rules regarding the trailing dot in domain names: Use a trailing dot in domain names in hosts, hosts.rev, named.ca, and named.local data files. For example, sales.doc.com. is correct for these files. Do not use a trailing dot in domain names in named.boot or resolv.conf files. For example, sales.doc.com is correct for these files. To be a DNS client, a machine must run the resolver. The resolver is neither a daemon nor a single program.. The DNS name server uses several files to load its database. At the resolver level, it needs the file /etc/resolv.conf listing the addresses of the servers where it can obtain its information. The resolver reads this resolv.conf file to find the name of the local domain and the location of name servers. It sets the local domain name and instructs the resolver routines to query the listed name servers for information. Normally, each DNS client system on your network has a resolv.conf file in its /etc directory. If a client does not have a resolv.conf file, it defaults to using a server at IP address 127.0.0.1. Whenever the resolver has to find the IP address of a host (or the host name corresponding to an address), the resolver builds a query package and sends it to the name servers listed in /etc/resolv.conf. The servers either answer the query locally or contact other servers known to them, ultimately returning the answer to the resolver. When a machine's /etc/nsswitch.conf file specifies hosts: dns (or any other variant that includes dns in the hosts line), the resolver libraries are automatically used. If the nsswitch.conf file specifies some other naming service before dns, that naming service is consulted first for host information and only if that naming service does not find the host in question are the resolver libraries used. For example, if the hosts line in the nsswitch.conf file specifies hosts: nisplus dns, the NIS+ naming service will first be searched for host information. If the information is not found in NIS+, then the DNS resolver is used. Since naming. For a detailed description of what the resolv.conf file does, see resolv.conf(4). See Setting Up the resolv.conf File for a discussion on how to set up the resolv.conf file. 
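As a simple illustration, a resolv.conf file for the doc.com examples used in this chapter might look like the following (the name server addresses are invented):
domain doc.com
nameserver 192.168.21.5
nameserver 192.168.21.6
The domain line sets the default domain name, and each nameserver line gives the address of a server the resolver should query, in the order listed.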
BIND 8.1 added a new configuration file, /etc/named.conf, that replaces the /etc/named.boot file. The /etc/named.conf file establishes the server as a master, slave, an NIS+ host can read and write Logging specifications Selectively applied options for a set of zones, rather than to all zones The configuration file is read by in.named when the daemon is started by the server's startup script, /etc/init.d/inetsvc. The configuration file directs in.named to other servers or to local data files for a specified domain. The named.conf file contains statements and comments. Statements end with a semicolon. Some statements can contain a block of statements. Again, each statement in the block is terminated with a semicolon.Table 3–2 named.conf Statements If your company is large enough, it might support a number of domains, organized into a local namespace. The following figure shows a domain hierarchy that might be in place in a single company. The top-level, or “root” domain for the organization is ajax.com, which has three subdomains, the following figure is a “leaf” of the huge DNS namespace supported on the global Internet. It consists of the root directory, represented as a dot (.), and two top level domain hierarchies, one organizational and one geographical. Note that the com domain introduced in this figure is one of a number of top-level organizational domains in existence on the Internet. At the present time, the organizational hierarchy divides its namespace into the top-level domains listed shown in the following table. It is probable that additional top-level organizational domains will be added in the future.Table 3–3 naming service without connecting to the Internet, you can use any name your organization wants for its your domains and subdomains, if applicable. However, if your site plans wants to join the Internet, it must register its domain name with the Internet governing bodies. To join the Internet, do the following. Register your DNS domain name with the an appropriate Internet governing body. Obtain a network IP address from that governing body. There are two ways to accomplish this. You can communicate directly with the appropriate Internet governing body or their agent. added to the name of the Internet hierarchy to which it belongs. For example, the ajax domain shown in Figure 3–5 has been registered as part of the Internet com hierarchy. Therefore, its Internet domain name becomes ajax.com. The following figure shows the position of the ajax.com domain in the DNS namespace on the Internet. The ajax.com subdomains now have the following names. DNS does not require domain names to be capitalized, though they can following syntax. The fully qualified domain names for the ajax domain and its subdomains are: Note the dot at the furthest right position of each name. DNS service for a domain is managed on the set of name servers. Name servers can manage a single domains or multiple domains, or domains and some or all of their corresponding subdomains. The part of the namespace that a given name server controls is called a zone. Therefore, the name server is said to be authoritative for the zone. If you are responsible for a particular name server, you might the above contains a top domain (Ajax), four subdomains, and five sub-subdomains. It is divided into four zones. includes.168.21.165. In the in-addr.arpa zone files, its address is listed as 165.21.168.192.in-addr.arpa. with the dot at the end indicating the root of the in-addr.arpa domain.
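To make the reverse mapping concrete, the hosts.rev zone file covering this network could contain a PTR record along these lines (the host name boss.doc.com is invented for illustration):
165.21.168.192.in-addr.arpa.    IN    PTR    boss.doc.com.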
http://docs.oracle.com/cd/E19683-01/816-7511/6mdgu0gvp/index.html
CC-MAIN-2015-32
refinedweb
1,815
58.69
Accelerated Continuous Testing with Test Impact Analysis – Part 4 August 4, 2017 Essential to TIA’s test selection is the map of dynamic dependencies between test methods and source files of code exercised during their execution. TIA needs dependencies mapped in this. Out-of-the-box, that, in a nutshell, is TIA’s scope. But by explicitly providing the dependencies-map, TIA can be extended even beyond! Extending TIA to new scenarios You can provide TIA the dependencies-map explicitly in an xml file. The mapping can even be approximate, and the desired tests-to-run can be specified in terms of a test case filter you would typical provide in the VSTest task. TIA can thus be extended to scenarios not supported out-of-the-box. For e.g.: - To support code in other languages – JavaScript, C++, etc. - To support the scenario where tests and product code are running on different machines. Here is an example – consider code organized in the repo:. This is the repo for MSTest V2. The folder test/UnitTests/MSTest.Core.Unit.Tests/Attributes contains tests for various attributes, and these tests are enclosed in the namespace Microsoft.VisualStudio.TestPlatform.TestFramework.UnitTests.Attributes The corresponding product code is in the folder src/TestFramework/MSTest.Core/Attributes/ Based on this, you might want to run the attribute tests whenever there was a commit into any of the files in above folder. Remember the dependencies mapping that you specify can be approximate – therefore it can be specified as follows: <TestImpactMap> <Tests> <Test Filter="FullyQualifiedName~AttributeTests"> <Dependencies> <Dependency Name="/src/TestFramework/MSTest.Core/Attributes/**/*" /> </Dependencies> </Test> </Tests> </TestImpactMap> Create an XML file with the above information, check it in. Assuming the file is checked in at the root of the repo, and is named TIAmap.xml, TIA can be pointed to this file by setting the build variable tia.usermapfile to $(System.DefaultWorkingDirectory)/TIAmap.xml Now, when a commit comes in to any file in the dependent folder, TIA consults the mapping and runs only the test methods matched by the filter criteria specified! In this case the dependency was mentioned at the level of a folder. It could as well have been at the level of a file, and that file could as well have had any extension, or could have been just an extension. In effect, TIA can thus be extended to any file (.cpp, .js, or .xml files, etc.), and such files need not even contain executable code (as in the case of .xml files) and other languages, and to the cases where the tests and product code are running on different machines! Go ahead, try it out on your repo, and let us know. We look forward to your feedback. If XML is the only way to configure this, please ensure the expected XML format of the file is well documented. The .runsettings file was a mystery for years and lacked documentation when it was released. I wish Visual Studio had good UIs to be able to configure these files, or at a very minimum good intellisense based off XSD files. If MSFT actually followed good practice, their XML files would simply be serialized POCOs, so you could easily discover and explore the properties and object graph at your leisure. Unfortunately, they do not so they burden you with the frustration that you are experiencing. It’s OK though, you are not the only one living in such consternation. 😉 😉 😉 This blog is the very useful information for user and get help work. screenshoot is very helpful for user. 
https://blogs.msdn.microsoft.com/devops/2017/08/04/accelerated-continuous-testing-with-test-impact-analysis-part-4/
CC-MAIN-2017-34
refinedweb
606
56.76
ASP. In this chapter, we simply aim to unveil all the mysteries fluttering around the dynamic compilation of ASP.NET pages. We'll do this by considering the actions performed on the Web server, and which modules perform them, when a request arrives for an .aspx page. The first part of this chapter discusses under-the-hood details that might not interest you, as they aren't strictly concerned with the development of ASP.NET applications. Reading this chapter in its entirety is not essential to understanding fundamental techniques of ASP.NET programming. So, if you want, you can jump directly to the "The Event Model" section, which is the section in which we discuss what happens once an ASP.NET page has been requested and starts being processed.. Click to view graphic Figure 2-1 The IIS application mappings for resources with an .aspx extension. Just like any other ISAPI extension, aspnet_isapi.dll is hosted by the IIS 5.0 processt. Table 2-1 IIS Application Mappings for aspnet_isapi.d processthe ASP.NET worker process. The connection between aspnet_isapi.dll and aspnet_wp.exe is established through a named pipea <processModel>. Note that the documentation available with version 1.0 of the .NET Framework is a bit confusing on this point. Documentation in version 1.1 is clear and correct. <processModel>. Table 2-2 Parameters of the ASP.NET Process Default values for the arguments in Table 2-2 can be set by editing the attributes of the <processModel>CreateCreateProcess("aspnet_wp.exe", "[iis_id] [This-Process-Unique-ID] ...", ...); The worker process caches the This-Process-Unique-ID argument and uses it to recognize which named-pipe messages it has to serve. <processModel> section from the machine.config file. Only thread and deadlock settings are read from that section of the machine.config. Everything else goes through the metabase and can be configured only by using the IIS Manager. (Other configuration information continues being read from .config files.)an internal-use object responsible for returning a valid object capable of handling the request.withintypically, an ASP.NET page. It then uses a handler factory object to either instantiate the type from an existing assembly or dynamically create the assembly and then an instance of the type. A handler factory object is a class that implements the IHttpHandlerFactory interface and is responsible for returning an instance of a managed class that can handle the HTTP requestan HTTP handler. An ASP.NET page is simply a handler objectthat is, an instance of a class that implements the IHttpHandler interface.. Table 2-3 Handler Factory Classes in the .NET Framework. Locating the Assembly for the Page Assemblies generated for ASP.NET pages are cached in the Temporary ASP.NET Files folder. The path for version 1.1 of the .NET Framework is as follows. %SystemRoot%\Microsoft.NET\Framework\v1.1.4322\Temporary ASP.NET Files Of course, the directory depends on the version of the .NET Framework you installed. The directory path for version 1.0 of the .NET Framework includes a subdirectory named v1.0.3705. The Temporary ASP.NET Files folder has one child directory for each application ever executed. The name of the subfolder matches the name of the virtual directory of the application. Pages that run from the Web server's root folder are grouped under the Root subfolder. Page-specific assemblies are cached in a subdirectory placed a couple levels down the virtual directory folder. The names of these child directories are fairly hard to make sense of. 
Names are the result of a hash algorithm based on some randomized factor along with the application name. A typical path is shown in the following listing. The last two directories (in boldface) have fake but realistic names. \Framework \v1.1.4322 \Temporary ASP.NET Files \MyWebApp \3678b103 \e60405c7 Regardless of the actual algorithm implemented to determine the folder names, from within an ASP.NET application the full folder path is retrieved using the following, pretty simple, code: string tempAspNetDir = HttpRuntime.CodegenDir; So much for the location of the dynamic assembly. So how does the ASP.NET runtime determine the assembly name for a particular .aspx page? The assembly folder contains a few XML files with a particular naming convention: [filename].[hashcode].xml If the page is named, say, default.aspx, the corresponding XML file can be named like this: default.aspx.2cf84ad4.xml The XML file is created when the page is compiled. This is the typical content of this XML file: <preserve assem="c5gaxkyh" type="ASP.Default_aspx" hash="fffffeda266fd5f7"> <filedep name="C:\Inetpub\wwwroot\MyWebApp\Default.aspx" /></preserve> I'll say more about the schema of the file in a moment. For now, it will suffice to look at the assem attribute. The attribute value is just the name of the assembly (without extension) created to execute the default.aspx page. Figure 2-6 provides a view of the folder. Figure 2-6 Temporary ASP.NET Files: a view of interiors. The file c5gaxkyh.dll is the assembly that represents the default.aspx page. The other assembly is the compiled version of the global.asax file. (If not specified, a standard global.asax file is used.) The objects defined in these assemblies can be viewed with any class browser tool, including Microsoft IL Disassembler, ILDASM.exe. Detecting Page Changes As mentioned earlier, the dynamically compiled assembly is cached and used to serve any future request for the page. However, changes made to an .aspx file will automatically invalidate the assembly, which will be recompiled to serve the next request. The link between the assembly and the source .aspx file is kept in the XML file we mentioned a bit earlier. Let's recall it: <preserve assem="c5gaxkyh" type="ASP.Default_aspx" hash="fffffeda266fd5f7"> <filedep name="C:\Inetpub\wwwroot\MyWebApp\Default.aspx" /></preserve> The name attribute of the <filedep> node contains just the full path of the file associated with the assembly whose name is stored in the assem attribute of the <preserve> node. The type attribute, on the other hand, contains the name of the class that renders the .aspx file in the assembly. The actual object running when, say, default.aspx is served is an instance of a class named ASP.Default_aspx. Based on the Win32 file notification change system, this ASP.NET feature enables developers to quickly build applications with a minimum of process overhead. Users, in fact, can "just hit Save" to cause code changes to immediately take effect within the application. In addition to this development-oriented benefit, deployment of applications is greatly enhanced by this feature, as you can simply deploy a new version of the page that overwrites the old one. When a page is changed, it's recompiled as a single assembly, or as part of an existing assembly, and reloaded. ASP.NET ensures that the next user will be served the new page outfit by the new assembly. Current users, on the other hand, will continue viewing the old page served by the old assembly. 
The two assemblies are given different (because they are randomly generated) names and therefore can happily live side by side in the same folder as well as be loaded in the same AppDomain. Because that was so much fun, let's drill down a little more into this topic. How ASP.NET Replaces Page Assemblies When a new assembly is created for a page as the effect of an update, ASP.NET verifies whether the old assembly can be deleted. If the assembly contains only that page class, ASP.NET attempts to delete the assembly. Often, though, it finds the file loaded and locked, and the deletion fails. In this case, the old assembly is renamed by adding a .DELETE extension. (All executables loaded in Windows can be renamed at any time, but they cannot be deleted until they are released.) Renaming an assembly in use is no big deal in this case because the image of the executable is already loaded in memory and there will be no need to reload it later. The file, in fact, is destined for deletion. Note that .DELETE files are cleaned up when the directory is next accessed in sweep mode, so to speak. The directory, in fact, is not scavenged each time it is accessed but only when the application is restarted or an application file (global.asax or web.config) changes. Each ASP.NET application is allowed a maximum number of recompiles (with 15 as the default) before the whole application is restarted. The threshold value is set in the machine.config file. If the latest compilation exceeds the threshold, the AppDomain is unloaded and the application is restarted. Bear in mind that the atomic unit of code you can unload in the CLR is the AppDomain, not the assembly. Put another way, you can't unload a single assembly without unloading the whole AppDomain. As a result, when a page is recompiled, the old version stays in memory until the AppDomain is unloaded because either the Web application exceeded its limit of recompiles or the ASP.NET worker process is taking up too much memory. The runtimeinfo.aspx page also lists all the assemblies currently loaded in the AppDomain. The sample page needs 12 system assemblies, including those specific to the applicationglobal.asax and the page class. This number increases each time you save the .aspx file because after a page update, a new assembly is loaded but the old one is not unloaded until the whole AppDomain is unloaded. If you save the .aspx file several times (by just opening the file and hitting Ctrl+S), you see that after 15 recompiles the AppDomain ID changes and the number of loaded assemblies reverts back to 12 (or whatever it was). Figure 2-7 shows the result of this exercise. Figure 2-7 Runtimeinfo.aspx shows ASP.NET runtime information. Batch Compilation Compiling an ASP.NET page takes a while. So even though you pay this price only once, you might encounter situations in which you decide it's best to happily avoid that. Unfortunately, as of version 1.1, ASP.NET lacks a tool (or a built-in mechanism) to scan the entire tree of a Web application and do a precompilation of all pages. However, you can always request each page of a site before the site goes live or, better yet, have an ad hoc application do it. In effect, since version 1.0, ASP.NET has supported batch compilation, but this support takes place only at run time. ASP.NET attempts to batch into a single compiled assembly as many pages as possible without exceeding the configured maximum batch size. 
Furthermore, batch compilation groups pages by language, and it doesn't group in the same assembly pages that reside in different directories. Just as with many other aspects of ASP.NET, batch compilation is highly configurable and is a critical factor for overall site performance. Fine-tuning the related parameters in the <compilation> section of the machine.config file is important and should save you from having and loading 1000 different assemblies for as many pages or from having a single huge assembly with 1000 classes inside. Notice, though, that the problem here is not only with the size and the number of the assemblies but also with the time needed to recompile the assemblies in case of updates. How ASP.NET Creates a Class for the Page An ASP.NET page executes as an instance of a type that, by default, inherits from System.Web.UI.Page. The page handler factory creates the source code for this class by putting a parser to work on the content of the physical .aspx file. The parser produces a class written with the language the developer specified. The class belongs to the ASP namespace and is given a file-specific name. Typically, it is the name and the extension of the file with the dot (.) replaced by an underscore (_). If the page is default.aspx, the class name will be ASP.Default_aspx. You can check the truthfulness of this statement with the following simple code: void Page_Load(object sender, EventArgs e){ Response.Write(sender.ToString());} As mentioned earlier, when the page runs with the Debug attribute set to true, the ASP.NET runtime does not delete the source code used to create the assembly. Let's have a quick look at the key parts of the source code generated. (Complete sources are included in this book's sample code.) Reviewing the Class Source Code For a better understanding of the code generated by ASP.NET, let's first quickly review the starting pointthe .aspx source code: <%@ Pageprivate void Page_Load(object sender, EventArgs e) { TheSourceFile.Text = HttpRuntime.CodegenDir;}private void MakeUpper(object sender, EventArgs e) { string buf = TheString.Value; TheResult.InnerText = buf.ToUpper();}</script> <html><head><title>Pro ASP.NET (Ch 02)</title></head><body> <h1>Sample Page</h1> <form runat="server"> <asp:Label<hr> <input runat="server" type="text" id="TheString" /> <input runat="server" type="submit" id="TheButton" value="Uppercase..." onserverclick="MakeUpper" /><br> <span id="TheResult" runat="server"></span> </form></body></html> The following listing shows the source code that ASP.NET generates to process the preceding page. 
The text in boldface type indicates code extracted from the .aspx file: namespace ASP { using System; using ASP; using System.IO; public class Default_aspx : Page, IRequiresSessionState { private static int __autoHandlers; protected Label TheSourceFile; protected HtmlInputText TheString; protected HtmlInputButton TheButton; protected HtmlGenericControl TheResult; protected HtmlForm TheAppForm; private static bool __initialized = false; private static ArrayList __fileDependencies; private void Page_Load(object sender, EventArgs e) { TheSourceFile.Text = HttpRuntime.CodegenDir; } private void MakeUpper(object sender, EventArgs e) { string buf = TheString.Value; TheResult.InnerText = buf.ToUpper(); } public Default_aspx() { ArrayList dependencies; if (__initialized == false) { dependencies = new ArrayList(); dependencies.Add( "c:\\inetpub\\wwwroot\\vdir\\Default.aspx"); __fileDependencies = dependencies; __initialized = true; } this.Server.ScriptTimeout = 30000000; } protected override int AutoHandlers { get {return __autoHandlers;} set {__autoHandlers = value;} } protected Global_asax ApplicationInstance { get {return (Global_asax)(this.Context.ApplicationInstance));} } public override string TemplateSourceDirectory { get {return "/vdir";} } private Control __BuildControlTheSourceFile() { Label __ctrl = new Label(); this.TheSourceFile = __ctrl; __ctrl.ID = "TheSourceFile"; return __ctrl; } private Control __BuildControlTheString() { // initialize the TheString control } private Control __BuildControlTheButton() { // initialize the TheButton control } private Control __BuildControlTheResult() { // initialize the TheResult control } private Control __BuildControlTheAppForm() { HtmlForm __ctrl = new HtmlForm(); this.TheAppForm = __ctrl; __ctrl.ID = "TheAppForm"; IParserAccessor __parser = (IParserAccessor) __ctrl; this.__BuildControlTheSourceFile(); __parser.AddParsedSubObject(this.TheSourceFile); __parser.AddParsedSubObject(new LiteralControl("<hr>")); this.__BuildControlTheString(); __parser.AddParsedSubObject(this.TheString); this.__BuildControlTheButton(); __parser.AddParsedSubObject(this.TheButton); __parser.AddParsedSubObject(new LiteralControl("<br>")); this.__BuildControlTheResult(); __parser.AddParsedSubObject(this.TheResult); return __ctrl; } private void __BuildControlTree(Control __ctrl) { IParserAccessor __parser = (IParserAccessor)__ctrl; __parser.AddParsedSubObject( new LiteralControl("<html>.</h1>")); this.__BuildControlTheAppForm(); __parser.AddParsedSubObject(this.TheAppForm); __parser.AddParsedSubObject(new LiteralControl(".</html>")); } protected override void FrameworkInitialize() { this.__BuildControlTree(this); this.FileDependencies = __fileDependencies; this.EnableViewStateMac = true;this.Request.ValidateInput(); } public override int GetTypeHashCode() { return 2003216705; } }} In addition, for ASP.NET pages, the language declared in the @Page directive must match the language of the inline code. The Language attribute, in fact, is used to determine the language in which the class is to be created. Finally, the source code is generated using the classes of the language's Code Document Object Model (CodeDOM). CodeDOM can be used to create and retrieve instances of code generators and code compilers. Code generators can be used to generate code in a particular language, and code compilers can be used to compile code into assemblies. Not all .NET languages provide such classes, and this is why not all languages can be used to develop ASP.NET applications. 
For example, the CodeDOM for J# has been added only in version 1.1 of the .NET Framework, but there is a J# redistributable that adds this functionality to version 1.0. The contents of the <script> block are copied verbatim as members of the new class with the same level of visibility you declared. The base class for the dynamically generated source is Page unless the code-behind approach is used. In that case, the base class is just the code-behind class. We'll return to this later in "The Code-Behind Technique" section. The Page Life Cycle Within the base implementation of ProcessRequest, the Page class first calls the FrameworkInitialize method, which, as seen in the source code examined a moment ago, builds the controls tree for the page. Next, ProcessRequest makes the page go through various phases: initialization, loading of view-state information and postback data, loading of the page's user code, and execution of postback server-side events. After that, the page enters rendering mode: the updated view state is collected, and the HTML code is generated and sent to the output console. Finally, the page is unloaded and the request is considered completely served. During the various phases, the page fires a few events that Web controls and user-defined code can intercept and handle. Some of these events are specific to controls and can't be handled at the level of the .aspx code. In theory, a page that wants to handle a certain event should explicitly register an appropriate handler. However, for backward compatibility with the Visual Basic programming style, ASP.NET also supports a form of implicit event hooking. By default, the page tries to match method names and events and considers the method a handler for the event. For example, a method named Page_Load is the handler for the page's Load event. This behavior is controlled by the AutoEventWireup attribute on the @Page directive. If the attribute is set to false, any applications that want to handle an event need to connect explicitly to the page event. The following code shows how to proceed from within a page class:
// C# code
this.Load += new EventHandler(this.MyPageLoad);
' VB code
AddHandler Load, AddressOf Me.MyPageLoad
By proceeding this way, you will enable the page to get a slight performance boost by not having to do the extra work of matching names and events. Visual Studio .NET disables the AutoEventWireup attribute. Page-Related Events To handle a page-related event, an ASP.NET page can either hook up the event (for example, Load) or, in a derived class, override the corresponding method - for example, OnLoad. The second approach provides for greater flexibility because you can decide whether and when to call the base method, which, in the end, fires the event to the user code. Let's review in detail the various phases in the page life cycle: Last Updated: June 2, 2003
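To illustrate the second approach described above, a derived page class can override OnLoad and decide exactly when the base implementation - and therefore the Load event - runs. The class and page content below are hypothetical:
public class MyPage : System.Web.UI.Page
{
    protected override void OnLoad(EventArgs e)
    {
        // Custom code that runs before the Load event is raised
        Response.Write("Entering OnLoad<br>");

        // Calling the base method is what fires Load to user code;
        // omitting or reordering this call changes when that happens
        base.OnLoad(e);
    }
}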
http://www.microsoft.com/mspress/books/sampchap/6667.aspx
crawl-003
refinedweb
3,086
50.02
Hello. I have already mentioned this here -> but as I'm sure now that this is a bug I thought I'd file a bug report.
Issue: Build scripts don't work on files which contain non-ascii characters.
Reproduce: create a file test.py. Paste this code into it:
print 'hello world!'
Press f7 and it'll print "hello world!" to the console. Now, rename it to "tést.py", "täst.py" or anything else containing "strange" chars. When trying to build now, an error will show in the Sublime console:
Traceback (most recent call last):
  File "./sublime_plugin.py", line 325, in run_
  File "./exec.py", line 124, in run
UnicodeEncodeError: **'ascii' codec can't encode character** u'\xe9' in position 53: ordinal not in range(128)
This is quite annoying to me as german speaker as I often have umlauts in path names. It'd not be that much to fix it I guess just add some "u"s somewhere in the right place Thanks
I suppose that the exec command must be modified with something like:
class ExecCommand(sublime_plugin.WindowCommand, ProcessListener):
    def run(self, cmd = [], file_regex = "", line_regex = "", working_dir = "",
            encoding = "utf-8", env = {}, quiet = False, kill = False,
            # Catches "path" and "shell"
            **kwargs):
        cmd = [c.encode(sys.getfilesystemencoding()) for c in cmd]
        ...
At least it works that way on Windows 7, not sure for other OS.
Thanks very much. Works now under Linux, too. OSX should work then too, I guess.
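For anyone wondering why the suggested change helps: the failure is the implicit ascii encoding Python 2 applies to unicode strings, and encoding with the filesystem encoding avoids it. A quick console illustration (output shown for a UTF-8 system):
>>> path = u"t\xe9st.py"
>>> path.encode("ascii")                        # what effectively happens today
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 1: ordinal not in range(128)
>>> import sys
>>> path.encode(sys.getfilesystemencoding())    # the proposed fix
't\xc3\xa9st.py'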
https://forum.sublimetext.com/t/bug-build-scripts-dont-work-with-unicode-paths/5824/3
CC-MAIN-2016-30
refinedweb
242
76.11
mm ); Since: BlackBerry 10.0. POSIX defines the following: - MAP_PRIVATE - MAP_SHARED - MAP_FIXED The following are Unix or BlackBerry 10 OS, or a physical address (e.g. for mapping a device's registers in a resource manager). If you want to map a device's physical memory, use mmap_device_memory() instead of mmap().. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The mmap() and mmap64() functions map a region within the object specified by filedes, beginning at off and continuing for len, into the caller's address space and returns the location. The mmap64() function is a 64-bit version of mmap(). The object that you map from can be one of the following: - a file, opened with open() - a shared memory object, opened with shm_open() - a typed memory object, opened with posix_typed_mem_open() - physical memory — specify NOFD for fildes If fildes isn't NOFD, you must have opened the file descriptor for reading, no matter what value you specify for prot; write access is also required for PROT_WRITE if you haven't specified MAP_PRIVATE. The mapping is as shown below. BlackBerry 10 OS. - MAP_FIXED - Map the object to the address specified by addr. If this area is already mapped, the call changes the existing mapping of the area. - In order to use MAP_FIXED, your process must have the PROCMGR_AID_MAP_FIXED ability enabled. For more information, see procmgr_ability(). -. For anonymous shared memory objects (those created via mmap() with MAP_ANON | MAP_SHARED and a file descriptor of -1), a MAP_LAZY flag implicitly sets the SHMCTL_LAZY flag on the object (see shm_ctl()). - MAP_NOINIT - Relax the POSIX requirement that the memory be zeroed.. If the same region in a file is mapped twice, once with MAP_NOSYNCFILE and once without, the memory manager might not be able to tell whether a change was made through the MAP_NOSYNCFILE mapping or not, and thus write out changes that weren't intended. -). - In order to use MAP_PHYS, your process must have the PROCMGR_AID_MEM_PHYS ability enabled. For more information, see procmgr_ability(). - You should use mmap_device_memory() instead of MAP_PHYS, unless you're allocating physically contiguous memory. - BlackBerry 10 OS Programmer's Guide. Specifying the MAP_SYSRAM bit in a call to mmap() will result in an error of EINVAL. If fildes represents a typed memory object opened with either the POSIX_TYPED_MEM_ALLOCATE or POSIX_TYPED_MEM_ALLOCATE_CONTIG flag (see posix_typed_mem_open()), and there are enough resources available, mmap() maps len bytes allocated from the corresponding typed memory object that weren't previously allocated to any process in any processor that may access that typed memory object. If there aren't enough resources available, mmap() fails. If fildes represents a typed memory object opened with the POSIX_TYPED_MEM_ALLOCATE_CONTIG flag, the allocated bytes are contiguous within the typed memory object. If the typed memory object wase opened with POSIX_TYPED_MEM_ALLOCATE, the allocated bytes may be composed of noncontiguous fragments within the typed memory object. If the typed memory object was opened with neither of these flags, len bytes starting at the given offset within the typed memory object are mapped, exactly as when mapping a file or shared memory object. 
In this case, if two processes map an area of typed memory using the same offset and length and using file descriptors that refer to the same memory pool (either from the same port or from a different port), both processes map the same region of storage. Errors: - EACCES - One of the following occurred: - The file descriptor in fildes isn't open for reading. - You specified PROT_WRITE and MAP_SHARED, and fildes isn't open for writing. - You specified PROT_EXEC for a memory-mapped file mapping, the file doesn't have execute permission for the client process, and procnto was started with the -mX option. -: - Addresses in the range [off, off+len) are invalid for the object specified by fildes. -. - EPERM - The calling process doesn't have the required permission (see procmgr_ability()), or it attempted to set PROT_EXEC for a region of memory covered by an untrusted memory-mapped file. Examples: Open a shared memory object and share it with other processes: fd = shm_open( "/datapoints", O_RDWR, 0777 ); addr = mmap( 0, len, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0 ); Allocate a physically contiguous DMA buffer for a bus-mastering PCI network card: addr = mmap( 0, 262144, PROT_READ|PROT_WRITE|PROT_NOCACHE, MAP_PHYS|MAP_ANON, NOFD, 0 ); Map a file into memory, change the memory, and then verify that the file's contents have been updated: #include <stdlib.h> #include <stdio.h> #include <sys/mman.h> #include <fcntl.h> #include <unistd.h> #include <string.h> #define TESTSTRING "AAAAAAAAAA" int main(int argc, char *argv[]) { char buffer[80], filename[200] = "/tmp/try_it"; int fd, file_size, ret, size_written, size_read; void *addr; /* Write the test string into the file. */ unlink( filename); fd = open( filename, O_CREAT|O_RDWR , 0777 ); if( fd == -1) { perror("open"); exit(EXIT_FAILURE); } size_written = write( fd, TESTSTRING, sizeof (TESTSTRING) ); if ( size_written == -1 ){ perror("write"); exit(0); } printf( "Wrote %d bytes into file %s\n", size_written, filename ); lseek( fd, 0L, SEEK_SET ); file_size = lseek( fd, 0L, SEEK_END ); printf( "Size of file = %d bytes\n", file_size ); /* Map the file into memory. */ addr = mmap( 0, file_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0 ); if (addr == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); } /* Change the memory and synchronize it with the disk. */ memset( addr, 'B', 5 ); ret = msync( addr, file_size, MS_SYNC); if( ret == -1) { perror("msync"); exit(0); } /* Close and reopen the file, and then read its contents. */ close(fd); fd = open( filename, O_RDONLY); if( fd == -1) { perror("open"); exit(EXIT_FAILURE); } size_read = read( fd, buffer, sizeof( buffer ) ); printf( "File content = %s\n", buffer ); close(fd); return EXIT_SUCCESS; } Classification: mmap() is POSIX 1003.1 MF|SHM|TYM; mmap64() is Large-file support Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/m/mmap.html
CC-MAIN-2016-30
refinedweb
962
52.39
BizTalk 2004 has great Web Service support; you can create an orchestration as usual and then expose it for invocation as a Web Service by using the Biztalk Web Services Publishing Wizard which is located on the Tools Menu inside VS.NET, and you can of course consume Web Services from within an orchestration. To expose an orchestration as a Web Service you run through the Wizard and select the Port(s) in your orchestration that you want to expose via a Web Service, and the magic happens – Note the “Create BizTalk Receive Locations” option, if you check this it will create the Receive Locations (visible with BizTalk Explorer) for you which is handy. This works really well in almost all circumstances, if you however have multiple receive ports within your Orchestration you’ll see that they are exposed as separate Web Services (within the same IIS virtual directory), this works just fine but it’s a bit neater (in my view) to perhaps have one Web Service that exposes a method per port, instead of the consumer having to use two or more Web Services endpoints. The Wizard doesn’t allow you to configure the generated web service to enable this, but it’s pretty straight forward to do: · Create the ASP.NET Web Service that you wish to expose to your consumers; this will become the consumers interface to your BizTalk Orchestration(s). Create the methods that you require · Use the Biztalk Web Services Publishing Wizard tool to publish all of the relevant ports from your Orchestration. Do not check the “Create BizTalk Receive Locations”The Web Services Publishing Wizard will create a new Virtual Directory in Internet Information Server, and this will contain a new Web Service per Receive port. · A Web Service consists of two files: <Name>.asmx and a <Name>.asmx.cs file. The ASMX file is the Web Service endpoint and the CS file is the C# code relating to it. · If you inspect the .asmx.cs files you will see that they all belong to the same namespace, and contain one method each which contains the code to pass the parameters to the Orchestration. · To combine the methods you must manually copy of the methods from the ASMX.CS files into one ASMX.CS file. You can rename this ASMX file to a more suitable name if you wish · If you now retrieve the WSDL for the Web Service containing all of the methods you will see that two methods are exposed.e.g:http://<YourServerName>/<YourVirtualDirectory>/<YourWebServiceName>.asmx?wsdl · The Web Service now exposes all of the ports via individual methods inside one Web Service, you must now configure a BizTalk receive port to accept SOAP messages from this Web Service. Otherwise BizTalk will reject the messages as the sender is unrecognized. · In the BizTalk Explorer, Create a new Receive Port, Select the Port Type as being a Request-Receive port. Within the Properties for this Receive port select Microsoft.BizTalk.DefaultPipelines.PassThruTransmit as the Receive PipelineExpand your newly created Receive port and Add a New Receive Location. Choose SOAP as the transport type, and open the dialog to set the Address (URI) for this Transport.Enter the Virtual directory and Web Service name in the textbox provided, e.g. /MyWebService/Service1.asmx. Note – This Web Service name will be the ASMX file that we copied the methods into above · Within the BizTalk explorer you must now Bind the Receive port we’ve created to the Orchestration, ensure that you bind all of the Receive ports on your Orchestration to the same Receive port. Give it a whirl! Trademarks | Privacy Statement
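For what it's worth, the combined service class might end up looking roughly like the sketch below; the class, method and message type names are invented here, and the method bodies are simply the ones copied from the generated <Name>.asmx.cs files:
using System.Web.Services;

public class OrchestrationService : WebService
{
    [WebMethod]
    public void SubmitOrder(OrderDocument msg)
    {
        // body copied verbatim from the first generated .asmx.cs file
    }

    [WebMethod]
    public void SubmitInvoice(InvoiceDocument msg)
    {
        // body copied verbatim from the second generated .asmx.cs file
    }
}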
http://blogs.msdn.com/darrenj/archive/2004/03/30/104021.aspx
crawl-002
refinedweb
609
57.4
Simple Unix tools
From HaskellWiki
Simple 'wc':
  $ cat file.txt | ghc -e 'wc_l' UnixTools.hs
Or, one could define 'main' to be a chosen tool/function (add a line to the effect that "main = wc_l") and then compile the tool with
  $ ghc --make UnixTools.hs
The given Haskell code presents yet a third way of doing things: much like the BusyBox suite of Unix tools, it is possible to compile a single monolithic binary and have it detect what name it is run by and then act appropriately. This is the approach the following code takes: you can compile it and then make symbolic links (like "ln -s UnixTools echo") and then run those commands ("echo foo | ./echo" would produce the output "foo").
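The wiki's full listing is not shown here, but the dispatch-on-program-name idea looks roughly like the following sketch. The tool definitions are illustrative stand-ins, not the wiki's actual code:

  import System.Environment (getProgName)

  wc_l :: String -> String
  wc_l = (++ "\n") . show . length . lines

  echo :: String -> String
  echo = id

  main :: IO ()
  main = do
    name <- getProgName          -- e.g. "echo" when run via the symlink
    interact $ case name of
      "wc_l" -> wc_l
      "echo" -> echo
      _      -> id               -- unknown name: behave like cat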
https://wiki.haskell.org/index.php?title=Simple_Unix_tools&diff=12536&oldid=12483
CC-MAIN-2015-18
refinedweb
135
77.98
- NAME
- VERSION
- DESCRIPTION
- TIPS
- IGNORING THINGS
- SEE ALSO
- AUTHOR

NAME
App::Cmd::Tutorial - getting started with App::Cmd

VERSION
version 0.327

DESCRIPTION
The script is the actual executable file run at the command line. It can generally consist of just a few lines:

  #!/usr/bin/perl
  use YourApp;
  YourApp->run;

The Application Class
All the work of argument parsing, validation, and dispatch is taken care of by your application class. The application class can also be pretty simple, and might look like this:

  package YourApp;
  use App::Cmd::Setup -app;
  1;

The Command Classes
We can set up a simple command class like this:

  # ABSTRACT: set up YourApp
  package YourApp::Command::initialize;
  use YourApp -command;
  1;

Now, a user can run this command, but he'll get an error:

  $ yourcmd initialize
  YourApp::Command::initialize does not implement mandatory method 'execute'

Oops! This dies because we haven't told the command class what it should do when executed. This is easy, we just add some code:

  sub execute {
    my ($self, $opt, $args) = @_;
    print "Everything has been initialized. (Not really.)\n";
  }

Now it works:

  $ yourcmd initialize
  Everything has been initialized. (Not really.)

Default Commands
By default, applications made with App::Cmd know two commands: commands and help.
commands lists available commands:

  $ yourcmd commands
  Available commands:
    commands: list the application's commands
        help: display a command's help screen
        init: set up YourApp

Note that by default the commands receive a description from the # ABSTRACT comment in the respective command's module, or from the =head1 NAME Pod section.
help allows one to query for details on a command's specifics:

  $ yourcmd help initialize
  yourcmd initialize [-z] [long options...]
    -z --zero  ignore zeros

Of course, it's possible to disable or change the default commands; see App::Cmd.

Arguments and Options
In this example

  $ yourcmd reset -zB --new-seed xyzzy foo.db bar.db

-zB and --new-seed xyzzy are "options" and foo.db and bar.db are "arguments." With a properly configured command class, the above invocation results in nicely formatted data:

  $opt = {
    zero      => 1,
    no_backup => 1,       # default value
    new_seed  => 'xyzzy',
  };

  $args = [ qw(foo.db bar.db) ];

Arguments are processed by Getopt::Long::Descriptive (GLD). To customize its argument processing, a command class can implement a few methods: usage_desc provides the usage format string; opt_spec provides the option specification list; validate_args is run after Getopt::Long::Descriptive, and is meant to validate the $args, which GLD ignores. See Getopt::Long for format specifications.
The first two methods provide configuration passed to GLD's describe_options routine. To improve our command class, we might add the following code:

  sub usage_desc { "yourcmd %o [dbfile ...]" }

  sub opt_spec {
    return (
      [ "skip-refs|R",  "skip reference checks during init" ],
      [ "values|v=s@",  "starting values", { default => [ 0, 1, 3 ] } ],
    );
  }

  sub validate_args {
    my ($self, $opt, $args) = @_;

    # we need at least one argument beyond the options; die with that message
    # and the complete "usage" text describing switches, etc
    $self->usage_error("too few arguments") unless @$args;
  }

Global Options
There are several ways of making options available everywhere (globally). This recipe makes local options accessible in all commands.
To add a --help option to all your commands, create a base class like:

  package MyApp::Command;
  use App::Cmd::Setup -command;

  sub opt_spec {
    my ( $class, $app ) = @_;
    return (
      [ 'help' => "this usage screen" ],
      $class->options($app),
    )
  }

  sub validate_args {
    my ( $self, $opt, $args ) = @_;
    if ( $opt->{help} ) {
      my ($command) = $self->command_names;
      $self->app->execute_command(
        $self->app->prepare_command("help", $command)
      );
      exit;
    }
    $self->validate( $opt, $args );
  }

Where options and validate are "inner" methods which your command subclasses implement to provide command-specific options and validation.
Note: this is a new file, previously not mentioned in this tutorial, and this tip does not recommend the use of global_opt_spec, which offers an alternative way of specifying global options.

TIPS
Delay using large modules using Class::Load, Module::Runtime or require in your commands to save memory and make startup faster. Since only one of these commands will be run anyway, there's no need to preload the requirements for all of them.
Add a description method to your commands for more verbose output from the built-in help command.

  sub description {
    return "The initialize command prepares ...";
  }

To let your users configure default values for options, put a sub like this in your commands.
You need to activate strict and warnings as usual if you want them. App::Cmd doesn't do that for you.

IGNORING THINGS
Ignoring a Single Module
This is the simplest approach, and most useful for one-offs.

  package YourApp::Command::foo::NotACommand;
  use YourApp -ignore;

  <whatever you want here>

This will register this package's namespace with YourApp to be excluded from its plugin validation magic. It otherwise makes no changes to ::NotACommand's namespace, does nothing magical with @ISA, and doesn't bolt any hidden functions on.
It's also probably good to notice that it is ignored only by YourApp. If for whatever reason you have two different App::Cmd systems under which ::NotACommand is visible, you'll need to set it ignored in both. This is probably a big big warning NOT to do that.
Ignoring Multiple Modules from the App Level
If you really fancy it, you can override the should_ignore method provided by App::Cmd to tweak its ignore logic. The most useful example of this is as follows:

  sub should_ignore {
    my ( $self, $command_class ) = @_;
    return 1 if not $command_class->isa( 'App::Cmd::Command' );
    return;
  }

This will prematurely mark for ignoring all packages that don't subclass App::Cmd::Command, which causes non-commands (or perhaps commands that are coded wrongly / broken) to be silently skipped.
Note that by overriding this method, you will lose the effect of any of the other ignore mechanisms completely. If you want to combine the original should_ignore method with your own logic, you'll want to steal Moose's around method modifier.

  use Moose::Util;
  Moose::Util::add_method_modifier(
    __PACKAGE__, 'around',
    [ should_ignore => sub {
      my $orig = shift;
      my $self = shift;
      my ( $command_class ) = @_;   # the class being checked
      return 1 if not $command_class->isa( 'App::Cmd::Command' );
      return $self->$orig( @_ );
    }]
  );

SEE ALSO
CPAN modules using App::Cmd

AUTHOR
Ricardo Signes <[email protected]>
This software is copyright (c) 2015 by Ricardo Signes.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
https://metacpan.org/pod/App::Cmd::Tutorial
CC-MAIN-2015-14
refinedweb
1,040
51.89
I've written a lot of blog posts in the past about pair programming and the advantages that I've seen from using this technique, but lately I find myself increasingly frustrated at the need to pair 100% of the time, which happens on most teams I work on. From my experience it's certainly useful as a coaching tool, as I've mentioned ...

Best Of The Week – 2012 – W01
Hello guys, time for the "Best Of The Week" links for the week that just passed. Here are some links that drew Java Code Geeks attention:
* DevOps: What it is, and what it is not: In this article the author tries to demystify DevOps, discussing what it is and what it is not. Also check out Devops has made Release ...

Learning Android: Getting a service to communicate with an activity
In the app I'm working on I created a service which runs in the background away from the main UI thread, consuming the Twitter streaming API using twitter4j. It looks like this:

  public class TweetService extends IntentService {
      String consumerKey = "TwitterConsumerKey";
      String consumerSecret = "TwitterConsumerSecret";

      public TweetService() {
          super("Tweet Service");
      }

      @Override
      protected void onHandleIntent(Intent intent) {
          AccessToken accessToken ...

Java 7: Project Coin in code examples
Though the examples are often exaggerated (for example counting imports), it is true Java programs require ...
http://www.javacodegeeks.com/2012/page/143/
CC-MAIN-2015-14
refinedweb
228
57.5
In this Visual Studio Tools for Office tip I will walk you through how to make a very simple customization to the Word 2007 (or any of these: Visio, PowerPoint, InfoPath, Outlook or Excel) Ribbon menu and add our own button with some code to execute when it's pressed.
Basically, what you need to do before doing anything else is to install and configure the VSTO add-in for Visual Studio. Just search for it and you will find it. Once the prerequisites are installed, we will create a new project and choose the Word template as seen in this screenshot:
What we do next, before doing anything else, is the following: right-click your project in the Solution Explorer, choose "Add" -> "New Item…" and choose the Ribbon component, as seen in the following screenshot:
Once that's done, uncomment the commented code in the new Ribbon1.cs (or whatever you chose to call it). Basically, everything's done now. All we need to do is add some buttons or any other controls we would like to add in order to extend the Ribbon menu. In the following screenshot I've done only minor modifications to the labels and text and I haven't added any other controls, to keep it simple:
Note: I'm defining a method to be executed when the button is toggled with this xml: onAction="OnToggleButton1"
The following code shows a simple method that's implemented by default, with some minor modifications to insert my signature when the button is toggled (best practice would be to use the button control, not the toggleButton; I just chose to do it this way because I couldn't be bothered to change the default). A rough sketch of such a handler appears at the end of this post.
We're done for now. Execute the project and you'll see the following screen with the option to press a toggleButton that will insert my very basic signature into the document:
What I wanted to show with this blog post was simply the fact that it's quite easy to extend the Ribbon to enable the use of your own custom code. The possibilities are almost endless since you get access to the entire .NET Framework via the code and can do basically whatever you want. You can also integrate the menu with SharePoint via the Microsoft.SharePoint namespace; however, if you're going to communicate with the SharePoint API I would suggest making a WebService with the required methods, hosting it on the SharePoint machine and then calling the WebService from your clients.
Note: In order for this to work on the client machines, I think you need to install the VSTO on them as well (note: a full installation of the Office suite will enable this feature if I'm not mistaken). The clients will also need the .NET Framework installed.
That's it for now. I hope someone could enjoy this simple VSTO Tip :) Cheers people!
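For reference, a handler wired up via onAction="OnToggleButton1" can look roughly like the sketch below, placed inside the generated Ribbon1 class (which aliases Office = Microsoft.Office.Core). This is not the code from the screenshot above: the signature text is a placeholder, and the object-model call assumes a document-level Word project, so adjust it if you are building an add-in instead.

  public void OnToggleButton1(Office.IRibbonControl control, bool isPressed)
  {
      if (isPressed)
      {
          // Insert some text at the current cursor position in the document.
          Globals.ThisDocument.Application.Selection.TypeText(
              "Best regards," + Environment.NewLine + "<your signature here>");
      }
  }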
https://zimmergren.net/zimmergren-vsto-tip-1-extend-the-ribbon-menu-in-word-2007/
CC-MAIN-2017-34
refinedweb
485
64.95
Tuist 0.16.0 allows users to link system libraries and frameworks
Hi, Ollie here 👋🏼! Happy Friday! I'm happy to announce the release of Tuist 0.16.0; I'm going to talk through the changes we have made this release and some of the upcoming work we have planned to support some of the newer features announced at this year's WWDC.
Adding support for linking system libraries and frameworks 🏛
Linking against system libraries and frameworks explicitly is sometimes necessary. This is a common use-case when using 3rd-party frameworks such as Firebase. We've added support for a new dependency type, sdk.

Target(name: "App",
       platform: .iOS,
       product: .app,
       bundleId: "io.tuist.App",
       infoPlist: "Info.plist",
       sources: [ "Sources/**" ],
       dependencies: [
           .sdk(name: "CloudKit.framework", status: .required),
           .sdk(name: "StoreKit.framework", status: .optional),
           .sdk(name: "libc++.tbd"),
       ])

Add input & output paths for target action 🎯
If you use tools which need the ability to configure a pre-build or post-build script with input and output files, we have now added support for both.

.pre(path: "my_custom_script.sh",
     name: "My Custom Script Phase",
     inputFileListPaths: [ "Data/Cars.raw.json", "Data/Drivers.raw.json" ],
     outputFileListPaths: [ "Data/Cars.swift", "Data/Drivers.swift" ])

Generate Tuist projects with no build settings 🧬
If you have a custom setup and don't want Tuist to provide any default build settings, then you are now able to specify .none for settings on Project or Target.

import ProjectDescription

let project = Project(name: "MyFramework",
                      settings: Settings(debug: .init(xcconfig: "Configuration/Debug.xcconfig"),
                                         release: .init(xcconfig: "Configuration/Release.xcconfig"),
                                         defaultSettings: .none),
                      targets: [
                          Target(name: "MyFramework",
                                 platform: .iOS,
                                 product: .framework,
                                 bundleId: "io.tuist.MyFramework",
                                 infoPlist: "Sources/Info.plist",
                                 sources: ["Sources/**"],
                                 dependencies: [
                                     .framework(path: "../Framework2/prebuilt/Framework2.framework"),
                                 ]),
                      ])

This will ensure Tuist does not generate a project with any build settings. Be warned: if you do this you will need to ensure you provide some build settings yourself, otherwise it might not build inside Xcode.
Bug Fixes 🐞
We've been really busy squishing bugs and improving the overall stability and experience when using Tuist. Fixing the bugs you find is very important to us and to the future of Tuist, so if you find any bugs please raise an issue.
Code sign frameworks when embedding ✍🏼
Frameworks were not correctly being codesigned when embedded. This caused a bug when trying to build to device: "App installation failed. No code signature found". I was able to figure out where the problem was and include a fix in this release. Thanks to @Rag0n for raising the issue.
Stability for generated projects 🏗
We've been working really hard to stabilize the generated Xcode projects, which is really good news if you check them in, as you will not see changes you didn't intend to make. The previous instability also meant that Xcode could not live-reload the project correctly. Both Kas and Marcing have introduced fixes into this release! 💪🏼
Installing custom tuist builds from source 👷🏼♂️
tuist local was failing to install due to a small bug in the installer still referencing an old compiler flag; luckily I was able to track down the issue and fix it. So if you like living on the edge and using the master branch, then it's all back up and working 👍🏼
And much much more: check out the changelog for the full list of additions, fixes and improvements.
Next up 🕵🏼♂️
- We have started work on adding support for SwiftPM.
- Tuist will soon be able to control the generation of the Info.plist for your project/manifest.
- You will soon be able to visualise your dependencies.
- Join the discussion about how we could support the new .xcframework type.
- We're talking about multi-platform targets.
Thanks, see you next time!
https://tuist.io/blog/2019/06/21/version-0.16.0/
CC-MAIN-2020-29
refinedweb
615
55.54
Making a Ghostbusters Proton WandApril 18, 2020 The Story So Far Last year, my wife and I made a Ghostbusters costume for our (then) 6-year-old, for his kindergarten carnival party. I found this PKE Meter on Thingiverse which I built for this costume and for the proton wand and backpack, I designed, 3D-printed and soldered something myself. I was quite happy with the result. A few weeks ago, my now 7-year-old discovered these toys again which triggered an immediate "Why don't I have that, Daddy!" reaction in my 3-year-old. Needless to say that my "Well, last year you were too young and didn't actually care" response was not received particularly well and I was given a 1-week ultimatum to make him a proton wand as well. So, why not. I dug out my 3D models, notes, sketches and figured that this would be a good opportunity to write this blog post I should have written last year anyway. I was given a 1-week ultimatum by my 3-year-old to make him a proton wand as well. The Parts All the main parts of the proton wand come out of the 3D printer. I designed and made them available on Tinkercad. The only 2 parts I did not design myself and found on Thingiverse, were the heat sink and the regulator knob. These are the parts fresh out of my 3D printer. All the visible parts (except for the handle) got a treatment with black or silver acrylic spray. Part List In addition to those 3D-printed parts, you will need this: - Arduino Nano - USB power bank - 5x WS2812b LEDs (RGB, individually addressable) - Potentiometer - On/Off Switch (with indicator LED) - TM1637 4-digit 7-segment display - Round push button - LED - 330Ω Resistor - On/Off Switch - Piezo Speaker - Cables, wires Power For the power supply I am just using a regular power bank like you would use for charging a phone. This power bank disappears in the backpack and the proton wand is connected to it using a USB cable coming out of the handle. On/Off Switch The main on/off switch sits on top of the wand's casing and interrupts the Arduino's power supply. I chose an extra fancy switch for that, that comes with a safety cap which must be opened first. This makes activating the wand look really legit and serious - my kids love that feature. Also, this switch has a built-in led that indicates whether the wand is on or off. Trigger Button The fire/trigger button is also integrated in the proton wand's handle and is a simple push-button connected to the Arduino's digital pin 2 (see FIRE_BUTTON_PIN in Firmware). Intensity Control As soon as the trigger button is active, another component comes into play which is the intensity regulator. This is a simple potentiometer connected to the Arduino's analog pin ( A1). The value set by this potentiometer gets displayed on the 4-digit-7-segment display. It also affects the proton wand's light and sound output to make this toy more interactive. Display For displaying the potentiometer's state I used a 4-bit-7-segment LED display that sits in front of the main power switch. Note: I'm using a 5-pin version of this display which is apparently outdated and rather hard to come by. You are more likely to find a 4-pin I2C version of this. This however means that the firmware needs to be updated accordingly. This display uses another 3 digital pins of the Arduino. Light The proton wand's tip is 3D-printed with translucent filament and accommodates 5 individually-addressable WS2812b RGB LEDs. The last LED in this strip is glued flat to the bottom of the tip and the remaining 4 form a circle glued to the inner wall of it. 
In addition to power and ground, this LED strip only needs one other digital pin on the Arduino for its data signal (see LIGHT_PIN in Firmware). Pushing the trigger button activates this light strip, hence creating the firing effect and the potentiometer setting determines its color and intensity. Sound Of course, a toy like this also needs a high-pitched, nerve-wracking sound effect when the trigger button is pressed. This is accomplished by a small piezo speaker that gets connected to pin 9 on the Arduino (see AUDIO_PIN in Firmware). Similar to the light effects, the frequency of the audio signal, is also determined by the potentiometer ("intensity control"). However, even more important than having sound-effects, is the ability to turn them off again from time to time (you're welcome fellow parents). For this, another on/off switch is added to the side of the proton wand's main casing. This button's state is read via digital pin 7 on the Arduino (see AUDIO_ONOFF_BTN_PIN in Firmware). Another LED connected to pin 8 (see AUDIO_ONOFF_LED_PIN) visualizes the on/off state off the audio. Assembling The Parts After soldering and gluing all parts together this is nearly finished. After Cramming all the cables in, closing the main case up and attaching the last parts (knob and heatsink), this is how the end-result looks like. Firmware Following Arduino sketch is the proton wand's "firmware". Most part of this code just deals with showing numbers on the 4-digit-7-segment display and displaying sound in real-time. The "actual logic" is only a few lines of code in the main function that: - Reads the potentiometer and calculates an intensity setting ( level) from it - Checks if the wand is firing ( digitalRead(FIRE_BUTTON_PIN)) - If yes: - Activates the RGB LEDs ( strip.setPixelColor) - Checks is sound is enabled ('digitalRead(AUDIO_ONOFF_BTN_PIN`): - If yes: - Plays sound ('NewTone`) - If no: - Turn off LEDs ( strip.clear()) Note: For controlling the RGB LEDs, the Arduino 3rd party library Adafruit_NeoPixelis used. This is the full source code of the sketch. #include <Adafruit_NeoPixel.h> #define LIGHT_PIN 10 #define FIRE_BUTTON_PIN 2 #define NUM_DIGITS 4 #define SHIFTCLOCK 6 #define LATCHCLOCK 4 #define DISPLAYOUT 3 #define AUDIO_ONOFF_BTN_PIN 7 #define AUDIO_ONOFF_LED_PIN 8 #define AUDIO_PIN 9 Adafruit_NeoPixel strip(5, LIGHT_PIN, NEO_GRB + NEO_KHZ800); const uint32_t WHITE = strip.Color(255, 255, 255); const uint32_t colors[] = { strip.Color( 0, 0, 255), // blue strip.Color( 0, 255, 0), // green strip.Color(255, 255, 0), // yellow strip.Color(255, 165, 0), // orange strip.Color(255, 0, 0), // red }; byte printDigits[NUM_DIGITS] = {0}; void setupDisplay() { // configure outputs: pinMode(SHIFTCLOCK, OUTPUT); pinMode(LATCHCLOCK, OUTPUT); pinMode(DISPLAYOUT, OUTPUT); // setup timer code // Timer 2 - gives us the display segment refresh interval TCCR2A = 0; // reset Timer 2 TCCR2B = 0; TCCR2A = bit (WGM21) ; // configure as CTC mode // 16 MHz clock (62.5 nS per tick) - prescaled by 256 // counter increments every 16 µS. // There are 8 segments and 4 digits to scan every 1/60th of a second // so we count 32 of them at 60Hz, giving a display refresh interval of 512 µS. #define scanRateHz 60 // tested, maximum that works is 88 using Wire library and Serial #define NUM_DIGITS 4 #define displayScanCount 1000000L / NUM_DIGITS / 8 / scanRateHz / 16 OCR2A = displayScanCount; // count up to 32 @ 60Hz // Timer 2 - interrupt on match (ie. 
every segment refresh interval) TIMSK2 = bit (OCIE2A); // enable Timer2 Interrupt TCNT2 = 0; // counter to zero // Reset prescalers GTCCR = bit (PSRASY); // reset prescaler now // start Timer 2 TCCR2B = bit (CS21) | bit (CS22) ; // prescaler of 256 } void setup() { setupDisplay(); strip.begin(); pinMode(FIRE_BUTTON_PIN, INPUT_PULLUP); pinMode(AUDIO_ONOFF_BTN_PIN, INPUT_PULLUP); pinMode(AUDIO_ONOFF_LED_PIN, OUTPUT); } uint16_t currentPixel = 0; uint32_t levelToColor(uint32_t level) { auto index = map(level, 0, 100, 4, 0); return colors[index]; } void loop() { int level = analogRead(A0); level = min(100, max(0, map(level, -1, 1024, 0, 101))); auto currentDelay = max(20, level); bool soundOn = false; bool isFiring = (digitalRead(FIRE_BUTTON_PIN) == 0); bool isAudioOn = (digitalRead(AUDIO_ONOFF_BTN_PIN) == 0); digitalWrite(AUDIO_ONOFF_LED_PIN, isAudioOn); if (isFiring) { if (isAudioOn) { NewTone(AUDIO_PIN, 2500 - level, currentDelay / 2); } strip.setPixelColor(4, currentPixel % 2 == 0 ? levelToColor(level) : 0); for (uint16_t i = 0; i < 4; i++) { strip.setPixelColor(i, i == currentPixel ? levelToColor(level) : 0); } currentPixel = (currentPixel + 1) % 4; if (currentPixel % 2 == 0) { strip.setPixelColor(currentPixel, WHITE); } strip.show(); } else { strip.clear(); strip.show(); } delay(currentDelay); showNumber(1000 - level * 10); } void showNumber(int val) { for (int i = 3; i >= 0; i--) { printDigits[i] = val % 10; val /= 10; } } // // display update ISR // // every time this routine is called, 16 bits are shifted into the display // and latched. // The first 8 bits is the segment data, and first 4 bits of the second byte are // the segment select. ISR (TIMER2_COMPA_vect) { // segment list to make a seven segment font const byte NUM_PLUS_SYMBOL_FONT[] = { 0b00111111, // 0 0b00000110, // 1 0b01011011, // 2 0b01001111, // 3 0b01100110, // 4 0b01101101, // 5 0b01111101, // 6 0b00000111, // 7 0b01111111, // 8 0b01101111, // 9 }; static byte digit = 0; static byte segment = 0x80; byte tempDigit = printDigits[digit]; // send segment data shiftOut(DISPLAYOUT, SHIFTCLOCK, MSBFIRST, ~(segment & ((NUM_PLUS_SYMBOL_FONT[tempDigit & 0x7f]) | (tempDigit & 0x80))) ); // send digit select data shiftOut(DISPLAYOUT, SHIFTCLOCK, MSBFIRST, 8 >> digit); // data is now in the display shift register, so latch to LEDs digitalWrite(LATCHCLOCK, LOW); digitalWrite(LATCHCLOCK, HIGH); // increment variables to select the next segment and possibly the next digit: // segment = segment >> 1; if (segment == 0) { segment = 0x80; digit++; if (digit >= NUM_DIGITS) { digit = 0; } } } unsigned long _nt_time; // Time note should end. uint8_t _pinMask = 0; // Pin bitmask. volatile uint8_t *_pinOutput; // Output port register void NewTone(uint8_t pin, unsigned long frequency, unsigned long length) { uint8_t prescaler = _BV(CS10); // Try using prescaler 1 first. unsigned long top = F_CPU / frequency / 4 - 1; // Calculate the top. if (top > 65535) { // If not in the range for prescaler 1, use prescaler 256 (61 Hz and lower @ 16 MHz). prescaler = _BV(CS12); // Set the 256 prescaler bit. top = top / 256 - 1; // Calculate the top using prescaler 256. } if (length > 0) _nt_time = millis() + length; else _nt_time = 0xFFFFFFFF; // Set when the note should end, or play "forever". if (_pinMask == 0) { _pinMask = digitalPinToBitMask(pin); // Get the port register bitmask for the pin. _pinOutput = portOutputRegister(digitalPinToPort(pin)); // Get the output port register for the pin. 
uint8_t *_pinMode = (uint8_t *) portModeRegister(digitalPinToPort(pin)); // Get the port mode register for the pin. *_pinMode |= _pinMask; // Set the pin to Output mode. } ICR1 = top; // Set the top. if (TCNT1 > top) TCNT1 = top; // Counter over the top, put within range. TCCR1B = _BV(WGM13) | prescaler; // Set PWM, phase and frequency corrected (ICR1) and prescaler. TCCR1A = _BV(COM1B0); TIMSK1 |= _BV(OCIE1A); // Activate the timer interrupt. } void noNewTone(uint8_t pin) { TIMSK1 &= ~_BV(OCIE1A); // Remove the timer interrupt. TCCR1B = _BV(CS11); // Default clock prescaler of 8. TCCR1A = _BV(WGM10); // Set to defaults so PWM can work like normal (PWM, phase corrected, 8bit). *_pinOutput &= ~_pinMask; // Set pin to LOW. _pinMask = 0; // Flag so we know note is no longer playing. } ISR(TIMER1_COMPA_vect) { // Timer interrupt vector. if (millis() >= _nt_time) noNewTone(0); // Check to see if it's time for the note to end. *_pinOutput ^= _pinMask; // Toggle the pin state. } Let's Hunt Some Ghosts All done - my little Ghostbusters approve!
https://wolfgang-ziegler.com/Blog/ghostbusters-proton-wand
CC-MAIN-2020-45
refinedweb
1,815
53.31
We can observe that a valid B string (valid meaning it is a substring of A * x) has the following form:

[S] + A * x + [P]

where S and P indicate an optional suffix and prefix of A at the beginning and end respectively.

We can conclude that if A is not in B, then a valid B would look like:

[S] + [P] => S + P or S or P

Thus if A is not in B we only need to check whether B is in A (which checks S or P) or in A * 2 (which checks S + P).

If A is in B, then our valid string looks as mentioned above:

[S] + A * x + [P]

Thus we can find the first occurrence of A, use that to find the potential end of A * x, and then add to the count for each tail on either end. After obtaining the count through this logic, we return it if B is indeed a substring of A repeated count times, and -1 otherwise.

def repeatedStringMatch(self, A, B):
    if not B:
        return 0
    if A not in B:
        # either S + P or S or P
        return B in A and 1 or (B in A * 2 and 2 or -1)
    start = B.index(A)
    cnt = (len(B) - start) // len(A)
    end = start + cnt * len(A)
    cnt += (start != 0) + (end != len(B))
    return B in A * cnt and cnt or -1
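As a quick sanity check, assuming the method lives on the usual LeetCode Solution class, a couple of hypothetical inputs behave like this:

# A = "abcd", B = "cdabcdab": B spans a suffix of A, one full copy of A,
# and a prefix of A, so 3 repetitions of A are needed.
print(Solution().repeatedStringMatch("abcd", "cdabcdab"))   # -> 3
# B = "cdabcdacb" can never be a substring of repeated "abcd".
print(Solution().repeatedStringMatch("abcd", "cdabcdacb"))  # -> -1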
https://discuss.leetcode.com/topic/106025/python-82ms-with-optimized-concatenation
CC-MAIN-2018-05
refinedweb
230
67.56
[ ] Peter Klügl closed UIMA-2206. ----------------------------- Resolution: Fixed Restructuring and renaming is done. Basic functionality is working correctly. Further restructuring will be performed when adapting the projects to maven. > Further restructuring and renaming of the TextMarker projects > ------------------------------------------------------------- > > Key: UIMA-2206 > URL: > Project: UIMA > Issue Type: Task > Components: TextMarker > Reporter: Peter Klügl > Assignee: Peter Klügl > Attachments: TextMarker.zip > > > The TextMarker projects should be further restructured, especially the dltk plugins should be merged. That will ease the migration to the maven build process. The engine project should then be separated to a normal java project and a fetcher plugin. This will be done not until the migration to maven. > This restructuring enforces some renaming. Therefore, also additional renaming is performed in this task. All projects will be adapted to the UIMA convention for eclipse plugins. The namespaces will be unified to org.apache.uima.textmarker.* and the mention of dltk will be replaced by ide. -- This message is automatically generated by JIRA. For more information on JIRA, see:
http://mail-archives.us.apache.org/mod_mbox/uima-dev/201108.mbox/%3C146451208.32940.1313150607190.JavaMail.tomcat@hel.zones.apache.org%3E
CC-MAIN-2019-43
refinedweb
165
61.43
From: Brian Barrett (bbarrett_at_[hidden]) Date: 2007-09-17 10:33:21 On Sep 9, 2007, at 10:28 AM, Foster, John T wrote: > I'm having trouble configuring Open-MPI 1.2.4 with the Intel C++ > Compiler v. 10. I have Mac OS X 10.4.10. I have succesfully > configured and built OMPI with the gcc compilers and a combination > of gcc/ifort. When I try to configure with icc/icpc it gives me an > error saying I have a bad C++ compiler and it failed to compile a > simple program. I have compiled many simple test programs with > icpc with no error. The config.log and output of configure > (config.out) files are attached. Thanks in advance for any help. I think there is actually something wrong with your environment. Here's the error from icpc for the test that's failing: configure:21042: icpc -o conftest -DNDEBUG conftest.cc >&5 /usr/local/include/c++/4.2.0/ext/atomicity.h(51): error: identifier "__sync_fetc h_and_add" is undefined { return __sync_fetch_and_add(__mem, __val); } Do you have a GCC 4.2 install in /usr/local? It looks like that might be causing problems in this case. Based on the tests run, it appears that icpc is able to compile C++ code as long as the STL headers aren't included. The test that failed was a fairly trivial one: #define _GNU_SOURCE 1 #include <string> int main () { std::string foo = "Hello, world" ; return 0; } Hope this helps, Brian -- Brian W. Barrett Networking Team, CCS-1 Los Alamos National Laboratory
http://www.open-mpi.org/community/lists/users/2007/09/4026.php
CC-MAIN-2014-35
refinedweb
260
67.35
Implementing scheduling in Spring Boot In this tutorial, we are going to look at how we can schedule tasks to be executed in the future in spring boot applications. Spring provides very convenient and painless ways of scheduling tasks. To enable scheduling in your spring boot application, you add @EnableScheduling annotation in any of your configuration files. This annotation does not require any additional dependencies. @SpringBootApplication @EnableScheduling public class MyApplication { public static void main(String[] args) { SpringApplication.run(MyApplication.class, args); } } To schedule tasks, you can use the @Scheduled annotation for simple tasks or use a programmable api for more complex tasks. The @Scheduled annotation allows one to schedule tasks at a fixed rate, fixed Delay or using cron expressions. You can add an initial delay to both fixed rate and fixed delay. Methods annotated using @Scheduled annotation don’t accept any parameters and return void. Fixed Rate: — this option is used when you want to execute a task a fixed interval of time. e.g The code below will be executed after every 1 second. @Scheduled(fixedRate = 1000) public void scheduleTaskWithFixedRate(){ logger.info("I run after every second."); } The tasks are executed in a single thread, so if u had overlapping tasks, the code will be run after the previous one completes. You can use @Async annotation to run tasks in parallel. You can read more about @Async annotation in this guide. Fixed Delay: — this option is used to run tasks after a fixed period from the execution of the last method. fixedRate specifies the interval between method invocations, measured from the start time of each invocation. fixedDelay specifies the interval between invocations measured from the completion of the task. The code below will run after 1s. Then after it completes, it will be scheduled to run again in 1s time. @Scheduled(fixedDelay = 1000) public void scheduledTaskWithFixeDelay(){ logger.info("I run after 1 second after the completion of the previous task."); } Adding an initial delay. @Scheduled(initialDelay=2000, fixedDelay = 1000) public void scheduledTaskWithInitialDelay(){ logger.info("Task with initial delay");} The task will be executed a first time after the initialDelay of 2 seconds and then it will continue to be executed according to the fixedDelay of 1 second. For more complex tasks, for example, you want to run some task at the last Sunday of every month or the first day of the month at 10:15am, cron expressions are more convenient and come in handy. The cron job below will run at 10:15am at the first of every month. @Scheduled(cron="0 15 10 1 * *") public void scheduleTaskWithCronExpression(){ logger.info("Task Using cron expression");} To schedule a task to run on every last Sunday of the month at 2.15PM, u would use the cron expression below. @Scheduled(cron="0 15 14 ? * 1L *") public void scheduleTaskWithCronExpression(){ logger.info("Task using cron expression");} Writing cron expressions can be complex and error prone. It is also disastrous since there is no simple way to test. There are good free tools to generate this cron expressions on the web. I personally use . There is also. Cron expressions generated by crontab guru have 5 parameters instead of the 6 required by spring cron expression. The seconds part of the cron expression is usually not included as per the time of writing this tutorial. So remember to add it otherwise you will get compile time errors. 
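For instance, a hypothetical 5-field expression such as 0 9 * * MON (every Monday at 09:00) produced by one of those generators needs a leading seconds field before Spring will accept it; the method name and log message below are just placeholders:

  @Scheduled(cron = "0 0 9 * * MON")  // second minute hour day-of-month month day-of-week
  public void scheduleTaskEveryMondayMorning() {
      logger.info("I run every Monday at 09:00.");
  }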
There is also a good resource to learn about cron expressions by Baeldung at. If you have more complex scheduling operations that can’t be handled the methods we have discussed above, you can schedule tasks programmatically using the TaskScheduler interface provided by spring. This interface has the following methods. that accepts a Runnable and a date only. Apart from it, the rest of the methods are capable of executing periodic tasks. All the methods provided by this interface can easily be implemented using the @Scheduled annotation and therefore don’t provide much flexibility. However, ScheduledFuture schedule(Runnable task, Trigger trigger) which accepts a Trigger is much more flexible and ideal for complex tasks. Let’s illustrate this using the example below. Suppose you have a requirement where you want to send a notification to particular group of users to file their report 3 days before the end of every month. Different months have different days making it hard to use cron expression for this task. Let’s create a class called FileReportNotificationTask that implements Runnable interface and contains the business logic of sending the notification. Next, lets create a class called AppSchedulerConfigurer where the `magic ` happens. I will explain the code in the next step. This class is a spring configuration class used to configure scheduling tasks. It implements SchedulingConfigurer interface that has only method, configureTasks that accepts ScheduledTaskRegistrar class that enables us to add trigger tasks. It contains lots of other useful methods for configuring tasks but lets focus on this particular one for our current scenario. The addTriggerTask(Runnable task, Trigger trigger). This method accepts same parameters as schedule method provided by TaskScheduler. We could have as well used used this method directly as shown here. @Autowired private TaskScheduler taskScheduler; taskScheduler.schedule(runnableTask, trigger); Using addTriggerTask provides us with the advantage that the schedule task will be automatically called by spring and we don’t have to do it ourselves. In our implementation, we passed FileReportNotificationTask as our runnable task and created an inner class for the Trigger.The runnable has the business logic we would like to execute in the future. The Trigger enables us to define a date in the future when we would like to execute the task. It provides a simple interface which defines one method nextExecutionTime that accepts TriggerContext interface and returns a Java Date when we would like to execute our code public interface Trigger { Date nextExecutionTime(TriggerContext triggerContext); } The TriggerContext encapsulates all of the relevant data. It provides the following methods to get when we last ran our scheduled task. This is important as we can schedule next execution dates based on last execution dates. public interface TriggerContext { Date lastScheduledExecutionTime(); Date lastActualExecutionTime(); Date lastCompletionTime(); } These methods are self explanatory. In code above, we use lastScheduledExecutionTime() which returns last scheduled execution time of the task or null if not scheduled before The nextExecutionTime method should return the Date when want to run our scheduled task. In our implementation, if the lastScheduledExecutionTime() is null, we return the third last day of the current month since we don’t want to skip the current month. Otherwise, the next execution time will be the third last day of the next month. 
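A minimal sketch of what the AppSchedulerConfigurer described above might look like follows. The class and task names come from the text; the 09:00 send time, the helper method, and the Spring 5-style Trigger lambda returning a java.util.Date are illustrative assumptions rather than the article's exact code.

  import java.time.LocalDate;
  import java.time.LocalDateTime;
  import java.time.ZoneId;
  import java.time.temporal.TemporalAdjusters;
  import java.util.Date;

  import org.springframework.context.annotation.Configuration;
  import org.springframework.scheduling.annotation.SchedulingConfigurer;
  import org.springframework.scheduling.config.ScheduledTaskRegistrar;

  @Configuration
  public class AppSchedulerConfigurer implements SchedulingConfigurer {

      @Override
      public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
          taskRegistrar.addTriggerTask(
                  new FileReportNotificationTask(),   // the Runnable with the notification logic (assumed no-arg constructor)
                  triggerContext -> {
                      Date last = triggerContext.lastScheduledExecutionTime();
                      if (last == null) {
                          // never scheduled before: don't skip the current month
                          return thirdLastDayOf(LocalDate.now());
                      }
                      LocalDate lastDate = last.toInstant()
                              .atZone(ZoneId.systemDefault()).toLocalDate();
                      return thirdLastDayOf(lastDate.plusMonths(1));
                  });
      }

      // Third-last day of the month containing 'reference', at 09:00 local time.
      private Date thirdLastDayOf(LocalDate reference) {
          LocalDate lastDay = reference.with(TemporalAdjusters.lastDayOfMonth());
          LocalDateTime runAt = lastDay.minusDays(2).atTime(9, 0);
          return Date.from(runAt.atZone(ZoneId.systemDefault()).toInstant());
      }
  }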
The two methods that compute the next execution dates (one possible shape is sketched above) use the Java LocalDateTime classes to simplify calculating the next dates. If you don't want the method to execute in the future, you can return null and your runnable task won't be scheduled. Hope this case scenario gave you an understanding of how to implement slightly complicated scheduling tasks using the Spring TaskScheduler and Triggers. We could as well go on and discuss how to run your scheduled tasks in multiple threads, but I will leave this out for another tutorial. Lastly, if you have more complicated tasks that you would like to persist across server reboots, in case your server goes down while running your scheduled tasks, or you want to run your scheduled tasks in a cluster of servers, you can consider using the Quartz open source scheduling framework for Java. It offers great flexibility without sacrificing simplicity and integrates well with Spring. Here is a link to their official website. Hope this guide was useful and helpful in understanding how to enable and implement scheduling in Spring Boot in various ways. Thanks for taking your time to read this article. Cheers.
https://samuel-mumo.medium.com/one-stop-guide-for-implementing-scheduling-in-spring-boot-660dec88a25e?source=post_internal_links---------7-------------------------------
CC-MAIN-2022-05
refinedweb
1,298
55.74
Hi, I am trying to reimplement a DL model in nengo_dl and struggle with getting weight decay / l2 regularisation to work. In order to implement it I have to add the l2norm of the weights to the loss during training. I am using tf.layers.dense layers in my network and tensorflow conveniently allows for a regularisation parameter in the layer. But what it does is to register the l2norm of the weights as a collection which then has to be manually added to the loss. Now, the regular loss I define as an objective which I pass to the sim.train function of nengo_dl. The problem I have is that train loop seems to be executed in a different tf.session then the network construction so that I cannot get access to the l2loss in my objective. Has someone dealt with this issue and can give me a hint on how to get around this issue? Maybe I am just not completely getting the tf.sessions mechanism, but in any case any advice would be very welcome. Using weight decay Sorry for the slow response, I missed your question earlier. Here’s a simple example of one method for applying L2 weight regularization (I tried to make it as TensorFlow-like as possible so it would look familiar; you’d do things differently if you wanted to regularize parameters in a more Nengo-style network). import nengo import numpy as np import nengo_dl import tensorflow as tf with nengo.Network() as net: stim = nengo.Node([1]) class DenseLayer(object): def pre_build(self, shape_in, shape_out): self.W = tf.get_variable( "weights", shape=(shape_in[1], shape_out[1]), regularizer=tf.contrib.layers.l2_regularizer(0.1)) def __call__(self, t, x): return tf.matmul(x, self.W) a = nengo_dl.TensorNode(DenseLayer(), size_in=1, size_out=10) nengo.Connection(stim, a) p = nengo.Probe(a) with nengo_dl.Simulator(net) as sim: def my_objective(outputs, targets): return (tf.reduce_mean(tf.square(outputs - targets)) + tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)[0]) for _ in range(10): sim.train({stim: np.ones((1, 1, 1))}, {p: np.ones((1, 1, 10))}, tf.train.GradientDescentOptimizer(1.0), objective=my_objective) print(np.linalg.norm(sim.sess.run(a.tensor_func.W))) You should see the norm of the weights decreasing every time we call sim.train (note that normally you’d just call sim.train with n_epochs=10 for a case like this, but I separated it out so that you can see the weights going down with each training iteration). The main complication you’ll note is that we have to define our own layer function instead of using tf.layers.dense. This doesn’t actually have anything to do with the regularization, it’s just a feature of the tf.layers implementation. tf.layers combines the creation of the parameter variables (e.g. connection weights) and the application of those parameters to some input x in a single step. Normally this is convenient, and works great. However, when you use tf.layers inside a tf.while_loop (which we do in NengoDL, since we want to simulate models over time), this puts the parameters inside the while loop scope. Again, normally this is fine, unless you also want to use those parameters outside the while loop scope (e.g., when doing something like computing the L2 regularization loss). TensorFlow doesn’t let you use things from inside the while loop scope on the outside. So, long story short, you can’t use tf.layers functions to do regularization if you’re using them inside a tf.while_loop. So that’s why we create our own version of tf.layers.dense, the DenseLayer TensorNode. 
The key feature of this is that we can separate out the parameter creation (inside the pre_build function, which will happen outside the tf.while_loop scope) from the application of those parameters (inside the __call__ function). And that lets us use tf.contrib.layers.l2_regularizer even though we’ll be running things in a while loop. (note that I didn’t include biases in DenseLayer, just to keep things simple, but they’d work in the same way). Also just to clarify this specifically, the training does run in the same Session/ Graph as the network construction. However, you need to be careful about what the current “default graph” is when you’re working with TensorFlow, as many functions operate with respect to that default graph. For example, tf.get_collection returns items from the current default graph (I’m guessing this is what you were running in to). Inside the nengo_dl.Simulator scope, the default graph is set to the correct NengoDL graph. But outside that scope we don’t control the default session, so it will depend on what else is going on in your script. I.e. tf.get_collection() # <--- returns items from the default graph, whatever that is with nengo_dl.Simulator(net) as sim: tf.get_collection() # <--- returns items from the nengo_dl.Simulator graph
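As an aside on the bias remark above, extending the DenseLayer from the example to include biases would look roughly like this. It is a sketch rather than code from the original post, and the zero initializer is an arbitrary choice:

  class DenseLayer(object):
      def pre_build(self, shape_in, shape_out):
          self.W = tf.get_variable(
              "weights", shape=(shape_in[1], shape_out[1]),
              regularizer=tf.contrib.layers.l2_regularizer(0.1))
          self.b = tf.get_variable(
              "biases", shape=(1, shape_out[1]),
              initializer=tf.zeros_initializer())

      def __call__(self, t, x):
          # broadcast the bias across the batch dimension
          return tf.matmul(x, self.W) + self.b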
https://forum.nengo.ai/t/using-weight-decay/576
CC-MAIN-2018-51
refinedweb
826
59.8
If you know it is the birthday of a friend, Emily, you might tell those gathered with you to sing “Happy Birthday to Emily”. We can make C# display the song. Read, and run if you like, the example program birthday1/birthday1.cs: using System; class Birthday1 { static void Main () { Console.WriteLine ("Happy Birthday to you!"); Console.WriteLine ("Happy Birthday to you!"); Console.WriteLine ("Happy Birthday, dear Emily."); Console.WriteLine ("Happy Birthday to you!"); } } Here the song is just a part of the Main method that is in every program. Note that we are using a function already provided to us, Console.WriteLine. We can use it over and over, wherever we like. We can alter its behavior by including a different parameter. Now we look further at writing and using your own functions. If we want this song to be just part of a larger program, and be able to refer to it repeatedly and easily, we might like to package it separately. You would probably not repeat the whole song to let others know what to sing. You would give a request to sing via a descriptive name like “Happy Birthday to Emily”. In C# we can also give a name like HappyBirthdayEmily, and associate the name with whole song by using a new function definition, also called a method. We will see many variations on method definitions. Later we will see definitions that are attached to a particular object. For now the simpler cases do not involve creating a type of object, but there is an extra word, static, needed to distinguish a function definition not attached to on object. We will also shortly look at functions more like the functions from math class, that produce or return a value. In this simple case we will not deal with returning a value. This also requires a special word in the heading: void. A void function will just be a shorthand name for something to do, a procedure to follow, in this case printing out the Happy Birthday song for Emily. (Note that the Main method for a program is also static void. This does your whole program and is not attached to an object.) Read for now: There are several parts of the syntax for a function definition to notice: Line 5: The heading starts with static void, the name of the function, and then parentheses. A more general syntax for functions that just do something is static voidFunctionName () { } Recall the conventions in Syntax Template Typography. Lines 6-11: The remaining lines form the function body. They are enclosed in braces. By convention the lines inside the braces are indented by a consistent amount. Three spaces is a common indentation. The whole definition does just that: defines the meaning of the name HappyBirthdayEmily, but it does not do anything else yet - for example, the definition itself does not yet make anything be printed. This is our first example of altering the order of execution of statements from the normal sequential order. This is important: the statements in the function definition are not executed as C# first passes over the lines. The only part of a program that is automatically executed is Main. Hence Main better refer to the newly defined function.... Look at the first statement inside Main, line 15: HappyBirthdayEmily(); Note that the static void of the function definition is missing, but we still have the function name and parentheses. When Main is running, C# goes back and looks up the definition, and only then, executes the code inside the function definition. The term for this action is a function call or function invocation. 
In this simple situation the format is FunctionName ().
While the convention for variable identifiers is to start with a lowercase letter, the convention for function names is to start with a capital letter. Hence HappyBirthdayEmily, not happyBirthdayEmily.
Can you predict what the program will do? Note the two function calls to HappyBirthdayEmily. To see, load and run birthday2/birthday2.cs.
The execution sequence for the program is different from the textual sequence. Execution always starts in Main: the first statement in Main calls HappyBirthdayEmily, so execution jumps to the body of that definition and prints the song; control then returns to Main, the second call repeats the same jump and return, and when the end of Main is reached, execution stops. Functions alter execution order in several ways: by statements not being executed as the definition is first read, and then, when the function is called during execution, by jumping to the function code and back at the end of the function execution.
Understanding the jumping around in the code with function calls is crucial. Be sure you follow the sequence detailed above. In particular, be sure to distinguish function definition from function call.
If it also happens to be Andre's birthday, we might define a function HappyBirthdayAndre, too. Think how to do that before going on ....
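One possible answer to that exercise, a sketch that simply mirrors HappyBirthdayEmily:

   static void HappyBirthdayAndre ()
   {
      Console.WriteLine ("Happy Birthday to you!");
      Console.WriteLine ("Happy Birthday to you!");
      Console.WriteLine ("Happy Birthday, dear Andre.");
      Console.WriteLine ("Happy Birthday to you!");
   }

and then, inside Main, add the call

   HappyBirthdayAndre ();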
http://books.cs.luc.edu/introcs-csharp/functions/firstfunc.html
CC-MAIN-2019-09
refinedweb
786
65.12
Add Amplitude session tracking support to your applications via this plugin for Analytics-Swift. Note that this plugin simply adds session data for Amplitude, and events are sent via Cloud Mode.
In the Xcode File menu, click Add Packages. You'll see a dialog where you can search for Swift packages. In the search field, enter the URL to this repo. You'll then have the option to pin to a version, or a specific branch, as well as which project in your workspace to add it to. Once you've made your selections, click the Add Package button.
Open your Package.swift file and add the following to the dependencies section:

.package(
    name: "Segment",
    url: "
    from: "1.1.3"
),

Open the file where you set up and configure the Analytics-Swift library. Add this plugin to the list of imports.

import Segment
import SegmentAmplitude // <-- Add this line

Just under your Analytics-Swift library setup, call analytics.add(plugin: ...) to add an instance of the plugin to the Analytics timeline.

let analytics = Analytics(configuration: Configuration(writeKey: "<YOUR WRITE KEY>")
    .flushAt(3)
    .trackApplicationLifecycleEvents(true))
analytics.add(plugin: AmplitudeSession())

Your events will now be given Amplitude session data and start flowing to Amplitude via Cloud Mode.
Please use GitHub issues, Pull Requests, or feel free to reach out to our support team. Interested in integrating your service with us? Check out our Partners page for more details.
MIT License Copyright (c) 2021.
https://swiftpack.co/package/segment-integrations/analytics-swift-amplitude
CC-MAIN-2022-21
refinedweb
250
59.6
To delete a file use the C++ remove() function. I've made a program that creates a log of the activities performed in it, for easy reference for the user. But I want the program to automatically delete the file after the program is shut down. What is the syntax and how do I go about doing it? If you're writing a Windows application, try the DeleteFile function available in winbase.h: BOOL DeleteFile( LPCTSTR lpFileName // pointer to name of file to delete ); the program was written to run in command prompt/dos, and I still can't get it to work. any other suggestions? Say I tried to do it with windows, where would I specify the file I wanted to be deleted with the syntax you just gave me? thanks Bob :) You're welcome. Now, please post again. That 666 (posts) is making me uneasy ;-) [img][/img] Oh, goodness gracious! I guess it would be most appropriate to make this my 667th post then. Are those suppose to be horns? They look like donkey ears. An easy way to delete a file is to just call a system command from Windows. dafile.txt is the file you want to delete. #include <stdlib.h> // needed to use system() function int main() { system("del dafile.txt"); return 0; } Ebil cscgal :twisted: Anyway, how would I do that so it deleted the file at closing? What do you mean at closing? When you close your program? If so, just make it one of your last lines. Use the remove function ..ok BOB.. but how do i use it ,, syntax etc... please bob, it's an emergancy. In header <cstdio>: int remove(const char* filename) Probably too late for you by now, as you needed an answer urgently. I hope you had the sense to look up remove() in your C++ book, or do a Google search. Ebil cscgal :twisted: Anyway, how would I do that so it deleted the file at closing? you can register a function that will be called when the program terminates with the atexit() function. this func could tehn call remove() on the file. what is the difference between remove and del ? Go easy on me. I'm trying to teach myself c++ and its a slow go for me. Just me, my book, and Google. FYI... im wanting to write a program that goes through and deletes files of a certain extension. (sort of a janitor utility) some proprietary software makes these files and will eventually fill up a hard drive. i just want a simple program I can run to clean the directory. Thanks ...
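Putting the suggestions in this thread together, a minimal sketch of the original "delete the log when the program shuts down" idea might look like this. The file name is just an example, and this only covers normal termination:

  #include <cstdio>   // std::remove
  #include <cstdlib>  // std::atexit

  void deleteLog()
  {
      std::remove("activity_log.txt");  // returns 0 on success; check it if you care
  }

  int main()
  {
      std::atexit(deleteLog);           // register cleanup for normal program exit
      // ... the program writes its activity log here ...
      return 0;                         // deleteLog() runs during shutdown
  }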
https://www.daniweb.com/programming/software-development/threads/510/syntax-for-deleting-specified-file-in-c
CC-MAIN-2017-43
refinedweb
440
84.88