text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
βŒ€
lang
stringclasses
4 values
source
stringclasses
4 values
Hello Experts, We are trying import data from flat file. We have 'Account' dimension member in column number 5. And on the basis of this we want to decide the value of View dimension. For example, If Account numbers are starting from 1, 2 or 3 then View will be 'YTD' otherwise for rest of the members View will be 'Periodic'.. In order to achieve this, we have written the below script and associated it with View dimension in Import Format. def ISMapView_All(strField,strRecord): strAccount = strField if strAccount[0:1] == "1": fdmResult = "YTD" return fdmResult elif strAccount[0:1] == "2": fdmResult = "YTD" return fdmResult elif strAccount[0:1] == "3": fdmResult = "YTD" return fdmResult else: fdmResult = "Periodic" return fdmResult But Import process itself is getting failed. Am I missing anything in above code? Any guess? Kindly suggest. Regards Nishant I don't see why you even need this script. Why don't you just map it using standard wildcard maps? Hi All, Its done. I can able to import data. As Jython scripting is a very case sensitive, need to have a script with proper indentation. Indentation was not properly given was facing issue. Thanks Nishant
https://community.oracle.com/thread/4034879
CC-MAIN-2018-26
en
refinedweb
US7840614B2 - Virtual content repository application program interface - Google PatentsVirtual content repository application program interface Download PDF Info - Publication number - US7840614B2US7840614B2 US10618494 US61849403A US7840614B2 US 7840614 B2 US7840614 B2 US 7840614B2 US 10618494 US10618494 US 10618494 US 61849403 A US61849403 A US 61849403A US 7840614 B2 US7840614 B2 US 7840614B2 - Authority - US - Grant status - Grant - Patent type - - Prior art keywords - content - repositories - repository - node - from the following application, which is hereby incorporated by reference in its entirety: SYSTEM AND METHOD FOR VIRTUAL CONTENT REPOSITORY, U.S. Provisional Patent Application Ser. No. 60/449,154, Inventors: James Owen, et al., filed on Feb. 20, 2003, SYSTEMS AND METHODS FOR PORTAL AND WEB SERVER ADMINISTRATION, U.S. Provisional Patent Application No. 60/451,174, Inventors: Christopher Bales, et al., filed on Feb. 28, 2003. This application is related to the following co-pending applications which are each hereby incorporated by reference in their entirety: FEDERATED MANAGEMENT OF CONTENT REPOSITORIES, U.S. application Ser. No. 10/618,513, Inventors: James Owen, et al., filed on Jul. 11, 2003, now U.S. Pat. No. 7,293,286, issued Nov. 6, 2007. VIRTUAL REPOSITORY CONTENT MODEL, U.S. application Ser. No. 10/618,519, Inventors: James Owen, et al., filed on Jul. 11, 2003, now U.S. Pat. No. 7,483,904, Jan. 27, 2009. VIRTUAL REPOSITORY COMPLEX CONTENT MODEL, U.S. application Ser. No. 10/618,380, Inventor: James Owen et al., filed on Jul. 11, 2003, now U.S. Pat. No. 7,415,478, issued Aug. 19, 2008. SYSTEM AND METHOD FOR A VIRTUAL CONTENT REPOSITORY, U.S. application Ser. No. 10/618,495, Inventors: James Owen, et al., filed on Jul. 11, 2003. SYSTEM AND METHOD FOR SEARCHING A VIRTUAL REPOSITORY CONTENT, U.S. application Ser. No. 10/619,165, Inventor: Gregory Smith, filed on Jul. 11, 2003. VIRTUAL CONTENT REPOSITORY BROWSER, U.S. application Ser. No. 10/618,379, Inventors: Jalpesh Patadia et al., filed on Jul. 11, 2003, now U.S. Pat. No. 7,562,298, issued Jul. 14, 2009. The present invention disclosure relates to content management, and in particular, a system and method for integrating disparate content repositories. Content repositories manage and provide access to large data stores such as a newspaper archives, advertisements, inventories, image collections, etc. A content repository can be a key component of a Web application such as a Web portal, which must quickly serve up different types of content in response to a particular user's requests. However, difficulties can arise when trying to integrate more than one vendor's content repository. Each may have its own proprietary application program interface (API), conventions for manipulating content, and data formats. Performing a search across different repositories, for example, could require using completely different search mechanisms and converting each repository's search results into a common format. Furthermore, each time a repository is added to an application, the application software must be modified to accommodate these. Various embodiments will be illustrated in terms of exemplary classes and/or objects in an object-oriented programming paradigm. It will be apparent to one skilled in the art that the present invention can be practiced using any number of different classes/objects, not merely those included here for illustrative purposes. Furthermore, it will also be apparent that the present invention is not limited to any particular software programming language or programming paradigm. A virtual or federated content repository (hereinafter referred to as β€œVCR”) 100 is a logical representation of one or more individual content repositories 108 such that they appear and behave as a single content repository from an application program's standpoint. This is accomplished in part by use of an API (application program interface) 104 and an SPI (service provider interface) 102. An API describes how an application program, library or process can interface with some program logic or functionality. By way of a non-limiting illustration, a process can include a thread, a server, a servlet, a portlet, a distributed object, a web browser, or a lightweight process. An SPI describes how a service provider (e.g., a content repository) can be integrated into a system of some kind. SPI's are typically specified as a collection of classes/interfaces, data structures and functions that work together to provided a programmatic means through which a service can be accessed and utilized. By way of a non-limiting example,, etc. In one embodiment, the API presents a unified view of all repositories to application programs and enables them to navigate, perform CRUD (create, read, update, and delete) operations, and search across multiple content repositories as though they were a single repository. Content repositories that implement the SPI can β€œplug into” the VCR. The SPI includes a set of interfaces and services that repositories can implement and extend including schema management, hierarchy operations and CRUD operations. The API and SPI share a content model 106 that represents the combined content of all repositories 108 as a hierarchical namespace of nodes (or hierarchy). Given a node N, nodes that are hierarchically inferior to N are referred to as children of N whereas nodes that are hierarchically superior to N are referred to as parents of N. The top-most level of the hierarchy is called the federated root. There is no limit to the depth of the hierarchy. In one embodiment, content repositories can be children of the federated root. Each content repository can have child nodes. Nodes can represent hierarchy information or content. Hierarchy nodes serve as a container for other nodes in the hierarchy akin to a file subdirectory in a hierarchical file system. Content nodes can have properties. In one embodiment, a property associates a name with a value of some kind. By way of a non-limiting illustration, a value can be a text string, a number, an image, an audio/visual presentation, binary data, etc. Either type of node can have a schema associated with it. A schema describes the data type of one or more of a node's properties. Referring again to In one embodiment, the API can include optimizations to improve the performance of interacting with the VCR. One or more content caches 216 can be used to buffer search results and recently accessed nodes. Content caches can include node caches and binary caches. A node cache can be used to provide fast access to recently accessed nodes. A binary cache can be used to provide fast access to the data associated with each node in a node cache. The API can also provide a configuration facility 214 to enable applications, tools and libraries to configure content caches and the VCR. In one embodiment, this facility can be implemented as a Java Management Extension (available from Sun Microsystems, Inc.). Exemplary configuration parameters are provided in Table 1. In one embodiment, content and hierarchy nodes can be represented by a Node 402 (or node). A node has a name, an id, and can also include a path that uniquely specifies an the node's location in the VCR hierarchy. By way of a non-limiting example, the path can be in a Unix-like directory path format such as β€˜/a/b/c’ where β€˜/’ is a federated root, β€˜a’ is a repository, β€˜b’ is a node in the β€˜a’ repository, and β€˜c’ is the node's name. The Node class provides methods by with a node's parent and children can be obtained. This is useful for applications and tools that need to traverse the VCR hierarchy (e.g., browsers). Nodes can be associated with zero or more Property 404 objects (or properties). A property can have a name and zero or more values 406. In one embodiment, a property's name is unique relative to the node to which the property is associated. A Value 406 can represent any value, including but not limited to binary, Boolean, date/time, floating point, integer or string values. If a property has more than one value associated with it, it is referred to as β€œmulti-valued”. A node's properties can be described by a schema. A schema can be referred to as β€œmetadata” since it does not constitute the content (or β€œdata”) of the VCR per se. Schemas can be represented by an ObjectClass 408 object and zero or more PropertyDefinition 410 objects. An ObjectClass has a schema name that uniquely identifies it within a content repository. A node can refer to a schema using the ObjectClass name. In another embodiment, a content node can define its own schema by referencing an ObjectClass object directly. In one embodiment, there is one PropertyDefinition object for each of a node's associated Property objects. PropertyDefinition objects define the shape or type of properties. Schemas can be utilized by repositories and tools that operate on VCRs, such as hierarchical browsers. By way of a non-limiting example, a hierarchy node's schema could be used to provide information regarding its children or could be used to enforce a schema on them. By way of a further non-limiting example, a VCR browser could use a content node's schema in order to properly display the node's values. In one embodiment, a PropertyDefinition can have a name and can describe a corresponding property's data type (e.g., binary, Boolean, string, double, calendar, long, reference to an external data source, etc.), whether it is required, whether it is read-only, whether it provides a default value, and whether it specifies a property choice type. A property choice can indicate. PropertyChoice objects 412 can be associated with a PropertyDefinition object to define a set of value choices in the case where the PropertyDefinition is restricted. A choice can be designated as a default value, but only one choice can be a default for a given PropertyDefinition. A PropertyDefinition object may also be designated as a primary property. By way of a non-limiting example, when a schema is associated with a node, the primary property of a node can be considered its default content. The isPrimary( ) method of the PropertyDefinition class returns true if a PropertyDefinition object is the primary PropertyDefinition. By way of a further non-limiting example, if a node contained a binary property to hold an image, it could also contain a second binary property to represent a thumbnail view of the image. If the thumbnail view was the primary property, software applications such as browser could display it by default. A ticket can utilize a user's credentials to authorize a service. In one embodiment, a ticket can be the access point for the following service interfaces: NodeOps 508, ObjectClassOps 506, and SearchOps 510. An application program can obtain objects that are compatible with these interfaces through the API RepositoryManager class. The NodeOps interface provides CRUD methods for nodes in the VCR. Nodes can be operated on based on their id or through their path in the node hierarchy. Table 2 summarizes NodeOp class functionality exposed in the API. As with the NodeOps service, there is one SPI ObjectClassOps object per repository and a single API ObjectClassOps object. The API ObjectClassOps object maps requests to one or more SPI ObjectClassOps which in turn fulfill the requests using their respective repositories. Through this service, ObjectClass and PropertyDefinition objects can be operated on based on their id or through their path in the node hierarchy. Table 3 summarizes ObjectClassOps class functionality exposed in the API. As with the NodeOps and ObjectClassOps services, there is one SPI SearchOps object per repository and a single API SearchOps object. The API SearchOps object maps requests to one or more SPI SearchOps which in turn fulfill the requests using their respective repositories. Among other things, the SearchOps services allows applications and libraries to search for properties and/or values throughout the entire VCR. In one embodiment, searches can be conducted across all Property, Value, BinaryValue, ObjectClass, PropertyChoice and PropertyDefinitions objects in the VCR. Search expressions can include but are not limited to one or more logical expressions, Boolean operators, nested expressions, object names, function calls, mathematical functions, mathematical operators, string operators, image operators, and Structured Query Language (SQL). Table 4 summarizes SearchOps class functionality exposed in the API..
https://patents.google.com/patent/US7840614B2/en
CC-MAIN-2018-26
en
refinedweb
In this post I'm drilling into some of the details of OpenShift from the perspective of the JBossAS7 cartridges that were created for Express and Flex. The basic notion in terms of providing a PaaS container is that of a cartridge. A cartridge plugs functionality into the PaaS environment, and is responsible for handling cartridge callouts known as hooks. The hooks are what handle container specific details of installing/starting/stopping/removing PaaS applications that rely on a given container type. A PaaS application may use more than one cartridge as part of the application. One example of this usage for the JavaEE applications the JBossAS7 cartridge supports would be a MySQL cartridge that provides a MySQL database for use by the application. Let's look at what the JBossAS7 cartridge does in the two environments. Express The Express environment is oriented toward a developer getting their application running. It runs with a more limited JBossAS7 server as I described in JBossAS7 Configuration in OpenShift Express. The focus is on a single node, git repository development model where you update your application in either source or binary form and push it out to the OpenShift environment to have an application running quickly. There is little you can configure in the environment in terms of the server the app runs on, and the only access to the server you have is through the command line tools and the application git repository. The Express cartridge framework supports the following cartridge hooks. The hooks highlighted in bold are the only ones the JBossAS7 cartridge provides an implementation for: - add-module - configure - This is where most of the work is done. It is a bash script which creates an application local JBossAS7 instance with it's standalone/deployments directory mapped to the user's git repository deployments content. It creates the git repository with git hooks to build and restart the server if a source development model is in effect, sets up a control shell script which handles the real work for the start/stop/restart/status hooks, links the log files to where the Express framework picks them up, updates the standalone.xml with the loopback address assigned to the application, and installs an httpd configuration to proxy the external application url to the JBossWeb container. This also starts the JBossAS7 server. - deconfigure - Removal of application and it's setup - info - post-install - post-remove - pre-install - simply checks that the java-1.6.0-openjdk httpd rpm packages are installed - reload - remove-module - restart - start - This is a simple bash script which calls out to the control shell script to start the server. This ends up calling the application's JBossAS7 bin/standalone.sh to launch the server. - status - Checks if the server is running and if so, returns the tail of the server.log. If the server is not running, reports that as the status. - stop - This is a simple bash script which calls out to the control shell script to stop the server. - update_namespace The git repository for the application contains some configuration and scripts that can be updated to control your application deployment on the server. I'll talk about those in a seperate blog entry. Flex The Flex framework is not surprisingly, much more flexible with respect to what you can control in the PaaS environment. You have control over cluster definitions, and other IaaS aspects in addition to your PaaS containers. - configure - This is a Python class where the initial setup of the application specific JBossAS7 instance is done. It lays down the JBossAS7 structure. There is integration with the Flex console configuration wizard which displays the MySQL datasource fields as well. - deconfigure - Removal of application and it's setup - post-install - Integrates the JBossWeb instrumentation module that allows tracking of web requests by Flex - start - This is a bash script finishes some configuration details like determining which port offset to use . As I described in this post Differences Between the Express and Flex JBossAS7 Configurations, Flex and Express differ in how they isolate the JBossAS7 instances. In Flex, a port offset is determined at startup time based on the existence of other http listening ports. The start script also links the standalone/deployments directory to the application git repository, and well as the log file location the Flex console looks to, and installs an httpd configuration to proxy the external application url to the JBossWeb container. - stop - This calls out to the bin/jboss-admin.sh script to shutdown the server, using the port offset information to determine how to connect to the server. Future (Codename TBD) Right now the Express/Flex environments are based on very different internal infrastructures, and even though they build on the concept of a cartridge, the implementations are different. This is not a good thing for many reasons, not the least of which is that it complicates opening up development to a wider community. To address this, the OpenShift architecture is moving to a public, open source development mode that will be hosted on github under the following Organization: The new project is called Codename TBD (not really, but it is still being discussed), and the goal is to develop a common cartridge SPI/API and infrastructure to address the current duplication of effort and limitations. To that end, I invite you to browse the existing code and docs in the github organization repositories, as well as the OpenShift Community pages. We are looking for feedback from both the end user PaaS developer as well as PaaS container providers. My involvement will be from the perspective of what PaaS notions can be pushed as standards for consieration in JavaEE 7 and 8.
https://developer.jboss.org/blogs/scott.stark/2011/08/10/openshift-expressflex-cartridge-comparision
CC-MAIN-2018-26
en
refinedweb
To quote directly from the remctl documentation: β€œremctl (the client) and remctld (the server) implement a client/server protocol for running single commands on a remote host using Kerberos v5 authentication and returning the output” I have been intending to find the chance to try out remctl for a while now as it looks like it could be very useful. In particular it should allow us to run nagios passive checks (e.g. for disk space usage) in a secure manner. It could also provide an improved method for remotely executing commands compared to the current way β€œom” just does a login using ssh. Simon had already written an LCFG component which supported a lot of the necessary configuration so I took this work and finished it by adding support for command ACLs. To install it onto a server you now just need: #include <lcfg/options/remctld.h> On the client you will need at least the remctl package, you might also want the perl module but it’s not essential: !profile.packages mEXTRA(remctl-2.13-1.inf\ remctl-perl-2.13-1.inf) Once you have installed the new packages on the server you will need to start (or restart) the LCFG xinetd component. To get it to do something useful you then need to add some commands, for example: !remctld.aclgroups mADD(foo) !remctld.aclmembers_foo mSET([email protected] [email protected]) !remctld.types mADD(om) !remctld.services_om mSET(ALL) !remctld.exec_om_ALL mSET(/usr/bin/om) !remctld.aclfile_om_ALL mSET(foo) It’s not necessary to use groups of ACLs, you can define lists of allowed and denied users for each command. This approach just allows you to use the same ACL file for multiple commands. To understand all of this requires some reading of the LCFG component docs and the remctl documentation but it’s hopefully fairly clear that this example would allow Simon and I to run om on that machine. Of particular benefit is the ability to allow specific users to run commands on a machine without giving them full shell access but still controlling the access in a secure manner. For example, a user could be allowed to restart a webserver (via om) although not allowed to login.
http://blog.inf.ed.ac.uk/squinney/tag/remctl/
CC-MAIN-2018-26
en
refinedweb
How to create File and directory in Java is probably the first things come to mind when we exposed to the file system from Java. Java provides rich IO API to access contents of File and Directory in Java and also provides lots of utility method to create a file, delete a file, read from a file, and write to file or directory. Anybody who wants to develop an especially after the introduction of java.nio package and concepts like In Memory Files, we will discuss those in probably another blog post but what it confirms is the importance of knowledge of File IO for java programmer. How to Create File and Directory in Java Example What is File in Java An important point to remember is that java.io.File object can represent both File and Directory in Java. You can check whether a File object is a file in filesystem by using utility method isFile() and whether its directory in file system by using isDirectory(). Since File permissions is honored while accessing File from Java, you can not write into a read-only files, there are utility methods like canRead() and CanWrite(). name):it will create file object inside a directory which is passed as the first argument and the file as read only in Java and listing files from to. Further Learning Complete Java Masterclass Java Fundamentals: The Java Language Java In-Depth: Become a Complete Java Engineer! Other Related Java Tutorial 11 comments : A nice post.. Never thought this can also be done.. here is my link on how to read and write into files in java Reading and Writing to Files a Java Tutorial Thanks for you comment keval . Good to know that you like my file creation tutorial. What is worth to remember is that closing any file which has been opened by java Program and handling of File related exception to make your Java program more robust. You may also like my new tutorial on files as Anonymous, thanks for your comment. Indeed using PATH separator is great idea that makes your code to run on both windows and unix, but samples are just to do it quick rather than do it perfectly but yes in production code path should not be hardcoded.? Hi , i need program for below statement , Please help me please send it my email Design & code a very simple in-memory file system and expose the file system operations e.g. create, read, write, list over HTTP. code should: * Be modular and re-usable * Be Easy to test * Demonstrate Object oriented features of the language (for Java) * Demonstrate the use of the right data structures * Demonstrate the use of the right design pattern Really nice and different Technic. Easy to understand each function. thanks. How to write program in java create folder and store multiple files then count how many files are stored in folder import java. io.DataInputStream; import java. io. File; import java. io. Exception ; Class file { public static void main(String args[]) { DataInputStream in=new DataInputStream(System. in); String r="" ; Try { r=in. readLine() ; String s1 =r +".txt"; File f1=new File(s1) ; f1. createNewFile() ; } catch(Exception e) {} } }
https://javarevisited.blogspot.com/2011/12/create-file-directory-java-example.html?showComment=1323177502988
CC-MAIN-2018-26
en
refinedweb
React & Axios JWT Authentication Tutorial with PHP & MySQL Server: Signup, Login and Logout In this tutorial, we'll learn how to use React to build login, signup and logout system and Axios to send API calls and handle JWT tokens. For building the PHP application that implements the JWT-protected REST API, check out PHP JWT Authentication Tutorial. We'll be using the same application built in the previous tutorial as the backend for our React application we'll be building in this tutorial. Prerequisites You will need to have the following prerequisites to follow this tutorial step by step: - Knowledge of JavaScript, - Knowledge of React, - Knowledge of PHP, - PHP, Composer and MySQL installed on your development machine, - Node.js and NPM installed on your system. That's it. Let's get started! Cloning the PHP JWT App Our example application implements JWT Authentication. It exposes three endpoints api/login.php api/register.php api/protected.php How to Run the PHP App First clone the GitHub repository: $ git clone Next, navigate inside the project's folder and run the following commands to install the PHP dependencies and start the development server: $ cd php-jwt-authentication-example $ composer install $ php -S 127.0.0.1:8000 Enabling CORS Since we'll be making use of two frontend and backend apps - The React/Webpack development server and the PHP server which are running from two different ports in our local machine (considered as two different domains) we'll need to enable CORS in our PHP app. Open the api/register.php, api/login.php and api/protected.php files and add the following CORS header to enable any domain to send HTTP requests to these endpoints: <?php header("Access-Control-Allow-Origin: *"); > Installing create-react-app Let's start by installing the create-react-app tool which will be used to create the React project. Open a new terminal and run the following command: $ npm install -g create-react-app create-react-app is the official tool created by the React team to quickly start developing React apps. Creating a React Project Let's now generate our React project. In your terminal, run the following command: $ create-react-app php-react-jwt-app This will generate a React project with a minimal directory structure. Installing Axios & Consuming JWT REST API We'll be using JWT for sending HTTP requests to our PHP JWT REST API so we'll need to install it first. Go back to your terminal and run the following commands to install Axios from npm: $ cd php-react-jwt-app $ npm install axios --save As of this writing, this will install axios v0.18.0. Next, let's create a component that encapsulates the code for communicating with the JWT REST API. In the src/ folder, create an utils folder then create a JWTAuth.js file inside of it: $ mkdir utils $ touch JWTAuth.js Open the src/utils/JWTAuth.js file and add the following code: import axios from 'axios'; const SERVER_URL = ""; We import axios and define the SERVER_URL variable that contains the URL of the JWT authentication server. Next, define the login() method which will be used to log users in: const login = async (data) => { const LOGIN_ENDPOINT = `${SERVER_URL}/api/login.php`; try { let response = await axios.post(LOGIN_ENDPOINT, data); if(response.status === 200 && response.data.jwt && response.data.expireAt){ let jwt = response.data.jwt; let expire_at = response.data.expireAt; localStorage.setItem("access_token", jwt); localStorage.setItem("expire_at", expire_at); } } catch(e){ console.log(e); } } First, we construct the endpoint by concatenating the server URL with the /api/login.php path. Next, we send a POST request to the login endpoint with the data passed as a parameter to the login() method. Next, if the response is successful, we store the JWT token and expiration date in the local storage. Note: Since Axios, returns a Promise, we use the async/awaitsyntax to make our code look synchronous. Next, define the register() method which creates a new user in the database: const register = async (data) => { const SIGNUP_ENDPOINT = `${SERVER_URL}/api/register.php`; try { let response = await axios({ method: 'post', responseType: 'json', url: SIGNUP_ENDPOINT, data: data }); } catch(e){ console.log(e); } } We first construct the endpoint by concatenating the server URL with the /api/register.php path. Next, we use Axios to send a POST request to the register endpoint with the data passed as a parameter to the method. Note: We use the async/await syntax to avoid working with Promises. Finally, let's define the logout() method which simply removes the JWT access token and expiration date from the local storage: const logout = () => { localStorage.removeItem("access_token"); localStorage.removeItem("expire_at"); } We use the removeItem() method of localStorage to remove the access_token and expire_at keys. Now, we need to export these methods so they can be imported from the other React components: export { login, register, logout } Calling the JWTAuth Methods in the React Component Let's now make sure our login system works as expected. Open the src/App.js file and import the register() and logout() methods from the src/utils/JWTAuth.js file: import { login, register, logout } from "./utils/JWTAuth.js"; Next, define a login() method in the App component as follows: class App extends Component { async login(){ let info = { email: "[email protected]", password: "123456789" }; await login(info); } This methods simply calls the login() method of JWTAuth.js with hardcoded user information to log the user in. Next, define the register() method as follows: async register(){ let info = { first_name: "kaima", last_name: "Abbes", email: "[email protected]", password: "123456789" }; await register(info); } Note: We don't need to wrap the logout()method since we don't have to pass any parameters to the method. Finally, update the render() method to create the buttons for login, register and logout: render() { return ( <div className="container"> <div className="row"> <h1>React JWT Authentication Example</h1> <button className="btn btn-primary" onClick = { this.register }>Sign up</button> <button className="btn btn-primary" onClick = { this.login }>Log in</button> <button className="btn btn-primary" onClick = { logout }>Log out</button> </div> </div> ); } You should be able to use these buttons to test the login() and logout() methods. Note: We used Bootstrap for styling the UI. In the next tutorial, we'll build the actual login and register UIs with forms to get the user's information and submit them to the PHP JWT authentication server. Conclusion In this tutorial, we've seen how to implement JWT authentication in React with Axios, PHP and MySQL. Sponsored Links Latest Questions and AnswersWhat Are the New Features of HTML6? What’s the HTML6 Release Date? How to detect route changes with React Router 5 Hooks? What's new with React Router 6? What Are React Router 5 Hooks and How to Use Them By Example? CodeIgniter 4 Is Released, What Are The New features of CodeIgniter 4? What’s New with Laravel 7 and Its Release Date?
https://www.techiediaries.com/react-axios-php-jwt-authentication-tutorial/
CC-MAIN-2020-45
en
refinedweb
. In Device Property Browser (menu: Tools | Device Property Browser...) the stage appears as: alternative ways to home the stage. Choose one of the following: Calibrate command Open Stage Position List dialog and press "Calibrate" button (menu: Tools | XY List...). the script will be automatically executed and you will be prompted to home the stage and warned to move objectives to the safe position. The stage adapter lets us adjust velocity and acceleration to match our application. If you use default values for velocity and acceleration, no further actions are necessary. However if you change velocity/acceleration to slow down the stage, you may run into problems with micro-manager timeouts. The default timeout interval for micro-manager is 5 seconds. If any of the move commands take more than 5 seconds (or whatever you have set instead) to complete, micro-manager will time out you will get an error message to that effect. If you get timeout error messages on lengthy move commands, try changing the micro-manager timeout interval to 10 seconds or more. To change timeout interval open Device Property Browser (menu: Tools | Device Property Browser...) and set the desired value both for the Core-TimeoutMs and XYStage-MoveTimeoutMs properties. Note that change in the Property Browser is not permanent. If you exit the application and re-start these values will return to their defaults. To make settings permanent use System/Startup configuration preset mechanism in micro-mananger. Consult micro-manager documentation for more information. For verification and testing of the stage we can load test configuration file Media:MMConfig_Thorlabs.cfg. This configuration file contains XYStage tied to the com port and a couple of device simulators (demo adapters) acting as demo camera and focus stage. The configuration probably won't work right away because the COM port on your system may be different. To edit the port information use Hardware Configuration Wizard (menu: Tools) as with any other device. After the configuration loaded without errors we can use the following script to test the stage: // Exercise XY stage import java.text.DecimalFormat; // obtain xy stage name xyStage = mmc.getXYStageDevice(); gui.clearOutput(); // report starting position x = mmc.getXPosition(xyStage); y = mmc.getYPosition(xyStage); gui.message("Starting position [um]: " + x + ", " + y); // define test points in um ArrayList xPos = new ArrayList(); ArrayList yPos = new ArrayList(); xPos.add(0.0); yPos.add(0.0); xPos.add(5000.0); yPos.add(30000.0); xPos.add(70000.); yPos.add(18000.0); DecimalFormat FMT2 = new DecimalFormat("#0.0"); for (int i=0; i<xPos.size(); i++) { start = System.currentTimeMillis(); mmc.setXYPosition(xyStage, (double)xPos.get(i), (double)yPos.get(i)); mmc.waitForDevice(xyStage); end = System.currentTimeMillis(); gui.message("Reached point " + i + " at (" + xPos.get(i) + "," + yPos.get(i) + ")" + " in " + (end-start) + " ms"); x = mmc.getXPosition(xyStage); y = mmc.getYPosition(xyStage); gui.message("Current position [um]: " + FMT2.format(x) + ", " + FMT2.format(y)); } Running this script in the Script Panel (menu: Tools | Script Panel...) should produce the following output: Starting position [um]: 70000.0, 18000.0 bsh % Reached point 0 at (0.0,0.0) in 1000 ms bsh % Current position [um]: -0.2, -0.0 bsh % Reached point 1 at (5000.0,30000.0) in 594 ms bsh % Current position [um]: 5000.2, 30000.4 bsh % Reached point 2 at (70000.0,18000.0) in 922 ms bsh % Current position [um]: 69999.8, 18000.0 bsh % If this script does not generate any errors the stage is working properly. Timing on your system may be different, depending on the motion parameteres.
https://micro-manager.org/w/index.php?title=Thorlabs&diff=prev&oldid=5382
CC-MAIN-2020-45
en
refinedweb
Discussion of all things Redux and Firebase (offical room of react-redux-firebase) In my case, I have three levels of sub-collections, like so: /accounts/[email protected]/bank/10003940 It is still returning a null, here is my firestoreConnect request: export default compose( firestoreConnect([ { collection: 'accounts', doc: '[email protected]', subcollections: [{ collection: 'bank', doc: '10003940' }], storeAs: '[email protected]' // make sure to include this } ]), connect(mapStateToProps), )(MyComponent); Could someone please let me know where I am going wrong with the query? Many thanks. @gregfenton Thanks for the reply. Looks like collection is mandatory here, I see the below error with the above code: Error: Collection or Collection Group is required to build query name But after some more research I realized that the below works: firestoreConnect([ { collection: 'accounts', doc: '[email protected]', subcollections: [{ collection: 'bank', subcollections: [{ doc: '10003940' }] }], storeAs: '[email protected]' } ]), But now the problem is I need to iterate over all the sub-collections and the sub-collections within them. For this I need the list of all sub-collections, which apparently Firebase Web SDK does not provide. Option 1: There are alternate solutions as described in Option 2: The alternative is to flatten the hierarchy by having /accounts/${userID}-bank-${bankID}. But this will lead to other issues, which are solvable, but could be expensive. So I am weighing between the above two options. Which one do you recommend? And am I heading in the right direction? Thanks and appreciate your advise. my_top/1234/my_level2/AAAA/my_level3/ACT-123without ever actually creating the my_topcollection. The doc will be created at that path and β€œbe there”, but you can’t navigate to it via the Firebase Console. Thanks @gregfenton I am developing a fintech web-application (using react-js, Firestore and cloud functions) that shows the users their bank details. My web-application is a middleware between the bank and the end-users. The bank's servers "push" data to my cloud functions via web-hooks. I receive the data and update it in Firestore. At this point, there is no way to "push" the data to the end-users (there is the push notification that Firebase provides, but that is not built for this purpose). After some research, I realized that the only way to do this is by registering the client web-application as a listener to Firestore, so that when new data is added to Firestore, the end-users directly receive these updates. I also thought about setting up websockets but then I realized that it is not possible to do it via Firebase. And the only other alternative that I can think of is polling via Rest APIs, which I was trying to avoid as it is not recommended by Firebase. This is how I have arrived at this potential solution using Firestore. banks{ }object (or array) on the /accounts/[email protected] document rather than having a subcollection with separate documents? Thanks @gregfenton . Yes, I thought about that and seems to be the best way forward. If I flatten the "tree" and remove this hierarchy then that will solve this problem. But it is surprising that the cloud functions SDK apparently provides this ability to list all docs in a sub-collection but not the client side web SDK. Anyway, I don't think it is a good design to make multiple round-trip calls to the cloud functions just to get the list of docs. You are right, I think this is the solution to this problem: like, maybe have a banks{ } object (or array) on the /accounts/[email protected] document rather than having a subcollection with separate documents? I need to restructure the way I store the data so that it fits the model you suggested above. Thank you @gregfenton , appreciate your help! This was a great exercise for me where I learnt so many new things about Firestore! [email protected]), you would have an array called banksthat contains information about each bank that you would show on the ACCOUNTS screen banks = [ { id: doc_id_of_bank_1, name: β€˜ABC Bank Inc.’, logoUrl: … }, { id: doc_id_of_bank_2, name: β€œDEF Securities Co.”, logoUrl: … } ]; banksarray on the accounts screen and when the user drills into a specific bank, I would then read the sub-collection and the document that the user is trying to view. import β€˜firebase/firestore’below import firebase from 'firebase/app’would I be able to use firestore as the storage engine for redux-persist that way? Hi folks, I've just drained a day with a react-native error I'm getting in redux-firestore. I'd appreciate any help on this, and if we cannot fix it in a day or so, we're reverting all the redux-firebase integration I just worked so hard on. The error we're seeing in both iOS and Android builds on Circle Fastlane is (view the full build here: > Task :app:bundleReleaseJsAndAssets error Unable to resolve module `babel-runtime/helpers/extends` from `node_modules/redux-firestore/lib/enhancer.js`: babel-runtime/helpers/extends could not be found within the project. Error: Unable to resolve module `babel-runtime/helpers/extends` from `node_modules/redux-firestore/lib/enhancer.js`: babel-runtime/helpers/extends could not be found within the project. If you are sure the module exists, try these steps: If you are sure the module exists, try these steps: 1. Clear watchman watches: watchman watch-del-all 1. Clear watchman watches: watchman watch-del-all 2. Delete node_modules: rm -rf node_modules and run yarn install 2. Delete node_modules: rm -rf node_modules and run yarn install 3. Reset Metro's cache: yarn start --reset-cache 3. Reset Metro's cache: yarn start --reset-cache 4. Remove the cache: rm -rf /tmp/metro-*. Run CLI with --verbose flag for more details. 4. Remove the cache: rm -rf /tmp/metro-* at ModuleResolver.resolveDependency (/home/circleci/mobile/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js:186:15) at ResolutionRequest.resolveDependency (/home/circleci/mobile/node_modules/metro/src/node-haste/DependencyGraph/ResolutionRequest.js:52:18) at DependencyGraph.resolveDependency (/home/circleci/mobile/node_modules/metro/src/node-haste/DependencyGraph.js:287:16) at Object.resolve (/home/circleci/mobile/node_modules/metro/src/lib/transformHelpers.js:267:42) at dependencies.map.result (/home/circleci/mobile/node_modules/metro/src/DeltaBundler/traverseDependencies.js:434:31) at Array.map (<anonymous>) at resolveDependencies (/home/circleci/mobile/node_modules/metro/src/DeltaBundler/traverseDependencies.js:431:18) at /home/circleci/mobile/node_modules/metro/src/DeltaBundler/traverseDependencies.js:275:33 at Generator.next (<anonymous>) at asyncGeneratorStep (/home/circleci/mobile/node_modules/metro/src/DeltaBundler/traverseDependencies.js:87:24) > Task :app:bundleReleaseJsAndAssets FAILED FAILURE: Build failed with an exception. I followed the instructions with watchman del, rm -rf node_modules, etc. with no changes in this error. Here are the versions we're using: npm ls --depth 0 β”œβ”€β”€ @babel/[email protected] β”œβ”€β”€ @babel/[email protected] β”œβ”€β”€ @react-native-community/[email protected] β”œβ”€β”€ @react-native-community/[email protected] β”œβ”€β”€ @react-native-community/[email protected] β”œβ”€β”€ @react-native-community/[email protected] β”œβ”€β”€ @react-native-community/[email protected] β”œβ”€β”€ @react-native-community/[email protected] β”œβ”€β”€ @react-native-community/[email protected] β”œβ”€β”€ @react-native-firebase/[email protected] β”œβ”€β”€ @react-native-firebase/[email protected] β”œβ”€β”€ @react-native-firebase/[email protected] β”œβ”€β”€ @react-native-firebase/[email protected] β”œβ”€β”€ @react-native-firebase/[email protected] β”œβ”€β”€ @react-native-firebase/[email protected] β”œβ”€β”€ @react-native-firebase/[email protected] β”œβ”€β”€ @react-native-firebase/[email protected] β”œβ”€β”€ @react-navigation/[email protected] β”œβ”€β”€ @react-navigation/[email protected] β”œβ”€β”€ @react-navigation/[email protected] β”œβ”€β”€ @react-navigation/[email protected] β”œβ”€β”€ @segment/[email protected] β”œβ”€β”€ @types/[email protected] β”œβ”€β”€ @types/[email protected] β”œβ”€β”€ @types/[email protected] β”œβ”€β”€ @types/[email protected] β”œβ”€β”€ @types/[email protected] β”œβ”€β”€ @types/[email protected] β”œβ”€β”€ @types/[email protected] β”œβ”€β”€ @types/[email protected] β”œβ”€β”€ @types/[email protected] β”œβ”€β”€ @types/[email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ UNMET PEER DEPENDENCY eslint-plugin-standard@>=4.0.0 β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ UNMET PEER DEPENDENCY react-dom@* β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ UNMET PEER DEPENDENCY [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ UNMET PEER DEPENDENCY react-native-web@* β”œβ”€β”€ UNMET PEER DEPENDENCY react-native-windows@>=0.62 β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] β”œβ”€β”€ [email protected] └── [email protected]
https://gitter.im/redux-firebase/Lobby
CC-MAIN-2020-45
en
refinedweb
Difference between revisions of "The CIO Framework - OOP344 20123" Latest revision as of 19:44, 26 November 2012 OOP344 | Weekly Schedule | Student List | Teams | Project | Student Resources Contents - 1 Objective - 2 Tips - 3 File Names - 4 Hierarchy - 5 Student Resources - 6 Issues, Releases and Due Dates - 7 CFrame - 8 CField - 9 CLabel - 10 CDialog - 11 CLineEdit - 12 CButton - 13 CValEdit - 14 CCheckMark - 15 CMenuItem (optional) - 16 CText - 17 CCheckList - 18 CMenu and MNode (optional) Objective Your objective at this stage is to create series of core classes designed to interact with the user. These Core Classes then can be used in development of any interactive application. Please note that the class definitions here are minimum requirement for the Core Classes and you are free to add any enhancements or features you find useful. However make sure that you discuss these enhancements with your team and professor to make sure they are feasible before implementation. It is highly recommended to develop the classes in the order they are stated here. You must create your own tester programs for each class (if possible); However, close to due date of each release, a tester program may be provided to help you verify the functionality of your classes. If tester programs are provided, then executables of the test programs will be available on matrix to show you how it is supposed to run. Tips Start by creating mock-up classes (class declaration and definition with empty methods that only compiles and don't do anything). Each class MUST have its own header file to hold its declaration and "cpp" file to hold its implementation. To make sure you do not do circular includes follow these simple guidelines: - Add recompilation safeguards to all your header files. - Always use forward declaration if possible instead of including a class header-file. - Use includes only in files in which the actual header file code is used. - Avoid "just in case" includes. File Names Use the following rules to create filenames for your class. -. Hierarchy CFrame | |---CDialog | | |---CField | |-------- CLabel | | |-------- CButton | | |-------- CLineEdit | | | |-------CValEdit | |-------- CText | | |-------- CCheckMark | | |-------- CCheckList | | |-------- CMenuItem | | |-------- CMenu Student Resources Help/Questions Hi people! Maybe someone can help me -- I am trying to do the copy constructor in CLabel which needs to copy cFrame but CFrame's attribute char_border[9] doesn't have a setter or a getter so far as I can see -- unless I'm misunderstanding something I need to add a getter/setter pair for char_border to CFrame or add a copy constructor that I could use. Does anyone have any suggestions? We can modify CFrame if we want to, right? Alina - email: ashtramwasser1 I did the CLabel class, for the copy constructor, the base class is CField, the attribute is void* _data, which you can cast to char*. then do deep copy for this data member. Hope this is useful for your question. Yun Yang Blog Posts Issues, Releases and Due Dates Name Format Issue and branch name format: V.V_Name example; issue: Add Text Class to the project (issue 2.9.1) issue and branch name on gitub: 2.9.1_AddTextClass Issues 0.2 Milestone (Due Mon Nov 12th, 23:59) - Add console class to project and test with cio_test (issue 1) - Create Mock-up classes - Create the class files (header and cpp) with blank methods and make sure they compile - CField Mock-up Class (issue 2.1) - CLabel Mock-up Class (issue 2.2) - CDialog Mock-up Class (issue 2.3) - CLineEdit Mock-up Class (issue 2.4) - CButton Mock-up Class (issue 2.5) - CValEdit Mock-up Class (issue 2.6) - CCheckMark Mock-up Class (issue 2.7) - CText - Add Text Class to the project (issue 2.8.1) - CText Mock-up Class (issue 2.8.2) - CCheckList Mock-up Class (issue 2.9) 0.3 Milestone - Due along with 0.4 milestone - CField, Dialog and Label - line Edit 0.4 milestone (Sun Nov 25th. 23:59) - CButton - CValEdit - CCheckMark 0.6 milestone - CText - CheckList CFrame The code for this class is provided in your repository. You must understand and use it to develop your core classes in your repository. CFrame class is responsible to create a frame or structure in which all user interface classes contain themselves in. It can draw a border around it self or be border-less. CFrame also, before displaying itself on the screen, will save the area it is about to cover, so it can redisplay them to hide itself. CFrame is base of all objects in our user interface system. #pragma once #include "cuigh.h" class CFrame{ int _row; // relative row of left top corner to the container frame or the screen if _frame is null int _col; // relative col of left top corner to the container frame or the screen if _frame is null int _height; int _width; char _border[9]; // border characters bool _visible; // is bordered or not CFrame* _frame; // pointer to the container of the frame (the frame, surrounding this frame) char* _covered; // pointer to the characters of the screen which are covered by this frame, when displayed void capture(); // captures and saves the characters in the area covered by this frame when displayed and sets // _covered to point to it void free(); // deletes dynamic memory in the _covered pointer protected: int absRow()const; int absCol()const; public: CFrame(int Row=-1, int Col=-1, int Width=-1,int Height=-1, bool Visible = false, const char* Border=C_BORDER_CHARS, CFrame* Frame = (CFrame*)0); virtual void draw(int fn=C_FULL_FRAME); virtual void move(CDirection dir); virtual void move(); virtual void hide(); virtual ~CFrame(); /* setters and getters: */ bool fullscreen()const; void visible(bool val); bool visible()const; void frame(CFrame* theContainer); CFrame* frame(); void row(int val); int row()const; void col(int val); int col()const; void height(int val); int height()const; void width(int val); int width()const; void refresh(); }; Properties int _row, holds the relative coordinate of top row of this border with respect to its container. int _col, same as _row, but for _col. int _height, height of the entity. int _width, width of the entity. char _border[9], characters used to draw the border: - _border[0], left top - _border[1], top side - _border[2], right top - _border[3], right side - _border[4], right bottom - _border[5], bottom side - _border[6], bottom left - _border[7], left side bool _visible; Indicates if the border surrounding the entity is to be drawn or not. CFrame* _frame; holds the container (another CFrame) which has opened this one (owner or container of the current CFrame). _frame will be NULL if this CFrame does not have a container, in which case, it will be full screen and no matter what the values of row, col, width and height are, CFrame will be Full Screen (no border will be drawn) char* _covered; is a pointer to a character array that hold what was under this frame before being drawn. When the CFrame wants to hide itself, it simple copies the content of this array back on the screen on its own coordinates. Methods and Constructors Private Methods void capture(); - if _covered pointer is not pointing to any allocated memory, it will call the iol_capture function to capture the area that is going to be covered by this frame and keeps its address in _covered. Protected Methods - int absRow()const; calculates the absolute row (relative to the left top corner of the screen) and returns it. - it returns the sum of row() of this border plus all the row()s of the _frames - int absCol()const; calculates the absolute column(relative to the left top corner of the screen) and returns it. - it returns the sum of col() of this border plus all the col()s of the _frames Public Methods CFrame(int Row=-1, int Col=-1, int Width=-1,int Height=-1, bool Visible = false, const char* Border=C_BORDER_CHARS, CFrame* Frame = (CFrame*)0); - Sets the corresponding attributes to the incoming values in the argument list and set _covered to null virtual void draw(int fn=C_FULL_FRAME); - First it will capture() the coordinates it is supposed to cover - If frame is fullscreen() then it just clears the screen and exits. Otherwise: - If the _visible flag is true, it will draw a box at _row and _col, with size of _width and _height using the _border characters and fills it with spaces. Otherwise it will just draw a box using spaces at the same location and same size. virtual void move(CDirection dir); First it will hide the Frame, then adjust the row and col to move to the "dir" direction and then draws the Frame back on screen. virtual void hide(); using iol_restore()it restores the characters behind the Frame back on screen. It will also free the memory pointed by _covered; virtual ~CFrame(); It will make sure allocated memories are freed. bool fullscreen()const; void visible(bool val); bool visible()const; void frame(CFrame* theContainer); CFrame* frame(); void row(int val); int row()const; void col(int val); int col()const; void height(int val); int height()const; void width(int val); int width()const; These functions set and get the attributes of the CFrame. CFrame Help/Blogs CField CField is an abstract base class that encapsulates the commonalities of all Input Outputs Console Fields which are placeable on a CDialog. All Fields could be Framed, therefore a CField is inherited from CFrame. #include "cframe.h" class CDialog; class CField : public CFrame{ protected: void* _data; public: CField(int Row = 0, int Col = 0, int Width = 0, int Height =0, void* Data = (void*) 0, bool Bordered = false, const char* Border=C_BORDER_CHARS); ~CField(); virtual int edit() = 0; virtual bool editable() const = 0; virtual void set(const void* data) = 0; virtual void* data(); void container(CDialog* theContainer); CDialog* container(); }; Attributes void* _data; Will hold the address of any type of data a CField can hold. Constructors and Methods CField(int Row = 0, int Col = 0, int Width = 0, int Height =0, void* Data = (void*) 0, bool Bordered = false, const char* Border=C_BORDER_CHARS); Passes the corresponding attributes to it's parents enforce the children to implement; - an edit() method - an editable() method that returns true if the class is to edit data and false if the class is to only display data. - a set() method to set the _data attribute to the data the class is to work with. virtual void* data(); Complied Object Files - Linux - Mac - Borland C++ 5.5 - Visual C++ 10 - Note. At least with the VS obj files, if you look at cfield.h, the method: virtual void* data(); is now: virtual void* data()const; However, this isn't the case in our header requirements. Noticed this when trying to compile with our header based on this wiki. CLabel A readonly Field that encapsulates console.display() function. (i.e it is responsible to display a short character string on the display) CLabel although, by inheritance is Frame, but it is never bordered. #include "cfield.h" class CLabel : public CField{ // int _length;) Constructors / Destructor CLabel(const char *Str, int Row, int Col, int Len = 0); passes the Row and Col to the CField constructor and then; if len is zero, it will allocate enough memory to store the string pointed by Str and then copies the Str into it. if len > 0, then it will allocate enough memory to store len chars,. ~CLabel(); makes sure that memory pointed by _data is deallocated before the object is destroyed. Methods void draw(int fn=C_NO_FRAME) ; makes a direct call to console.display(), passing _data for the string to be printed and absRow() and absCol() for row and col and _length Complied Object Files = false, const char* Border=C_BORDER_CHARS); virtual ~CDialog(); void draw(int fn = C_FULL_FRAME); int edit(int fn = C_FULL_FRAME); int add(CField* field, bool dynamic = true); int add(CField& field, bool dynamic = false); CDialog& operator<<(CField* field); CDialog& operator<<(CField& field); bool editable(); int fieldNum()const; int curIndex()const; CField& operator[](unsigned int index); CField& curField(); }; } Attributes int _fnum; Holds the number of Fields added to the Dialog bool _editable; will be set to true if any of the Fields added are editable. This is optional because it depends on how you are going to implement the collection of CFields: int _curidx; Holds the index of the Field that is currently being edited. CField* _fld. Constructors/Destructors CDialog(CFrame *Container = (CFrame*)0, int Row = -1, int Col = -1, int Width = -1, int Height = -1, bool Borderd = false, const char* Border=C_BORDER_CHARS); The constructor passes all the incoming arguments to the corresponding arguments of the apparent constructor CFrame. Then it will set all called attributes to their default values and then sets all the field pointers (_fld) to NULL. It also sets all the dynamic (_dyn) flags to false. virtual ~CDialog(); The destructor will loop through all the field pointers and if the corresponding dynamic flag is true then it will delete the field pointed to by the field pointer. Methods void draw(int fn = C_FULL_FRAME); If fn is C_FULL_FRAME, it will call its parent draw. Then It will draw all the Fields in the Dialog. If fn is Zero, then it will just draw all the Fields in the Dialog. If fn is a non-zero positive value, then it will only draw Field number fn in the dialog. (First added Field is field number one.) int edit(int fn = C_FULL_FRAME); If CDialog is not editable (all fields are non-editable), it will just display the Dialog and then waits for the user to enter a key and then terminates the function returning the key. If fn is 0 or less, then before editing, the draw method is called with fn as its argument and then editing begins from the first editable Field. If fn is greater than 0 then editing begins from the first editable key on or after Field number fn. Note that fn is the sequence number of field and not the index. (First field number is one) Start editing from field number fn; Call the edit of each field and depending on the value returned, do the following: - For ENTER_KEY, TAB_KEY and DOWN_KEY, go to next editable Field , if this is the last editable Field then restart from Field number one. - For UP_KEY go to the previous editable Field, if there is no previous editable Field, go to the last editable Field in the Dialog. - For any other key, terminate the edit function returning the character which caused the termination. int add(CField* field, bool dynamic = true); Adds the CField' pointed by field to the Fields of the Dialog; by appending the value of the field pointer after the last added field in the _fld array , setting the corresponding _dyn element to the value of dynamic argument and then increasing _fnum by one and returning it the index of added Field in the CDialog object. important note: Make sure that add() sets the container of the added CField to this CDialog object, using the container() method of CField int add(CField& field, bool dynamic = false); Makes a direct call to the first add method. CDialog& operator<<(CField* field); Makes a direct call to the first add method, ignoring the second argument and then returns the owner (current CDialog). CDialog& operator<<(CField& field); Makes a direct call to the second add method, ignoring the second argument and then returns the owner (current CDialog). bool editable(); Returns _editable; int fieldNum()const; returns _fnum. int curIndex()const; returns _curidx; CField& operator[](unsigned int index); Returns the reference of the Field with incoming index. (Note that here, the first field index is 0) CField& curField(); Returns the reference of the Field that was just being edited. CDialog Complied Object Files CLineEdit(char* Str, int Row, int Col, int Width, int Maxdatalen, int* Insertmode, bool Bordered = false, const char* Border=C_BORDER_CHARS); LineEdit, sets the Field's _data to the value of str. If LineEdit is instantiated with this constructor then it will edit an external string provided by the caller function of LineEdit. LineEdit in this case is not creating any dynamic memory, therefore _dyn is set to false (therefore the destructor will not attempt to deallocate the memory pointed by _data). The location (row and col) and Bordered are directly passed to the parent Field's constructor) CLineEdit(int Row, int Col, int Width, int Maxdatalen, int* Insertmode, bool Bordered = false, const char* Border=C_BORDER_CHARS); Works exactly like the previous constructor with one difference; since no external data is passed to be edited here, this constructor must allocate enough dynamic memory to accommodate editing of Maxdatalen characters. Then make it an empty string and set Fields's _data to point to it. Make sure _dyn is set to true in this case, so the destructor knows that it has to deallocate the memory at the end. ~CLineEdit(); If _dyn is true, it will deallocate the character array pointed by Fields's _data Methods void draw(int Refresh = C_FULL_FRAME); It will first call Frame's draw passing Refresh as an argument to it. Then it will make a direct call to console.display() to show the data kept in Field's _data. The values used for the arguments of console.display() are: - str: address of string pointed by _data + the value of _offset - row: absRow() (add one if border is visible) - col: absCol() (add one if border is visible) - len: width() (reduce by two if border is visible') int edit(); Makes a direct call to, and returns console.edit(). For the coordinates and width arguments follow the same rules as the draw function. For the rest of the arguments of console.edit(), use the attributes of CLineEdit. bool editable()const; Always return true; void set(const void* Str); Copies the characters pointed by Str into the memory pointed by Field's _data up to _maxdatalen characters. CLineEdit Complied Object Files CButton Button is a child of CField. It displays a small piece of text (usually one word or two) and accepts one key hit entry. When in edit mode, to indicate the editing mode, it will surround the text with squared brackets. #pragma once #include "cfield.h" namespace cio{ class CButton: public CField{ public: CButton(const char *Str, int Row, int Col, bool Bordered = true, const char* Border=C_BORDER_CHARS); virtual ~CButton(); void draw(int rn=C_FULL_FRAME); int edit(); bool editable()const; void set(const void* str); }; } Attributes This class does not have any attributes of its own! Constructor / Destructor CButton(const char *Str, int Row, int Col, bool Bordered = true, const char* Border=C_BORDER_CHARS); When creating a Button, allocate enough memory to hold the contents of the Str and set Field's _data to point to it. Then copy the content of Str into the newly allocated memory. Pass all the arguments directly to Field's constructor. For Field size (width and hight) do the following: For width: Set width to the length of Str + 2 (adding 2 for surrounding brackets) or if the Button is bordered set width to the length of Str + 4 (adding 2 for surrounding brackets and 2 for the borders). For height: Set the height to 1 or if the Button is bordered, set the height to 3. virtual ~CButton(); Deallocates the allocated memory pointed by Field's _data. Methods void draw(int fn=C_FULL_FRAME); Draws the Button with border around it if it is Bordered. Note that there should be a space before and after of the text that will be used to surround the text with "[" and "]" hint: - First calls Frame's draw(fn) (passing the fn argument to the parents draw) - Use console.display() to display the Button's text (pointed by Field's _data) - If not bordered - display the text at absRow() and absCol() - If bordered - display the text at absRow()+1 and absCol()+2 int edit(); First draw() the Button, then surround it by squared brackets, place the cursor under the first character of Button's text and wait for user entry. When user hits a key, if the key is ENTER_KEY or SPACE, return C_BUTTON_HIT (defined in cuigh.h) otherwise return the entered key. Complied Object FilesEdit's edit(). - After validation is done, if _help function exists, it will recall the help function using MessageStatus::ClearMessage and contianer()'s reference as arguments. - It will return the terminating key Navigation keys are Up key, Down key, Tab key or Enter key. MessageStatus is enumerated in cuigh.h CValEdit Complied Object Files Complied Object Files CMenuItem (optional) Complied Object Files Complied Object Files Complied Object Files
https://wiki.cdot.senecacollege.ca/w/index.php?title=The_CIO_Framework_-_OOP344_20123&diff=cur&oldid=90040&printable=yes
CC-MAIN-2020-45
en
refinedweb
revalidate_disk(9) [centos man page] REVALIDATE_DISK(9) The Linux VFS REVALIDATE_DISK(9) NAME revalidate_disk - wrapper for lower-level driver's revalidate_disk call-back SYNOPSIS int revalidate_disk(struct gendisk * disk); ARGUMENTS disk struct gendisk to be revalidated DESCRIPTION This routine is a wrapper for lower-level driver's revalidate_disk call-backs. It is used to do common pre and post operations needed for all revalidate_disk operations. COPYRIGHT Kernel Hackers Manual 3.10 June 2014 REVALIDATE_DISK(9) STRUCT CLASS(9) Device drivers infrastructure STRUCT CLASS(9) NAME struct_class - device classes SYNOPSIS struct class { const char * name; struct module * owner; struct class_attribute * class_attrs; struct device_attribute * dev_attrs; const struct attribute_group ** dev_groups; struct bin_attribute * dev_bin_attrs; struct kobject * dev_kobj; int (* dev_uevent) (struct device *dev, struct kobj_uevent_env *env); char *(* devnode) (struct device *dev, umode_t *mode); void (* class_release) (struct class *class); void (* dev_release) (struct device *dev); int (* suspend) (struct device *dev, pm_message_t state); int (* resume) (struct device *dev); const struct kobj_ns_type_operations * ns_type; const void *(* namespace) (struct device *dev); const struct dev_pm_ops * pm; struct subsys_private * p; }; MEMBERS name Name of the class. owner The module owner. class_attrs Default attributes of this class. dev_attrs Default attributes of the devices belong to the class. dev_groups Default attributes of the devices that belong to the class. dev_bin_attrs Default binary attributes of the devices belong to the class. dev_kobj The kobject that represents this class and links it into the hierarchy. dev_uevent Called when a device is added, removed from this class, or a few other things that generate uevents to add the environment variables. devnode Callback to provide the devtmpfs. class_release Called to release this class. dev_release Called to release the device. suspend Used to put the device to sleep mode, usually to a low power state. resume Used to bring the device from the sleep mode. ns_type Callbacks so sysfs can detemine namespaces. namespace Namespace of the device belongs to this class. pm The default device power management operations of this class. p The private data of the driver core, no one other than the driver core can touch this. DESCRIPTION A class is a higher-level view of a device that abstracts out low-level implementation details. Drivers may see a SCSI disk or an ATA disk, but, at the class level, they are all simply disks. Classes allow user space to work with devices based on what they do, rather than how they are connected or how they work. COPYRIGHT Kernel Hackers Manual 3.10 June 2014 STRUCT CLASS(9)
https://www.unix.com/man-page/centos/9/REVALIDATE_DISK/
CC-MAIN-2020-45
en
refinedweb
The problem Given an array of strings arr. String s is a concatenation of a sub-sequence of arr which have unique characters. Return the maximum possible length of s. Example test-cases Constraints 1 <= arr.length <= 16 1 <= arr[i].length <= 26 arr[i]contains only lower case English letters. How to write the code def maxLength(arr: List[str]) -> int: result = [float('-inf')] unique_char("", arr, 0, result) if not result[0] == float('-inf'): return result[0] return 0 def unique_char(cur, arr, index, result): # End of the array if index == len(arr): return # Iterating from the current word to the end of the array for index in range(index,len(arr)): # If current word + next word have all unique characters if len(set(cur + arr[index])) == len(list(cur + arr[index])): # Compare the actual lenth with the previous max result[0] = max(result[0],len(cur + arr[index])) # Make a new call with concatenate words unique_char(cur + arr[index], arr, index + 1,result) doesn’t work Hi Jeremy, What about the solution doesn’t work? Depending on your Python version, you may need to change : to:
https://ao.gl/get-the-maximum-length-of-a-concatenated-string-with-unique-characters-in-python/
CC-MAIN-2020-45
en
refinedweb
Howdy, We've got an often used little PyQt QDialog window that we use to set variables like Shot, Sequence, Shotgun Task/Context that has its own .ui file. From inside a new custom RV Python plugin, I'm calling in a way that we use a lot and we like to use as a popup dialog in RV. I guess I should ask first whether or not this is possible. I search the forum here and found a post or two that seem to imply it was. First, I have the RV plugin code, which compiles and loads. At the top, I do a import Settings_Dialog as SD Later, I call it like so: dialog = SD.Ui_Dialog() if dialog.exec_(): settings = dialog.DoIt() print settings Before describing the file "Settings_Dialog.py",...let just say, that if I execute this,...RV will error out saying: ERROR: 'Ui_Dialog' object has no attribute 'exec_' But I know this it does,..as this syntax works in non-RV related work. Just for grins, I replaced "exec_()" with "show()" and the subwindow did appear for a brief second, then disappears. The version of PyQt4 that I'm using is compatible with Python 2.6.6. Is there some inherent limitation to doing something like this? Thank you, Jim Btw: using RV 4.0.9 on Windows 7 64-bit UI_Test-1.0.rvpkg
https://support.shotgunsoftware.com/hc/zh-cn/community/posts/209497978-custom-plugin-launch-QDialog-subwindow
CC-MAIN-2020-45
en
refinedweb
Opened 5 years ago Closed 5 years ago #17568 closed Bug (fixed) i18n_patterns and LOGIN_URL, LOGOUT_URL, LOGIN_REDIRECT_URL Description Hi, There is some bug with i18n_patterns and redirection after login/logout action. Sample code: #urls.py urlpatterns = i18n_patterns('', ... (_(r'^auth/'), include('apps.my_auth.urls', namespace='auth')), ) #settings.py ... gettext_noop = lambda s: s LOGIN_URL = gettext_noop('/auth/login/') LOGOUT_URL = gettext_noop('/auth/logout/') LOGIN_REDIRECT_URL = gettext_noop('/accounts/profile/') The urls are translated (for "en" and "nl" language). Now login at redirect to /en/accounts/profile/ and also redirect to /nl/accounts/profile/ but should to /nl/nl_accounts/nl_profile/ I think there is a bug. Thanks. Attachments (2) Change History (10) comment:1 Changed 5 years ago by comment:2 Changed 5 years ago by It does not work. Django uses settings.LOGIN_URL, settings.LOGOUT_URL, settings.LOGIN_REDIRECT_URL in django.contrib.auth.views and puts them into the HttpResponseRedirect, so I nothing can do. If Django could use redirect function in place of HttpResponseRedirect this will allow to use url patterns in settings (like: LOGIN_URL = "auth:login") but it does not. comment:3 Changed 5 years ago by It's unfortunate that the default values for those settings are hardcoded URLs, but changing that would be backwards-incompatible. Have you tried redefining those settings using reverse_lazy in your project? comment:4 Changed 5 years ago by comment:5 Changed 5 years ago by Ok, I tested it to be sure: reverse_lazy does what you need it to do. comment:6 Changed 5 years ago by I tested it with reverse_lazy and it works fine. from django.conf.urls import url from django.utils.translation import ugettext as _ from django.conf.urls.i18n import i18n_patterns urlpatterns = i18n_patterns('', url(_(r'^home/$'), 'languages.views.home', name='home'), url(_(r'^login_success/$'), 'languages.views.login_success', name='login_success'), ) urlpatterns += i18n_patterns('django.contrib.auth.views', url(_(r'^login/$'), 'login', name='login'), ) My pl locale: #: urls.py:6 msgid "^home/$" msgstr "^dom/$" #: urls.py:7 msgid "^login_success/$" msgstr "^logowanie_udane/$" #: urls.py:10 msgid "^login/$" msgstr "^loguj/$" My settings: LOGIN_URL = reverse_lazy('login') LOGIN_REDIRECT_URL = reverse_lazy('login_success') Login required redirects to /pl/loguj/ After login redirects to /pl/logowanie_udane/ This is expected behavior. Patch with hint in documentation added (had problems with linking reverse_lazy - if someone knows more about rst please fix). Changed 5 years ago by Add hint to documentation about using reverse_lazy. Changed 5 years ago by Link to reverse_lazy. Also changed the example to use regular patterns() instead of i18n_patterns(), as these are mentioned at the end of the note. I think your issue comes from the fact that you're using a noop function, so your url constants will never get translated. I suggest using reverse_lazy instead. Please feel free to re-open if you disagree, but in that case please also provide an actual test case so it's easier to assess your problem.
https://code.djangoproject.com/ticket/17568
CC-MAIN-2017-13
en
refinedweb
XPathMessageContext Class Defines several XPath functions and namespace mappings commonly used when evaluating XPath expressions against SOAP documents. Assembly: System.ServiceModel (in System.ServiceModel.dll) System.Xml.XmlNamespaceManager System.Xml.Xsl.XsltContext System.ServiceModel.Dispatcher.XPathMessageContext The XPath engine has full XPath context support and uses the .NET Framework's XsltContext class in the same way that XPathNavigator does to implement this support. XsltContext is an abstract class that allows developers to implement custom XPath function libraries and declare XPath variables. XsltContext is an XmlNamespaceManager and thus also contains the namespace prefix mappings. The filter engine implements an XsltContext named XPathMessageContext. XPathMessageContext defines custom functions that can be used in XPath expressions and it declares several common namespace prefix mappings. The following table lists the custom functions defined by XPathMessageContext that can be used in XPath expressions. The following table lists the default namespaces and namespace prefixes that are declared by XPathMessageContext. Available since 3.0 Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
https://msdn.microsoft.com/en-us/library/system.servicemodel.dispatcher.xpathmessagecontext.aspx
CC-MAIN-2017-13
en
refinedweb
Phillip Ezolt wrote:> Does flock/fcntl wake up all of the processes that are waiting on the lock?> > If so, has the thundering herd problem just been pushed into flock/fcntl> instead of accept?> > >> > I'm willing to accept any well founded explanation, and this is where> > most of the concern has been coming from.yes, as can be demonstrated by running the attached program, eg: slock -s 60will start 60 contending processes, and the load average willtend towards 60. the load should tend to 1 since at most oneprocess can actually hold the file lock.at a first glance this would appear tricky to fix. fixing the accept()race looks to be trivial if we had a decent semaphore implementation.who's working on that?jan-simon.#include <unistd.h>#include <stdlib.h>#include <stdio.h>#include <errno.h>#include <fcntl.h>#include <sys/file.h>#include <string.h>static intforkn(int n){ int i; pid_t pid; for (i = 0; i < n; i++) { pid = fork(); switch (pid) { case -1: fprintf(stderr, "slock: fork: %s\n", strerror(errno)); break; case 0: return (i); default: break; } } return (n);}intmain(int c, char *v[]){ int fd = STDIN_FILENO; int ch, id, nservers, nxid; char xid[12]; nservers = 1; while ((ch = getopt(c, v, "s:")) != EOF) { switch (ch) { case 's': nservers = atoi(optarg); if (nservers < 1) nservers = 1; break; } } id = forkn(nservers); nxid = sprintf(xid, "%d ", id); for (;;) { flock(fd, LOCK_EX); flock(fd, LOCK_UN); /*write(1, xid, nxid);*/ }}
http://lkml.org/lkml/1999/5/10/70
CC-MAIN-2017-13
en
refinedweb
Opened 10 years ago Closed 6 years ago Last modified 6 years ago #5327 closed (fixed) ModelChoiceField and ChoiceField clean methods behave differently Description (last modified by ) Using both ChoiceField and ModelChoiceField, I discovered a bug in ChoiceField clean method ( or a discrepancy in behaviour) ModelChoiceField seems to be working as expected, when I call the clean method in a template like so: form.clean.city , I get the city name(e.g New York), or what would be string inside the tag <select id="5"> New York </select> This behaviour is different if the values are inside a ChoiceField, if I use the following in the template: form.clean.city, I get the city id (e.g 5 ), not the expected string or behaviour as using ModelChoiceField. [NOTE: Not calling the clean field, in either !ChoiceField or !ModelChoiceField works identcally, generating a select list ] I modified the fields.py file in django/newforms, the clean method on the ChoiceField class[line 466], would now read: def clean(self, value): """ Validates that the input is in self.choices. """ value = super(ChoiceField, self).clean(value) if value in EMPTY_VALUES: value = u'' value = smart_unicode(value) if value == u'': return value valid_values = set([smart_unicode(k) for k, v in self.choices]) if value not in valid_values: raise ValidationError(ugettext(u'Select a valid choice. That choice is not one of the available choices.')) else: value = self._choices[int(value)][1] return value Only modification is the 'else' at the end, which would assign the value to the actual string. Attachments (1) Change History (14) comment:1 Changed 10 years ago by comment:2 Changed 10 years ago by Logically the model choicefield should be able to return either an object or just its id, but that should be more explicit in the docs (it is not explicitely listed in the FormField list). Also, the ChoiceField doc doesn't say if the first or second member of the choice tuple is used as the normalized value. Changed 10 years ago by clarification of the docs concerning ModelChoices comment:3 Changed 10 years ago by comment:4 Changed 10 years ago by (fixed description formatting) comment:5 Changed 10 years ago by We should resolve the inconsistency here, I suspect. My gut feeling is that cleaning to the id value is more correct, because that is what you would assign to a field that had "choices" as an option. But whichever way we go, I'm not too comfortable with the inconsistency. ModelChoiceField subclasses ChoiceField, so it shouldn't behave wildly differently. comment:6 Changed 10 years ago by comment:7 Changed 10 years ago by ModelChoiceField will NOT return a name, it will return the object associated with the id you selected. The only difference with a ChoiceField is that ModelChoiceField will resolve the object from the id in the queryset, which is to be expected given the field name. There is no inconsistency here, and the patch only clarifies the current documentation. comment:8 Changed 10 years ago by comment:9 Changed 10 years ago by #5481 for fixes ChoiceField choices keys in the general case. IMHO PhiR's doc patch is correct: - ChoiceField normalises to the key, that is the generally expected behaviour - otherwise you get a translated string or something else back. - ModelChoiceField normalises to the model object. It will use the queryset to initialize the keys/strings and then returns back the model object from clean() I'd suggest that using form.cleaned_data.city in a template isn't the best way to get a choice label. It would be relatively easy to create a filter or a tag to do it: @register.simple_tag def choice_value_label(form, field_name): return dict(form.fields[field_name].choices)[form.cleaned_data[field_name] <span>{% choice_value form "city" %}</span> comment:10 Changed 7 years ago by Accepted that the documentation is what needs to be fixed here. comment:11 Changed 6 years ago by Most of this patch (and ticket) was taken care of years ago. I'll merge in the last tiny bits and call it done. Did some more testing, the previous fix assumed the option keys were in order 1,2,3,4,5,6,7,8,9.etc.. I had a list ordered by name, and the previous code obviosuly gave the wrong key value. The following snippet will fix this.
https://code.djangoproject.com/ticket/5327
CC-MAIN-2017-13
en
refinedweb
Controllers are responsible for taking action upon user requests (loosely following the terminology of the MVC meta pattern). The following controllers are provided out-of-the box in CubicWeb. We list them by category. They are all defined in (cubicweb.web.views.basecontrollers). Browsing: Edition: Other: All controllers (should) live in the β€˜controllers’ namespace within the global registry. Most API details should be resolved by source code inspection, as the various controllers have differing goals. See for instance the The edit controller chapter. cubicweb.web.controller contains the top-level abstract Controller class and its unimplemented entry point publish(rset=None) method. A handful of helpers are also provided there:
https://docs.cubicweb.org/book/devweb/controllers.html
CC-MAIN-2017-13
en
refinedweb
Somebody asked: I created a Web Service from a WSDL that came from Java (Apache SOAP). It contains the definition of Vector as complex type: <schema targetNamespace=”” xmlns=β€β€œ> <import namespace=””/> <complexType name=”Vector”> <sequence> <element maxOccurs=”unbounded” minOccurs=”0β€³ name=”item” type=”xsd:anyType”/> </sequence> </complexType> </schema> Then, .NET generated the following (wrong) class: <System.Xml.Serialization.SoapTypeAttribute(β€œVector”, β€œβ€œ)> _ Public Class Vector Public item() As Object End Class This seems to be correct from my point of view! Maybe not all of you are XSD-literate, but the schema specifically says there is a complex type called Vector, which is a sequence of 1:n of xsd:anyType. .NET (correctly) translates this to an array of Object. This illustrates the danger of including platform-specific types such as Java Vector in a Web Service interface – such types are not easily translatable into and out of XML Schema without some loss of information. Instead, start with XSD Schema, and generate the interface from it. Generate the server skeleton and client-side proxy from the WSDL and XSD. This is called β€œcontract first”, or β€œSchema first” design. It’s a Good Thing. But I know you all are thinking, jeez, I’d really rather use some of the more advanced classes that are available with my platform. A Java Vector or ArrayList for example. Does webservices really make me give that up? It’s fine to use those things in the implementation of your service, but since there is no well-defined mapping of those Java-specific classes to XSD (and thus, no well-defined mapping of those types to a type on a different, non-Java platform such as .NET), you will get surprises. Stick to XSD for defining the datatypes to be sent and received over a webservice interface, and you will avoid these unpleasant surprises.
https://blogs.msdn.microsoft.com/dotnetinterop/2004/11/05/java-vector-becomes-object-array/
CC-MAIN-2017-13
en
refinedweb
Does not check for local auth entries in keyring if couchdb.html is present and parseable. Bug #668409 reported by Roman Yepishev on 2010-10-29 This bug affects 4 people Bug Description STR: 1. Open seahorse, remove all desktopcouch tokens (simulate almost fresh start) 2. Stop desktopcouch service, start desktopcouch service. 3. Re-open seahorse Expected results: 2 new entries for DesktopCouch auth Actual results: Np new entries. Reason: class _Configuration( def __init__(self, ctx): ... try: ... return ... # code to add couchdb entries to keyring Workaround: remove ~/.local/ I believe couchdb should definitely check for keyring items presense. Joshua Hoover (joshuahoover) on 2010-11-11 Joshua Hoover (joshuahoover) on 2010-11-12 Joshua Hoover (joshuahoover) on 2012-10-15 Joshua Hoover (joshuahoover) on 2012-11-01 In Oneiric this causes the thunderbird to show the following error message: There was a problem opening the address book "Ubuntu One" - the message returned was: Cannot open book: Could not create DesktopcouchSession object. The workaround is to remove ~/.local/ share/desktop- couch/couchdb. html and restart desktopcouch- service
https://bugs.launchpad.net/desktopcouch/+bug/668409
CC-MAIN-2017-13
en
refinedweb
Interactive VISA control to test your SCPI commands #include <Visa.au3> _viInteractiveControl ( [$sCommand_Save_FilePath = ""] ) Type "FIND" in the Device Descriptor query to perform a GPIB search. This function lets you easily test your SCPI commands interactively. It also lets you save these commands into a file. Simply answer the questions (Device Descriptor, SCPI command and timeout). * If you click Cancel on the 1st question the interactive control ends. * If you click Cancel to the other queries, you will go back to the Device Descriptor question.
https://www.autoitscript.com/autoit3/docs/libfunctions/_viInteractiveControl.htm
CC-MAIN-2018-39
en
refinedweb
0,2 T(n,k) = number of leaves at level k+1 in all ordered trees with n+1 edges. - Emeric Deutsch, Jan 15 2005 Riordan array ((1-2x-sqrt(1-4x))/(2x^2),(1-2x-sqrt(1-4x))/(2x)). Inverse array is A053122. - Paul Barry, Mar 17 2005 T(n,k) = number of walks of n steps, each in direction N, S, W, or E, starting at the origin, remaining in the upper half-plane and ending at height k (see the R. K. Guy reference, p. 5). Example: T(3,2)=6 because we have ENN, WNN, NEN, NWN, NNE and NNW. - Emeric Deutsch, Apr 15 2005 Triangle T(n,k), 0<=k<=n, read by rows given by T(0,0)=1, T(n,k)=0 if k<0 or if k>n, T(n,0)=2*T(n-1,0)+T(n-1,1), T(n,k)=T(n-1,k-1)+2*T(n-1,k)+T(n-1,k+1) for k>=1. - Philippe DelΓ©ham, Mar 30 2007 Number of (2n+1)-step walks from (0,0) to (2n+1,2k+1) and consisting of steps u=(1,1) and d=(1,-1) in which the path stays in the nonnegative quadrant. Examples: T(2,0)=5 because we have uuudd, uudud, uuddu, uduud, ududu; T(2,1)=4 because we have uuuud, uuudu, uuduu, uduuu; T(2,2)=1 because we have uuuuu. - Philippe DelΓ©ham, Apr 16 2007, Apr 18 2007 Triangle read by rows: T(n,k)=number of lattice paths from (0,0) to (n,k) that do not go below the line y=0 and consist of steps U=(1,1), D=(1,-1) and two types of steps H=(1,0); example: T(3,1)=14 because we have UDU, UUD, 4 HHU paths, 4 HUH paths and 4 UHH paths. - Philippe DelΓ©ham, Sep With offset [1,1] this is the (ordinary) convolution triangle a(n,m) with o.g.f. of column m given by (c(x)-1)^m, where c(x) is the o.g.f. for Catalan numbers A000108. See the Riordan comment by Paul Barry. T(n, k) is also the number of order-preserving full transformations (of an n-chain) with exactly k fixed points. - Abdullahi Umar, Oct 02 2008 T(n,k)/2^(2n+1) = coefficients of the maximally flat lowpass digital differentiator of the order N=2n+3. - Pavel Holoborodko (pavel(AT)holoborodko.com), Dec 19 2008 The signed triangle S(n,k):=(-1)^(n-k)*T(n,k) provides the transformation matrix between f(n,l) := L(2*l)*5^n* F(2*l)^(2*n+1) (F=Fibonacci numbers A000045, L=Lucas numbers A000032) and F(4*l*(k+1)), k = 0, ..., n, for each l>=0: f(n,l) = sum(S(n,k)*F(4*l*(k+1)),k=0..n), n>=0, l>=0. Proof: the o.g.f. of the l.h.s., G(l;x) := sum(f(n,l)*x^n, n=0..infty) = F(4*l)/(1 - 5*F(2*l)^2*x) is shown to match the o.g.f. of the r.h.s.: after an interchange of the n- and k-summation, the Riordan property of S = (C(x)/x,C(x)) (compare with the above comments by Paul Barry), with C(x) := 1 - c(-x), with the o.g.f. c(x) of A000108 (Catalan numbers), is used, to obtain, after an index shift, first sum(F(4*l*(k))*GS(k;x), k= 0 .. infty), with the o.g.f of column k of triangle S which is GS(k;x) := sum(S(n,k)*x^n,n=k..infty) = C(x)^{k+1}/x. The result is GF(l;C(x))/x with the o.g.f. GF(l,x):= sum(F(4*l*k)*x^k, k=0..infty) = x*F(4*l)/(1-L(4*l)*x+x^2) (see a comment on A049670, and A028412). If one uses then the identity L(4*n) - 5*F(2*n)^2 = 2 (in Koshy's book [reference under A065563] this is No. 15, p. 88, attributed to Lucas, 1876), the proof that one recovers the o.g.f. of the l.h.s. from above boils down to a trivial identity on the Catalan o.g.f., namely 1/c^2(-x) = 1 + 2*x - (x*c(-x))^2. - Wolfdieter Lang, Aug 27 2012 O.g.f. for row polynomials R(x):=sum(a(n,k)*x^k,k=0..n): ((1+x) - C(z))/(x - (1+x)^2*z) with C the o.g.f. of A000108 (Catalan numbers). From Riordan ((C(x)-1)/x,C(x)-1), compare with a Paul Barry comment above. This coincides with the o.g.f. given by Emeric Deutsch in the formula section. - Wolfdieter Lang, Nov 13 2012 The A-sequence for this Riordan triangle is [1,2,1] and the Z-sequence is [2,1]. See a W. Lang link under A006232 with details and references. - Wolfdieter Lang, Nov 13 2012 From Wolfdieter Lang, Sep 20 2013: (Start) T(n, k) = A053121(2*n+1, 2*k+1). T(n, k) appears in the formula for the (2*n+1)-th power of the algebraic number rho(N):= 2*cos(Pi/N) = R(N, 2) in terms of the even indexed diagonal/side length ratios R(N, 2*(k+1)) = S(2*k+1, rho(N)) in the regular N-gon inscribed in the unit circle (length unit 1). S(n, x) are Chebyshev's S polynomials (see A049310): rho(N)^(2*n+1) = sum(T(n, k)*R(N, 2*(k+1)), k = 0..n), n >= 0, identical in N >= 1. For a proof see the Sep 21 2013 comment under A053121. Note that this is the unreduced version if R(N, j) with j > delta(N), the degree of the algebraic number rho(N) (see A055034), appears. For the even powers of rho(n) see A039599. (End) The tridiagonal Toeplitz production matrix P in the Example section corresponds to the unsigned Cartan matrix for the simple Lie algebra A_n as n tends to infinity (cf. Damianou ref. in A053122). - Tom Copeland, Dec 11 2015 (revised Dec 28 2015) T(n,k) = the number of pairs of non-intersecting walks of n steps, each in direction N or E, starting at the origin, and such that the end points of the two paths are separated by a horizontal distance of k. See Shapiro 1976. - Peter Bala, Apr 12 2017 M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions, National Bureau of Standards Applied Math. Series 55, 1964 (and various reprintings), p. 796. B. A. Bondarenko, Generalized Pascal Triangles and Pyramids (in Russian), FAN, Tashkent, 1990, ISBN 5-648-00738-8. Yang, Sheng-Liang, Yan-Ni Dong, and Tian-Xiao He. "Some matrix identities on colored Motzkin paths." Discrete Mathematics 340.12 (2017): 3081-3091. G. C. Greubel, Table of n, a(n) for the first 50 rows, flattened M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions, National Bureau of Standards, Applied Math. Series 55, Tenth Printing, 1972 [alternative scanned copy]. JosΓ© Agapito, Γ‚ngela Mestre, Maria M. Torres, and Pasquale Petrullo, On One-Parameter Catalan Arrays, Journal of Integer Sequences, Vol. 18 (2015), Article 15.5.1. M. Aigner, Enumeration via ballot numbers, Discrete Math., 308 (2008), 2544-2563. Quang T. Bach, Jeffrey B. Remmel, Generating functions for descents over permutations which avoid sets of consecutive patterns, arXiv:1510.04319 [math.CO], 2015 (see p. 25). P. Bala, Notes on logarithmic differentiation, the binomial transform and series reversion Paul Barry, On the Hurwitz Transform of Sequences, Journal of Integer Sequences, Vol. 15 (2012), #12.8.7. B. A. Bondarenko, Generalized Pascal Triangles and Pyramids, English translation published by Fibonacci Association, Santa Clara Univ., Santa Clara, CA, 1993; see p. 29. Eduardo H. M. Brietzke, Generalization of an identity of Andrews, Fibonacci Quart. 44 (2006), no. 2, 166-171. F. Cai, Q.-H. Hou, Y. Sun, A. L. B. Yang, Combinatorial identities related to 2x2 submatrices of recursive matrices, arXiv:1808.05736 Table 1.1. Naiomi T. Cameron and Asamoah Nkwanta, On Some (Pseudo) Involutions in the Riordan Group, Journal of Integer Sequences, Vol. 8 (2005), Article 05.3.7. Xi Chen, H. Liang, Y. Wang, Total positivity of recursive matrices, arXiv:1601.05645 [math.CO], 2016. Xi Chen, H. Liang, Y. Wang, Total positivity of recursive matrices, Linear Algebra and its Applications, Volume 471, Apr 15 2015, Pages 383-393. Johann Cigler, Some elementary observations on Narayana polynomials and related topics, arXiv:1611.05252 [math.CO], 2016. See p. 7., Catwalks, sandsteps and Pascal pyramids, J. Integer Sequences, Vol. 3 (2000), Article #00.1.6. T.-X. He, L. W. Shapiro, Fuss-Catalan matrices, their weighted sums, and stabilizer subgroups of the Riordan group, Lin. Alg. Applic. 532 (2017) 25-41, example page 32. Peter M. Higgins, Combinatorial results for semigroups of order-preserving mappings, Math. Proc. Camb. Phil. Soc. 113 (1993), 281-296. A. Laradji, and A. Umar, Combinatorial results for semigroups of order-preserving full transformations, Semigroup Forum 72 (2006), 51-62. Donatella Merlini and Renzo Sprugnoli, Arithmetic into geometric progressions through Riordan arrays, Discrete Mathematics 340.2 (2017): 160-174. See (1.1). Pedro J. Miana, Hideyuki Ohtsuka, Natalia Romero, Sums of powers of Catalan triangle numbers, arXiv:1602.04347 [math.NT], 2016 (see 2.4). A. Nkwanta, A. Tefera, Curious Relations and Identities Involving the Catalan Generating Function and Numbers, Journal of Integer Sequences, 16 (2013), #13.9.5. L. W. Shapiro, W.-J. Woan and S. Getu, Runs, slides and moments, SIAM J. Alg. Discrete Methods, 4 (1983), 459-466. L. W. Shapiro, A Catalan triangle, Discrete Math., 14, 83-90, 1976. L. W. Shapiro, A Catalan triangle, Discrete Math. 14 (1976), no. 1, 83-90. [Annotated scanned copy] Yidong Sun and Fei Ma, Minors of a Class of Riordan Arrays Related to Weighted Partial Motzkin Paths, arXiv preprint arXiv:1305.2015 [math.CO], 2013. Yidong Sun and Fei Ma, Four transformations on the Catalan triangle, arXiv preprint arXiv:1305.2017 [math.CO], 2013. Yidong Sun and Fei Ma, Some new binomial sums related to the Catalan triangle, Electronic Journal of Combinatorics 21(1) (2014), #P1.33 Charles Zhao-Chen Wang, Yi Wang, Total positivity of Catalan triangle, Discrete Math. 338 (2015), no. 4, 566--568. MR3300743. W.-J. Woan, L. Shapiro and D. G. Rogers, The Catalan numbers, the Lebesgue integral and 4^{n-2}, Amer. Math. Monthly, 104 (1997), 926-931. Row n: C(2n, n-k)-C(2n, n-k-2). a(n, k) = C(2n+1, n-k)*2*(k+1)/(n+k+2) = A050166(n, n-k) = a(n-1, k-1)+2*a(n-1, k)+a(n-1, k+1) [with a(0, 0) = 1 and a(n, k) = 0 if n<0 or n<k]. - Henry Bottomley, Sep 24 2001 T(n, 0) = A000108(n+1), T(n, k) = 0 if n<k; for k>0, T(n, k) = Sum_{j=1..n} T(n-j, k-1)*A000108(j). G.f. for column k: Sum_{n>=0} T(n, k)*x^n = x^k*C(x)^(2*k+2) where C(x) = Sum_{n>=0} A000108(n)*x^n is g.f. for Catalan numbers, A000108. Sum_{k>=0} T(m, k)*T(n, k) = A000108(m+n+1). - Philippe DelΓ©ham, Feb 14 2004 T(n, k) = A009766(n+k+1, n-k) = A033184(n+k+2, 2k+2). - Philippe DelΓ©ham, Feb 14 2004 Sum_{j>=0} T(k, j)*A039599(n-k, j) = A028364(n, k). - Philippe DelΓ©ham, Mar 04 2004 Antidiagonal sum_{k=0..n} T(n-k, k) = A000957(n+3). - Gerald McGarvey, Jun 05 2005 The triangle may also be generated from M^n * [1,0,0,0...], where M = an infinite tridiagonal matrix with 1's in the super and subdiagonals and [2,2,2...] in the main diagonal. - Gary W. Adamson, Dec 17 2006 G.f.: G(t,x)=C^2/(1-txC^2), where C=[1-sqrt(1-4x)]/(2x) is the Catalan function. From here G(-1,x)=C, i.e., the alternating row sums are the Catalan numbers (A000108). - Emeric Deutsch, Jan 20 2007 Sum_{k, 0<=k<=n}T(n,k)*x^k = A000957(n+1), A000108(n), A000108(n+1), A001700(n), A049027(n+1), A076025(n+1), A076026(n+1) for x=-2,-1,0,1,2,3,4 respectively (see square array in A067345). - Philippe DelΓ©ham, Mar 21 2007, Nov 04 2011 Sum_{k, 0<=k<=n}T(n,k)*(k+1) = 4^n. - Philippe DelΓ©ham, Mar 30 2007 Sum_{j, j>=0}T(n,j)*binomial(j,k)=A035324(n,k), A035324 with offset 0 (0<=k<=n). - Philippe DelΓ©ham, Mar 30 2007 T(n,k) = A053121(2*n+1,2*k+1). - Philippe DelΓ©ham, Apr 16 2007, Apr 18 2007 T(n,k) = A039599(n,k)+A039599(n,k+1). - Philippe DelΓ©ham, Sep 11 2007 Sum_{k, 0<=k<=n+1}T(n+1,k)*k^2 = A029760(n). - Philippe DelΓ©ham, Dec 16 2007 Sum_{k, 0<=k<=n}T(n,k)*A059841(k)= A000984(n). - Philippe DelΓ©ham, Nov 12 2008 G.f.: 1/(1-xy-2x-x^2/(1-2x-x^2/(1-2x-x^2/(1-2x-x^2/(1-2x-x^2/(1-.... (continued fraction). Sum_{k, 0<=k<=n} T(n,k)*x^(n-k) = A000012(n), A001700(n), A194723(n+1), A194724(n+1), A194725(n+1), A194726(n+1), A194727(n+1), A194728(n+1), A194729(n+1), A194730(n+1) for x = 0,1,2,3,4,5,6,7,8,9 respectively. - Philippe DelΓ©ham, Nov 03 2011 From Peter Bala, Dec 21 2014: (Start) This triangle factorizes in the Riordan group as ( C(x), x*C(x) ) * ( 1/(1 - x), x/(1 - x) ) = A033184 * A007318, where C(x) = (1 - sqrt(1 - 4*x))/(2*x) is the o.g.f. for the Catalan numbers A000108. Let U denote the lower unit triangular array with 1's on or below the main diagonal and zeros elsewhere. For k = 0,1,2,... define U(k) to be the lower unit triangular block array /I_k 0\ \ 0 U/ having the k X k identity matrix I_k as the upper left block; in particular, U(0) = U. Then this array equals the bi-infinite product (...*U(2)*U(1)*U(0))*(U(0)*U(1)*U(2)*...). (End) From Peter Bala, Jul 21 2015: (Start) O.g.f. G(x,t) = 1/x * series reversion of ( x/f(x,t) ), where f(x,t) = ( 1 + (1 + t)*x )^2/( 1 + t*x ). 1 + x*d/dx(G(x,t))/G(x,t) = 1 + (2 + t)*x + (6 + 4*t + t^2)*x^2 + ... is the o.g.f for A094527. (End) Conjecture: Sum_{k=0..n} T(n,k)/(k+1)^2 = H(n+1)*A000108(n)*(2*n+1)/(n+1), where H(n+1) = Sum_{k=0..n} 1/(k+1). - Werner Schulte, Jul 23 2015 From Werner Schulte, Jul 25 2015: (Start) Sum_{k=0..n} T(n,k)*(k+1)^2 = (2*n+1)*binomial(2*n,n). (A002457) Sum_{k=0..n} T(n,k)*(k+1)^3 = 4^n*(3*n+2)/2. Sum_{k=0..n} T(n,k)*(k+1)^4 = (2*n+1)^2*binomial(2*n,n). Sum_{k=0..n} T(n,k)*(k+1)^5 = 4^n*(15*n^2+15*n+4)/4. (End) The o.g.f. G(x,t) is such that G(x,t+1) is the o.g.f. for A035324, but with an offset of 0, and G(x,t-1) is the o.g.f. for A033184, again with an offset of 0. - Peter Bala, Sep 20 2015 Triangle T(n,k) starts: n\k 0 1 2 3 4 5 6 7 8 9 10 0: 1 1: 2 1 2: 5 4 1 3: 14 14 6 1 4: 42 48 27 8 1 5: 132 165 110 44 10 1 6: 429 572 429 208 65 12 1 7: 1430 2002 1638 910 350 90 14 1 8: 4862 7072 6188 3808 1700 544 119 16 1 9: 16796 25194 23256 15504 7752 2907 798 152 18 1 10: 58786 90440 87210 62016 33915 14364 4655 1120 189 20 1 ... Reformatted and extended by Wolfdieter Lang, Nov 13 2012. Production matrix begins: 2, 1 1, 2, 1 0, 1, 2, 1 0, 0, 1, 2, 1 0, 0, 0, 1, 2, 1 0, 0, 0, 0, 1, 2, 1 0, 0, 0, 0, 0, 1, 2, 1 0, 0, 0, 0, 0, 0, 1, 2, 1 - Philippe DelΓ©ham, Nov 07 2011 From Wolfdieter Lang, Nov 13 2012: (Start) Recurrence: T(5,1) = 165 = 1*42 + 2*48 +1*27. The Riordan A-sequence is [1,2,1]. Recurrence from Riordan Z-sequence [2,1]: T(5,0) = 132 = 2*42 + 1*48. (End) Example for rho(N) = 2*cos(Pi/N) powers: n=2: rho(N)^5 = 5*R(N, 2) + 4*R(N, 4) + 1*R(N, 6) = 5*S(1, rho(N)) + 4*S(3, rho(N)) + 1*S(5, rho(N)), identical in N >= 1. For N=5 (the pentagon with only one distinct diagonal) the degree delta(5) = 2, hence R(5, 4) and R(5, 6) can be reduced, namely to R(5, 1) = 1 and R(5, 6) = -R(5,1) = -1, respectively. Thus rho(5)^5 = 5*R(N, 2) + 4*1 + 1*(-1) = 3 + 5*R(N, 2) = 3 + 5*rho(5), with the golden section rho(5). (End) T:=(n, k)->binomial(2*n, n-k) - binomial(2*n, n-k-2); # N. J. A. Sloane, Aug 26 2013 Flatten[Table[Binomial[2n, n-k] - Binomial[2n, n-k-2], {n, 0, 9}, {k, 0, n}]] (* Jean-FranΓ§ois Alcover, May 03 2011 *) (Sage) # Algorithm of L. Seidel (1877) # Prints the first n rows of the triangle. def A039598_triangle(n) : D = [0]*(n+2); D[1] = 1 b = True; h = 1 for i in range(2*n) : if b : for k in range(h, 0, -1) : D[k] += D[k-1] h += 1 else : for k in range(1, h, 1) : D[k] += D[k+1] b = not b if b : print [D[z] for z in (1..h-1) ] A039598_triangle(10) # Peter Luschny, May 01 2012 (MAGMA) /* As triangle: */ [[Binomial(2*n, n-k) - Binomial(2*n, n-k-2): k in [0..n]]: n in [0.. 15]]; // Vincenzo Librandi, Jul 22 2015 (PARI) T(n, k)=binomial(2*n, n-k) - binomial(2*n, n-k-2) \\ Charles R Greathouse IV, Nov 07 2016 Mirror image of A050166. Row sums are A001700. Cf. A008313, A039599, A183134, A094527, A033184, A035324, A053122. Sequence in context: A171488 A171651 A104710 * A128738 A193673 A126181 Adjacent sequences: A039595 A039596 A039597 * A039599 A039600 A039601 nonn,tabl,easy,nice N. J. A. Sloane Typo in one entry corrected by Philippe DelΓ©ham, Dec 16 2007 approved
https://oeis.org/A039598
CC-MAIN-2018-39
en
refinedweb
1,5 Let M = n X n matrix with (i,j)-th entry a(n+1-j, n+1-i), e.g., if n = 3, M = [1 1 1; 3 1 0; 2 0 0]. Given a sequence s = [s(0)..s(n-1)], let b = [b(0)..b(n-1)] be its inverse binomial transform and let c = [c(0)..c(n-1)] = M^(-1)*transpose(b). Then s(k) = Sum_{i=0..n-1} b(i)*binomial(k,i) = Sum_{i=0..n-1} c(i)*k^i, k=0..n-1. - Gary W. Adamson, Nov 11 2001 From Gary W. Adamson, Aug 09 2008: (Start) Julius Worpitzky's 1883 algorithm generates Bernoulli numbers. By way of example [Wikipedia]: B0 = 1; B1 = 1/1 - 1/2; B2 = 1/1 - 3/2 + 2/3; B3 = 1/1 - 7/2 + 12/3 - 6/4; B4 = 1/1 - 15/2 + 50/3 - 60/4 + 24/5; B5 = 1/1 - 31/2 + 180/3 - 390/4 + 360/5 - 120/6; B6 = 1/1 - 63/2 + 602/3 - 2100/4 + 3360/5 - 2520/6 + 720/7; ... Note that in this algorithm, odd n's for the Bernoulli numbers sum to 0, not 1, and the sum for B1 = 1/2 = (1/1 - 1/2). B3 = 0 = (1 - 7/2 + 13/3 - 6/4) = 0. The summation for B4 = -1/30. (End) Pursuant to Worpitzky's algorithm and given M = A028246 as an infinite lower triangular matrix, M * [1/1, -1/2, 1/3, ...] (i.e., the Harmonic series with alternate signs) = the Bernoulli numbers starting [1/1, 1/2, 1/6, ...]. - Gary W. Adamson, Mar 22 2012 From Tom Copeland, Oct 23 2008: (Start) G(x,t) = 1/ {1 + [1-exp(x t)]/t} = 1 + 1 x + (2 + t) x^2/2! + (6 + 6t + t^2) x^3/3! + ... gives row polynomials for A090582, the f-polynomials for the permutohedra (see A019538). G(x,t-1) = 1 + 1 x + (1 + t) x^2 / 2! + (1 + 4t + t^2) x^3 / 3! + ... gives row polynomials for A008292, the h-polynomials for permutohedra. G[(t+1)x,-1/(t+1)] = 1 + (1+ t) x + (1 + 3t + 2 t^2) x^2 / 2! + ... gives row polynomials for the present triangle. (End) The Worpitzky triangle seems to be an apt name for this triangle. - Johannes W. Meijer, Jun 18 2009 If Pascal's triangle is written as a lower triangular matrix and multiplied by A028246 written as an upper triangular matrix, the product is a matrix where the (i,j)-th term is (i+1)^j. For example, 1,0,0,0 1,1,1, 1 1,1, 1, 1 1,1,0,0 * 0,1,3, 7 = 1,2, 4, 8 1,2,1,0 0,0,2,12 1,3, 9,27 1,3,3,1 0,0,0, 6 1,4,16,64 So, numbering all three matrices' rows and columns starting at 0, the (i,j) term of the product is (i+1)^j. - Jack A. Cohen (ProfCohen(AT)comcast.net), Aug 03 2010 The Fi1 and Fi2 triangle sums are both given by sequence A000670. For the definition of these triangle sums see A180662. The mirror image of the Worpitzky triangle is A130850. - Johannes W. Meijer, Apr 20 2011 Let S_n(m) = 1^m + 2^m + ... + n^m. Then, for n >= 0, we have the following representation of S_n(m) as a linear combination of the binomial coefficients: S_n(m) = Sum_{i=1..n+1} a(i+n*(n+1)/2)*C(m,i). E.g., S_2(m) = a(4)*C(m,1) + a(5)*C(m,2) + a(6)*C(m,3) = C(m,1) + 3*C(m,2) + 2*C(m,3). - Vladimir Shevelev, Dec 21 2011 Given the set X = [1..n] and 1 <= k <= n, then a(n,k) is the number of sets T of size k of subset S of X such that S is either empty or else contains 1 and another element of X and such that any two elemements of T are either comparable or disjoint. - Michael Somos, Apr 20 2013 Working with the row and column indexing starting at -1, a(n,k) gives the number of k-dimensional faces in the first barycentric subdivision of the standard n-dimensional simplex (apply Brenti and Welker, Lemma 2.1). For example, the barycentric subdivision of the 2-simplex (a triangle) has 1 empty face, 7 vertices, 12 edges and 6 triangular faces giving row 4 of this triangle as (1,7,12,6). Cf. A053440. - Peter Bala, Jul 14 2014 See A074909 and above g.f.s for associations among this array and the Bernoulli polynomials and their umbral compositional inverses. - Tom Copeland, Nov 14 2014 An e.g.f. G(x,t) = exp[P(.,t)x] = 1/t - 1/[t+(1-t)(1-e^(-xt^2))] = (1-t) * x + (-2t + 3t^2 - t^3) * x^2/2! + (6t^2 - 12t^3 + 7t^4 - t^5) * x^3/3! + ... for the shifted, reverse, signed polynomials with the first element nulled, is generated by the infinitesimal generator g(u,t)d/du = [(1-u*t)(1-(1+u)t)]d/du, i.e., exp[x * g(u,t)d/du] u eval. at u=0 generates the polynomials. See A019538 and the G. Rzadkowski link below for connections to the Bernoulli and Eulerian numbers, a Ricatti differential equation, and a soliton solution to the KdV equation. The inverse in x of this e.g.f. is Ginv(x,t) = (-1/t^2)*log{[1-t(1+x)]/[(1-t)(1-tx)]} = [1/(1-t)]x + [(2t-t^2)/(1-t)^2]x^2/2 + [(3t^2-3t^3+t^4)/(1-t)^3)]x^3/3 + [(4t^3-6t^4+4t^5-t^6)/(1-t)^4]x^4/4 + ... . The numerators are signed, shifted A135278 (reversed A074909), and the rational functions are the columns of A074909. Also, dG(x,t)/dx = g(G(x,t),t) (cf. A145271). (Analytic G(x,t) added, and Ginv corrected and expanded on Dec 28 2015.) - Tom Copeland, Nov 21 2014 The operator R = x + (1 + t) + t e^{-D} / [1 + t(1-e^(-D))] = x + (1+t) + t - (t+t^2) D + (t+3t^2+2t^3) D^2/2! - ... contains an e.g.f. of the reverse row polynomials of the present triangle, i.e., A123125 * A007318 (with row and column offset 1 and 1). Umbrally, R^n 1 = q_n(x;t) = (q.(0;t)+x)^n, with q_m(0;t) = (t+1)^(m+1) - t^(m+1), the row polynomials of A074909, and D = d/dx. In other words, R generates the Appell polynomials associated with the base sequence A074909. For example, R 1 = q_1(x;t) = (q.(0;t)+x) = q_1(0;t) + q__0(0;t)x = (1+2t) + x, and R^2 1 = q_2(x;t) = (q.(0;t)+x)^2 = q_2(0:t) + 2q_1(0;t)x + q_0(0;t)x^2 = 1+3t+3t^2 + 2(1+2t)x + x^2. Evaluating the polynomials at x=0 regenerates the base sequence. With a simple sign change in R, R generates the Appell polynomials associated with A248727. - Tom Copeland, Jan 23 2015 For a natural refinement of this array, see A263634. - Tom Copeland, Nov 06 2015 From Wolfdieter Lang, Mar 13 2017: (Start) The e.g.f. E(n, x) for {S(n, m)}_{m>=0} with S(n, m) = Sum_{k=1..m} k^n, n >= 0, (with undefined sum put to 0) is exp(x)*R(n+1, x) with the exponential row polynomials R(n, x) = Sum_{k=1..n} a(n, k)*x^k/k!. E.g., e.g.f. for n = 2, A000330: exp(x)*(1*x/1!+3*x^2/2!+2*x^3/3!). The o.g.f. G(n, x) for {S(n, m)}_{m >=0} is then found by Laplace transform to be G(n, 1/p) = p*Sum_{k=1..n} a(n+1, k)/(p-1)^(2+k). Hence G(n, x) = x/(1 - x)^(n+2)*Sum_{k=1..n} A008292(n,k)*x^(k-1). E.g., n=2: G(2, 1/p) = p*(1/(p-1)^2 + 3/(p-1)^3 + 2/(p-1)^4) = p^2*(1+p)/(p-1)^4; hence G(2, x) = x*(1+x)/(1-x)^4. This works also backwards: from the o.g.f. to the e.g.f. of {S(n, m)}_{m>=0}. (End) Seiichi Manyama, Table of n, a(n) for n = 1..10000 V. S. Abramovich, Power sums of natural numbers, Kvant, no. 5 (1973), 22-25. (in Russian) P. Bala, Deformations of the Hadamard product of power series Paul Barry, Three Études on a sequence transformation pipeline, arXiv:1803.06408 [math.CO], 2018. H. Belbachir, M. Rahmani, B. Sury, Sums Involving Moments of Reciprocals of Binomial Coefficients, J. Int. Seq. 14 (2011) #11.6.6 Hacene Belbachir and Mourad Rahmani, Alternating Sums of the Reciprocals of Binomial Coefficients, Journal of Integer Sequences, Vol. 15 (2012), #12.2.8. F. Brenti and V. Welker, f-vectors of barycentric subdivisions, arXiv:math/0606356v1 [math.CO], Math. Z., 259(4), 849-865, 2008. Patibandla Chanakya, Putla Harsha, Generalized Nested Summation of Powers of Natural Numbers, arXiv:1808.08699 [math.NT], 2018. See Table 1. T. Copeland, Generators, Inversion, and Matrix, Binomial, and Integral Transforms E. Delucchi, A. Pixton and L. Sabalka. Face vectors of subdivided simplicial complexes arXiv:1002.3201v3 [math.CO], Discrete Mathematics, Volume 312, Issue 2, January 2012, Pages 248-257. G. H. E Duchamp, N. Hoang-Nghia, A. Tanasa, A word Hopf algebra based on the selection/quotient principle, Séminaire Lotharingien de Combinatoire 68 (2013), Article B68c. M. Dukes, C. D. White, Web Matrices: Structural Properties and Generating Combinatorial Identities, arXiv:1603.01589 [math.CO], 2016. H. Hasse, Ein Summierungsverfahren fuer die Riemannsche Zeta-Reihe. Shi-Mei Ma, A family of two-variable derivative polynomials for tangent and secant, El J. Combinat. 20(1) (2013), P11. A. Riskin and D. Beckwith, Problem 10231, Amer. Math. Monthly, 102 (1995), 175-176. G. Rzadkowski, Bernoulli numbers and solitons revisited, Journal of Nonlinear Mathematical Physics, Volume 17, Issue 1, 2010. John K. Sikora, On Calculating the Coefficients of a Polynomial Generated Sequence Using the Worpitzky Number Triangles, arXiv:1806.00887 [math.NT], 2018. G. J. Simmons, A combinatorial problem associated with a family of combination locks, Math. Mag., 37 (1964), 127-132 (but there are errors). The triangle is on page 129. N. J. A. Sloane, Transforms Sam Vandervelde, The Worpitzky Numbers Revisited, Amer. Math. Monthly, 125:3 (2018), 198-206. Wikipedia, Bernoulli number. Wikipedia, Barycentric subdivision E.g.f.: -log(1-y*(exp(x)-1)). - Vladeta Jovovic, Sep 28 2003 a(n, k) = S2(n, k)*(k-1)! where S2(n, k) is a Stirling number of the second kind (cf. A008277). Also a(n,k) = T(n,k)/k, where T(n, k) = A019538. Essentially same triangle as triangle [1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7, ...] DELTA [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, ...] where DELTA is Deléham's operator defined in A084938, but the notation is different. Sum of terms in n-th row = A000629(n) - Gary W. Adamson, May 30 2005 The row generating polynomials P(n, t) are given by P(1, t)=t, P(n+1, t) = t(t+1)(d/dt)P(n, t) for n >= 1 (see the Riskin and Beckwith reference). - Emeric Deutsch, Aug 09 2005 From Gottfried Helms, Jul 12 2006: (Start) Delta-matrix as can be read from H. Hasse's proof of a connection between the zeta-function and Bernoulli numbers (see link below). Let P = lower triangular matrix with entries P[row,col] = binomial(row,col). Let J = unit matrix with alternating signs J[r,r]=(-1)^r. Let N(m) = column matrix with N(m)(r) = (r+1)^m, N(1)--> natural numbers. Let V = Vandermonde matrix with V[r,c] = (r+1)^c. V is then also N(0)||N(1)||N(2)||N(3)... (indices r,c always beginning at 0). Then Delta = P*J * V and B' = N(-1)' * Delta, where B is the column matrix of Bernoulli numbers and ' means transpose, or for the single k-th Bernoulli number B_k with the appropriate column of Delta, B_k = N(-1)' * Delta[ *,k ] = N(-1)' * P*J * N(k). Using a single column instead of V and assuming infinite dimension, H. Hasse showed that in x = N(-1) * P*J * N(s), where s can be any complex number and s*zeta(1-s) = x. His theorem reads: s*zeta(1-s) = Sum_{n>=0..inf} (n+1)^-1*delta(n,s), where delta(n,s) = Sum_{j=0..n} (-1)^j * binomial(n,j) * (j+1)^s. (End) The k-th row (k>=1) contains a(i, k) for i=1 to k, where a(i, k) satisfies Sum_{i=1..n} C(i, 1)^k = 2 * C(n+1, 2) * Sum_{i=1..k} a(i, k) * C(n-1, i-1)/(i+1). E.g., Row 3 contains 1, 3, 2 so Sum_{i=1..n} C(i, 1)^3 = 2 * C(n+1, 2) * [ a(1, 3)/2 + a(2, 3)*C(n-1, 1)/3 + a(3, 3)*C(n-1, 2)/4 ] = [ (n+1)*n ] * [ 1/2 + (3/3)*C(n-1, 1) + (2/4)*C(n-1, 2) ] = ( n^2 + n ) * ( n -1 + [ C(n-1, 2) + 1 ]/2 ) = C(n+1, 2)^2. See A000537 for more details ( 1^3 + 2^3 + 3^3 + 4^3 + 5^3 + ... ). - André F. Labossière, Sep 22 2003 a(n,k) = k*a(n-1,k) + (k-1)*a(n-1,k-1) with a(n,1) = 1 and a(n,n) = (n-1)!. - Johannes W. Meijer, Jun 18 2009 Rephrasing the Meijer recurrence above: Let M be the (n+1)X(n+1) bidiagonal matrix with M(r,r) = M(r,r+1) = r, r >= 1, in the two diagonals and the rest zeros. The row a(n+1,.) of the triangle is row 1 of M^n. - Gary W. Adamson, Jun 24 2011 From Tom Copeland, Oct 11 2011: (Start) With e.g.f.. A(x,t) = G[(t+1)x,-1/(t+1)]-1 (from 2008 comment) = -1 + 1/[1-(1+t)(1-e^(-x))] = (1+t)x + (1+3t+2t^2)x^2/2! + ..., the comp. inverse in x is B(x,t)= -log(t/(1+t)+1/((1+t)(1+x))) = (1/(1+t))x - ((1+2t)/(1+t)^2)x^2/2 + ((1+3t+3t^2)/(1+t)^3)x^3/3 + .... The numerators are the row polynomials of A074909, and the rational functions are (omitting the initial constants) signed columns of the re-indexed Pascal triangle A007318. Let h(x,t)= 1/(dB/dx) = (1+x)(1+t(1+x)), then the row polynomial P(n,t) = (1/n!)(h(x,t)*d/dx)^n x, evaluated at x=0, A=exp(x*h(y,t)*d/dy) y, eval. at y=0, and dA/dx = h(A(x,t),t), with P(1,t)=1+t. (Series added Dec 29 2015.)(End) Let <n,k> denote the Eulerian numbers A173018(n,k), then T(n,k) = Sum_{j=0..n} <n,j>*binomial(n-j,n-k). - Peter Luschny, Jul 12 2013 Matrix product A007318 * A131689. The n-th row polynomial R(n,x) = Sum_{k >= 1} k^(n-1)*(x/(1 + x))^k, valid for x in the open interval (-1/2, inf). Cf A038719. R(n,-1/2) = (-1)^(n-1)*(2^n - 1)*Bernoulli(n)/n. - Peter Bala, Jul 14 2014 a(n,k) = A141618(n,k) / C(n,k-1). - Tom Copeland, Oct 25 2014 For the row polynomials, A028246(n,x) = A019538(n-1,x) * (1+x). - Tom Copeland, Dec 28 2015 A248727 = A007318*(reversed A028246) = A007318*A130850 = A007318*A123125*A007318 = A046802*A007318. - Tom Copeland, Nov 14 2016 n-th row polynomial R(n,x) = (1+x) o (1+x) o ... o (1+x) (n factors), where o denotes the black diamond multiplication operator of Dukes and White. See example E11 in the Bala link. - Peter Bala, Jan 12 2018 The triangle a(n, k) starts: n\k 1 2 3 4 5 6 7 8 9 1: 1 2: 1 1 3: 1 3 2 4: 1 7 12 6 5: 1 15 50 60 24 6: 1 31 180 390 360 120 7: 1 63 602 2100 3360 2520 720 8: 1 127 1932 10206 25200 31920 20160 5040 9: 1 255 6050 46620 166824 317520 332640 181440 40320 ... [Reformatted by Wolfdieter Lang, Mar 26 2015] ----------------------------------------------------- Row 5 of triangle is {1,15,50,60,24}, which is {1,15,25,10,1} times {0!,1!,2!,3!,4!}. From Vladimir Shevelev, Dec 22 2011: (Start) Also, for power sums, we have S_0(n) = C(n,1); S_1(n) = C(n,1) + C(n,2); S_2(n) = C(n,1) + 3*C(n,2) + 2*C(n,3); S_3(n) = C(n,1) + 7*C(n,2) + 12*C(n,3) + 6*C(n,4); S_4(n) = C(n,1) + 15*C(n,2) + 50*C(n,3) + 60*C(n,4) + 24*C(n,5); etc. For X = [1,2,3], the sets T are {{}}, {{},{1,2}}, {{},{1,3}}, {{},{1,2,3}}, {{},{1,2},{1,2,3}}, {{},{1,3},{1,2,3}} and so a(3,1)=1, a(3,2)=3, a(3,3)=2. - Michael Somos, Apr 20 2013 a := (n, k) -> add((-1)^(k-i)*binomial(k, i)*i^n, i=0..k)/k; seq(print(seq(a(n, k), k=1..n)), n=1..10); T := (n, k) -> add(eulerian1(n, j)*binomial(n-j, n-k), j=0..n): seq(print(seq(T(n, k), k=0..n)), n=0..9); # Peter Luschny, Jul 12 2013 a[n_, k_] = Sum[(-1)^(k-i) Binomial[k, i]*i^n, {i, 0, k}]/k; Flatten[Table[a[n, k], {n, 10}, {k, n}]] (* Jean-François Alcover, May 02 2011 *) (PARI) {T(n, k) = if( k<0 || k>n, 0, n! * polcoeff( (x / log(1 + x + x^2 * O(x^n) ))^(n+1), n-k))}; /* Michael Somos, Oct 02 2002 */ (Sage) def A163626_row(n) : var('x') A = [] for m in range(0, n, 1) : A.append((-x)^m) for j in range(m, 0, -1): A[j - 1] = j * (A[j - 1] - A[j]) return coefficientlist(A[0], x) for i in (1..7) : print A163626_row(i) # Peter Luschny, Jan 25 2012 Dropping the column of 1's gives A053440. See also A008277. Without the k in the denominator (in the definition), we get A019538. See also the Stirling number triangle A008277. Cf. A087127, A087107, A087108, A087109, A087110, A087111, A084938 A075263. Row sums give A000629(n-1) for n >= 1. A027642, A002445. - Gary W. Adamson, Aug 09 2008 Appears in A161739 (RSEG2 triangle), A161742 and A161743. - Johannes W. Meijer, Jun 18 2009 Binomial transform is A038719. Cf. A053440, A131689. Cf. A007318, A008292, A046802, A074909, A090582, A123125, A130850, A135278, A141618, A145271, A163626, A248727, A263634. Sequence in context: A134436 A186370 A163626 * A082038 A143774 A196842 Adjacent sequences: A028243 A028244 A028245 * A028247 A028248 A028249 nonn,easy,nice,tabl N. J. A. Sloane, Doug McKenzie (mckfam4(AT)aol.com) Definition corrected by Li Guo, Dec 16 2006 Typo in link corrected by Johannes W. Meijer, Oct 17 2009 Error in title corrected by Johannes W. Meijer, Sep 24 2010 Edited by M. F. Hasler, Oct 29 2014 approved
http://oeis.org/A028246
CC-MAIN-2018-39
en
refinedweb
I read on D's documentation that it's possible to format strings with arguments as print statements, such as the following: float x = 100 / 3.0; writefln("Number: %.*g", 2, x); Number: 33.33 However, I'm wondering how I would do this if I just wanted the string equivalent, without printing it. I've looked at the std.format library but that seems way to messy for something I only need to use once. Is there anything a little bit more clear available? Import std.string or std.format and use the format function. import std.string; void main() { float x = 100 / 3.0; auto s = format("Number: %.*g", 4, x); assert(s == "Number: 33.33"); }
http://databasefaq.com/index.php/answer/1851/string-formatting-d-formatting-a-string-in-d
CC-MAIN-2018-39
en
refinedweb
Rationale for the USA Federal Budget Deficit Explain why the United States has ran a federal budget deficit for the majority of the last 80 years. Explain why the United States has ran a federal budget deficit for the majority of the last 80 years. Good-N-Safe Insurance Agency's finance department has been swamped with claims stemming from a recent hurricane. Because they're overwhelmed with work they can justify allowing one of their inspectors, who also happens to be accounting savvy, to process the necessary purchase order, approve the purchase order, make the final ap The MacBurger Company a chain of fast food restaurants, expects to earn $200 million after taxes for the current year. The company has a policy of paying out half of its net after-tax income to the holders of the company's 100 million shares of common stock. A share of the common stock of the company currently sells for eight University Athletic Wear (UAW) is evaluating several new product proposals. Resources are available for any or all of the products . UAW uses a MARR of 15% and a time span of 5 years for project evaluations. Determine which new product(s) should be chosen using Internal rate of return criteria. MARR=15% AAAcme Company is preparing an investment plan and has received three proposals with investments and returns as shown below. Which one(s) should be approved using a discounted payback period of 3 years (meaning by the end of year 3). AAAcme uses a MARR of 12%. Cash Flow Year X How does the cost of traditional health care services affect the demand for alternative care? Please list all sources. What is third-degree price discrimination? What three conditions must be met for third-degree price discrimination to be feasible? Give examples of firms that use third-degree price discrimination. A manufacturing firm is considering the mutually exclusive alternatives shown below. Determine which is a better choice at a MARR if 15% based on the IRR criterion. n Project A1 Project A2 0 $(2,500) $(3,600) 1 $1,600 $2,600 2 $1,840 $2,200 Debate the American economy. If we are facing problems with jobs and cutting taxes, how will it work if they privatize Medicare and cap social spending? Chapter 11 1. Why does the assumption of independence of risks matter in the example of insurance? What would happen to premiums if the probabilities of house burning were positively correlated? 14. Small firms can discover the abilities of their workers more quickly than large ones because they can observe the workers more Do you think that Hyatt practiced SHRM in deciding to outsource housekeeping? Can you determine if it improved or hurt the bottom line? What would you have advised Hyatt management to do to remain competitive now and in the future? Text Hyatt Report When the housekeepers at the three Hyatt hotels in the Boston area we It is sometimes said that a manager of a monopoly can charge any price and customers will still have to buy the product. Do you agree or disagree? Why?.Pr a. Discuss the problems of measuring productivity in actual work situations. b. How might productivity be measured for each of the following industries? i. Education (e.g., elementary and secondary education, higher education - undergraduate and graduate) ii. Government (e.g., the Social Security Office, the Internal Revenue As more potentially life-saving, but expensive drugs come to market, patients and insurance companies have difficult decisions to make. While in theory people would pay anything to save their own life or that of a family member, the efficacy of some of these drugs is uncertain and some are not curative. For example, Genentech's How will each of the following affect the supply for insurance? a. a larger pool of insured persons b. lower administration costs for insurance companies c. higher premiums (with no change in risk experience) d. a greater degree of risk aversion on the part of insurers a. Discuss your view of elasticity of gasoline based upon the changes we have experienced during the past few years. b. Why is the long run elasticity greater than the short run elasticity for gasoline? c. Under what economic situations will public transportation present potential competition to private automobiles? List the special characteristics of the U. S. health care market and specify how each affects health care problems. Consider a society with two people. Rosa earns an income of $120,000 per year and Jake earns an income of $45,000 per year. The government is considering a redistribution plan that would impose a 23% tax on Rosa's income and give the revenue to Jake. This policy distorts Rosa's and Jake's incentives, such that instead Rosa re Name effects of the current budget of the nation on the economy. 1. (Part A - You are considering an upgrade at a chip manufacturing plant. A new VLSI testing station costs $100,000, installed. It will save $50,000 per year in labor costs. Use five-year tax life and a tax rate of 40%. Find the rate of return (IRR). (Use two sig. figs.)) Part B- If the unit requires reprogramming calibratio How would you fix the Social Security system in the United States? Advanced Analysis, let MUa=z=10-x and MUb=z=21-2y, where is marginal utility per dollar measured in utils, x is the amount spent on product A, and y is the amount spent on product B. Assume that the consumer has $ 10 to spend on A and B_that is , x+y=10. How is the $10.00 best allocated between A and B? How much utility will the 1. - Colgate Distributing Company has the option to provide to its sales representatives a car or reimburse them the mileage for the use of its own cars. If the company provides the car, it will pay all the expenses related to it, including gas for business travels. The estimations are as follow: Car cost $15,000 Estimated Li Assume the U.S. decides to implement a system of national health insurance that provides to all citizens a basic benefit package that covers most necessary services. Those who wish to purchase private insurance may do so, but they will not be covered by national insurance and will have to pay all their medical expenses. What do The current structure of healthcare coverage in the U.S. has a combination of employer-based coverage, public program (e.g., Medicare, Medicaid) coverage and a large uninsured population. Suppose that a single-payer system is implemented where Medicare is expanded to cover all citizens. For those who prefer private insurance, th Why not have State governments levy tariffs on imports, or tax other states' products. Would this be a sensible way to raise revenues? What are the advantages/disadvantages? Provide research to support positions. Brief with bullet points will suffice. Who should pay the costs of medical research and the costs of training new physicians? Should these be paid by the government through Medicare and Medicaid? By government directly through tax dollars? By health insurers? Consider each and explain why you think it is appropriate or not.
https://brainmass.com/economics/personal-finance-savings/pg3
CC-MAIN-2018-39
en
refinedweb
Thursday, July 14, 2016 Cross Cluster Services - Achieving Higher Availability for your Kubernetes Applications Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.3 As Kubernetes users scale their production deployments we’ve heard a clear desire to deploy services across zone, region, cluster and cloud boundaries. Services that span clusters provide geographic distribution, enable hybrid and multi-cloud scenarios and improve the level of high availability beyond single cluster multi-zone deployments. Customers who want their services to span one or more (possibly remote) clusters, need them to be reachable in a consistent manner from both within and outside their clusters. In Kubernetes 1.3, our goal was to minimize the friction points and reduce the management/operational overhead associated with deploying a service with geographic distribution to multiple clusters. This post explains how to do this. Note: Though the examples used here leverage Google Container Engine (GKE) to provision Kubernetes clusters, they work anywhere you want to deploy Kubernetes. Let’s get started. The first step is to create is to create Kubernetes clusters into 4 Google Cloud Platform (GCP) regions using GKE. - asia-east1-b - europe-west1-b - us-east1-b - us-central1-b Let’s run the following commands to build the clusters: gcloud container clusters create gce-asia-east1 \ --scopes cloud-platform \ --zone asia-east1-b gcloud container clusters create gce-europe-west1 \ --scopes cloud-platform \ --zone=europe-west1-b gcloud container clusters create gce-us-east1 \ --scopes cloud-platform \ --zone=us-east1-b gcloud container clusters create gce-us-central1 \ --scopes cloud-platform \ --zone=us-central1-b Let’s verify the clusters are created: gcloud container clusters list NAME ZONE MASTER\_VERSION MASTER\_IP NUM\_NODES STATUS gce-asia-east1 asia-east1-b 1.2.4 104.XXX.XXX.XXX 3 RUNNING gce-europe-west1 europe-west1-b 1.2.4 130.XXX.XX.XX 3 RUNNING gce-us-central1 us-central1-b 1.2.4 104.XXX.XXX.XX 3 RUNNING gce-us-east1 us-east1-b 1.2.4 104.XXX.XX.XXX 3 RUNNING The next step is to bootstrap the clusters and deploy the federation control plane on one of the clusters that has been provisioned. If you’d like to follow along, refer to Kelsey Hightower’s tutorial which walks through the steps involved. Federated Services Federated Services are directed to the Federation API endpoint and specify the desired properties of your service. Once created, the Federated Service automatically: - creates matching Kubernetes Services in every cluster underlying your cluster federation, - monitors the health of those service β€œshards” (and the clusters in which they reside), and - manages a set of DNS records in a public DNS provider (like Google Cloud DNS, or AWS Route 53), thus ensuring that clients of your federated service can seamlessly locate an appropriate healthy service endpoint at all times, even in the event of cluster, availability zone or regional outages.. GCP, AWS), and on-premise (e.g. on OpenStack). All you need to do is create your clusters in the appropriate cloud providers and/or locations, and register each cluster’s API endpoint and credentials with your Federation API Server. In our example, we have clusters created in 4 regions along with a federated control plane API deployed in one of our clusters, that we’ll be using to provision our service. See diagram below for visual representation. Creating a Federated Service Let’s list out all the clusters in our federation: kubectl --context=federation-cluster get clusters NAME STATUS VERSION AGE gce-asia-east1 Ready 1m gce-europe-west1 Ready 57s gce-us-central1 Ready 47s gce-us-east1 Ready 34s Let’s create a federated service object: kubectl --context=federation-cluster create -f services/nginx.yaml The β€˜β€“context=federation-cluster’ flag tells kubectl to submit the request to the Federation API endpoint, with the appropriate credentials. The federated service will automatically create and maintain matching Kubernetes services in all of the clusters underlying your federation. You can verify this by checking in each of the underlying clusters, for example: kubectl --context=gce-asia-east1a get svc nginx NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx 10.63.250.98 104.199.136.89 80/TCP 9m The above assumes that you have a context named β€˜gce-asia-east1a’ configured in your client for your cluster in that zone. The name and namespace of the underlying services will automatically match those of the federated service that you created above..XXX.XX.XXX, 104.XXX.XX.XXX, 104.XXX.XX.XXX, 104.XXX.XXX.XX Port: http 80/TCP Endpoints: \<none\> Session Affinity: None No events. The β€˜LoadBalancer Ingress’ addresses of your federated service corresponds with the β€˜LoadBalancer Ingress’ addresses of all of the underlying Kubernetes services. For inter-cluster and inter-cloud-provider networking between service shards to work correctly, your services need to have an externally visible IP address. Service Type: Loadbalancer is typically used here. Note also what. Adding Backend Pods our underlying clusters: for CLUSTER in asia-east1-a europe-west1-a us-east1-a us-central1-a do kubectl --context=$CLUSTER run nginx --image=nginx:1.11.1-alpine --port=80 done Verifying Public DNS Records Once the Pods have successfully started and begun listening for connections, Kubernetes in each cluster (via automatic health checks) will report them as healthy endpoints of the service in that cluster..XXX.XXX.XXX, 130.XXX.XX.XXX, 104.XXX.XX.XXX, 104.XXX.XXX.XX nginx.mynamespace.myfederation.svc.us-central1-a.example.com. A 180 104.XXX.XXX.XXX nginx.mynamespace.myfederation.svc.us-central1.example.com. nginx.mynamespace.myfederation.svc.us-central1.example.com. A 180 104.XXX.XXX.XXX, 104.XXX.XXX.XXX, 104.XXX.XXX.XXX nginx.mynamespace.myfederation.svc.asia-east1-a.example.com. A 180 130.XXX.XX.XXX nginx.mynamespace.myfederation.svc.asia-east1.example.com. nginx.mynamespace.myfederation.svc.asia-east1.example.com. A 180 130.XXX.XX.XXX, 130.XXX.XX.XXX nginx.mynamespace.myfederation.svc.europe-west1.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.example.com. ... etc. Note: If your Federation is configured to use AWS Route53, you can use one of the equivalent AWS tools, for example: $aws route53 list-hosted-zones and . Discovering a Federated Service from pods Inside your Federated Clusters By default, Kubernetes clusters come preconfigured IP of services running in the local cluster. With the introduction of Federated Services and Cross-Cluster Service Discovery, this concept is extended to cover Kubernetes services running in any other cluster across your Cluster Federation, globally. To take advantage of this extended range, you use a slightly different DNS name (e.g. myservice.mynamespace.myfederation) to resolve federated services. Using a different DNS name also avoids having your existing applications accidentally traversing cross-zone or cross-region networks and you incurring perhaps unwanted network charges or latency, without you explicitly opting in to this behavior. So, using our NGINX example service above, and the federated service DNS name form just described, let’s consider an example: A Pod in a cluster in the us-central1-a availability zone needs to contact our NGINX service. Rather than use the service’s traditional cluster-local DNS name (β€œnginx.mynamespace”, which is automatically expanded to”nginx.mynamespace.svc.cluster.local”) it can now use the service’s Federated DNS name, which is”nginx.mynamespace.myfederation”. This will be automatically expanded and resolved to the closest healthy shard of my NGINX service, wherever in the world that may be. If a healthy shard exists in the local cluster, that service’s cluster-local (typically 10.x.y.z) IP address will be returned (by the cluster-local KubeDNS). This is exactly equivalent to non-federated service resolution. If the service does not exist in the local cluster (or it exists but has no healthy backend pods), the DNS query is automatically expanded to β€œnginx.mynamespace.myfederation.svc.us-central1-a.example.com”. Behind the scenes, this is finding the external IP of one of the shards closest to my availability zone. This expansion is performed automatically by KubeDNS, which returns the associated CNAME record. This results in a traversal of the hierarchy of DNS records in the above example, and ends up at one of the external IP’s of the Federated Service in the local us-central1 region. It is also possible to target service shards in availability zones and regions other than the ones local to a Pod by specifying the appropriate DNS names explicitly, and not relying on automatic DNS expansion. For example, β€œnginx.mynamespace.myfederation.svc.europe-west1.example.com” will resolve to all of the currently healthy service shards in Europe, even if the Pod issuing the lookup is located in the U.S., and irrespective of whether or not there are healthy shards of the service in the U.S. This is useful for remote monitoring and other similar applications. Discovering a Federated Service from Other Clients Outside your Federated Clusters For external clients, automatic DNS expansion described is no longer possible. automatically routed to the closest healthy shard on their home continent. All of the required failover is handled for you automatically by Kubernetes Cluster Federation. Handling Failures of Backend Pods and Whole Clusters Standard Kubernetes service cluster-IP’s already ensure that non-responsive individual Pod endpoints are automatically taken out of service with low latency. The Kubernetes cluster federation system automatically monitors the health of clusters and the endpoints behind all of the shards of your federated service, taking shards in and out of service as required.. Community We’d love to hear feedback on Kubernetes Cross Cluster Services. To join the community: - Post issues or feature requests on GitHub - Join us in the #federation channel on Slack - Participate in the Cluster Federation SIG Please give Cross Cluster Services a try, and let us know how it goes! – Quinton Hoole, Engineering Lead, Google and Allan Naim, Product Manager, Google
https://kubernetes.io/blog/2016/07/cross-cluster-services/
CC-MAIN-2018-39
en
refinedweb
Modified round robin pairing. TLDR; Do you want to organize a tournament where the pairing system is round robin but some players need to play against other players at a specific time? This gist is what you are looking for. If you want the full story, read on. On friday evening, I got a call from the tournament director of Sri Lanka Scrabble. he wanted to know if I could write a bit of code for him to change the pairing of the Premier Players Tournament - a round robin event. Now you might read Scrabble and Sri Lanka in the same sentence and decide to move on, but hold it right there, Sri Lanka is fast on it's way to being a global Scrabble power house. For example the young scrabblers won six prizes at the World Youth Scrabble Championship three months ago. But I digress. An odd number of players take part in this Premier Players Tournament and that means there is a bye. Another participant is Quackle. Sometimes players ask to receive the bye in a certain round so that they can rush out and attend to pressing matters in the middle of the tournament. Or if they get the bye at the start or end of the day, they can then show up later or leave early. On this occaision five of the players had wanted the timing of the bye fixed and two others wanted to play against Quackle in a specific round. If only the bye needs to be accommodated, it can be done very easily just by changing the seeds. The algorithm to generate a Berger table is described on Wikiepdia, thus it's easy to see that if someone wants to have the bye in the first round, he should be given the first seed and Bye the 10th seed (or vice verce). As pointed out by my friend Jayendra De Silva (former Tournament director for Sri Lanka Chess), you can fix the by even for all eight players simply by changing the seeds and it can be done manually even with pen and paper If in addition to 'fixing the bye' one players wants to fix a time for the game against Quackle, that can be arranged by changing the seed for Quackle. Trouble arises when more than one player wants to some combinations turn out to be impossible. In fact I spent quite a bit of time trying out various algorithms to try to 'flip pairings'. I soon reached the conclusion that the combination asked for was impossible but had not mathematical proof. So I ended up writing a brute force method to produce all possible combinations and to see if any one of them met the requirements. For 9 or 10 players, there are 3,628,800 possible pairings but it takes only a little more than half a minute to produced them all. However this is an O(n!) problem so it might well be impossible for twenty four players. Number of possible pairings is huge and bigger than even Avagadro's number. Trying out all combinations didn't produce a match, but the code can be used to generate customized round robin pairing schedules for many situations. Since many tournament directors probably will not be able to make much use of the code as it is, I will be releasing an online pairing generator real soon. There are quite a few of them already but none of them allow you to reschedule games as described above. Wifi with Qualcomm Atheros Device [168c:0042] (rev 30) on Linux Mint TLDR; sudo apt-get install build-essential linux-headers-$(uname -r) git echo "options ath10k_core skip_otp=y" | sudo tee /etc/modprobe.d/ath10k_core.conf wget tar -jxf 2015-11-05.tar.bz2 cd backports-ath10k-2015-11-05 make defconfig-ath10k make sudo make install git clone sudo cp -r ath10k-firmware/ath10k/ /lib/firmware/ sudo cp -r ath10k-firmware/QCA9377 /lib/firmware/ath10k/ cp firmware-5.bin_WLAN.TF.1.0-00267-1 firmware-5.bin This is sourced primarily from however, by the time that this post is being written the original download link in the post had become invalid. Also note that grabbing the latest backport from the Linux Kernel Backports did not work either. Right now you will want to know what this is all about. This is how you make the atheros wifi device work linux. This particular occaision it was on Linux Mint Rosa with kernerl 3.19. I had tried upgrading the kernel to the 4.2 version but wifi didn't work out of the box even though the code is supposed to have been committed. Besides isn't that where the backport comes from? Life is full of mysteries. A more meaningful way to reschedule Zyzzyva cardboxes Playing competitive Scrabble but not using Zyzzyva cardboxes? Well you should. Using Zyzzyva cardboxes but you have an unmanagable number of quizzes coming up each day? Read on. We are not going to talk about program's built in rescheduling feature. It's usefull if you miss your quizzes for a few days but it's not very usefull if the number of questions grows to unmanagble levels after you have just added a large number of words to your vocabularly. As always, prevention is better than the cure and there is way to prevent or minimize it from happening; When you learn a set of new words, make sure that you do a standard quiz with them before adding them to the cardbox. That way the questions are spaced out instead of being bunched up. Even then, if you keep adding a few hundred words each day for about week such as when preparing for a tournament, you will end up with a cardbox that looks something like this. Our strategy will be to spread these out over several days. Half the questions scheduled for the next 12 days will be pushed back by 12 days. Thus the number of questions coming up over each of the next 12 days will be halved and the number after the 12th day will increase. Twelve is a number I chose arbitarily. You can use any number but make sure it's not too large a value or the cardbox the question belongs to will need to be changed as well. How exactly do we do the change? by editing the data file. The quiz data is held in two files named ~/Zyzzyva/quiz/data/CSW12/Annagrams.db and ~/Zyzzyva/quiz/data/CSW12/Hooks.db note that ~ denotes your home folder. Close any active instances of zyzzyva and double click on the data file. If you are nervous backup the data file before you try this out. When you click you will see something like the following pop up on your desktop. If you don't see something like that, it means you do not have sqlite installed on your computer. No fear, you can install sqlite quite easily on your computer. But if that feels a bit daunting you can add the sqlite exention to chrome or firefox and edit the data file in your browser. Once you are ready, you just need to copy paste the following text into the sqlite console and click run. update questions SET next_scheduled = next_scheduled + 86400*12 where next_scheduled is not null and next_scheduled < strftime('%s','now') + 86400*12 AND next_scheduled % 2 =0 Now close sqlite3 and reopen Zyzzyva you will find that the quizzes have been rescheduled in a more meaningful manner than is possible with the built in feature. Vim and python debug Vebugger This is the first VIM based debug connection that I managed to setup. Vebugger will stop at break points and you can use it's commands to inspect variables. But it's somewhat tedious because it doesn't seem have an interactive shell like pdb does. Vebugger is a frontend that can be used with many different languages. In case of python what this actually does is connect to pdb, so I am left wondering if it would be better off to just use pdb by itself. Having said that this project looks like it's been actively maintained and hopefully in a few months time we might see some more functionality and it will become more usefull.. Vim-debug I tried to install it by adding Plugin 'jaredly/vim-debug' without any luck. Well the installation did happen by the :Dbg command wan't enabled. Then I uninstalled it with :PluginClean and tried a manual install with better results. In fact I could quite easily start the debugger after restarting vim just by :Dbg . and hey presto all sorts of splits appeared with stacks, variables etc. Impressive. And just to think this software hasn't been updated for 2 years! Now for the acid test, can it debug Django? unfortunately not. :Dbg manage.py runserver 0.0.0.0:8000 yields Traceback (most recent call last): File " ", line 1, in File "/usr/local/virtual-django-15/lib/python2.7/site-packages/vim_debug/commands.py", line 19, in debugger_cmd return start(*shlex.split(plain)) TypeError: start() takes at most 1 argument (3 given) But I am hopefull that I will be able to make something out this for example you can just create another script than invokes manage.py with the set of arguments that you need. Clunky but usable. The code is in python and looks really good if you ignore the fact that it conflicts with many keymaps including Command-T. This conflict can apparently be fixed by adding let g:vim_debug_disable_mappings = 1 to .vimrc . However in my case it didn't happen and I had no choice than to uninstall it. Vdebug It can be setup with Vundle but you need to download pydbgp from Komodo afterwards start the debug server by hitting <F5> in vim and run your script from the terminal python -S ~/bin/pydbgp -d localhost:9000 hello.py for me it didn't quite work out in the first attempt. Traceback (most recent call last): File "/home/raditha/bin/pydbgp", line 71, in import getopt ImportError: No module named getopt Turns out this is caused by the -S option for python which says don't imply 'import site' on initialization whatever that means! I am guessing it has something to do with site-packages in a virtualenv but at any rate running the script without the -S option worked! So let's leave it at that Now the acid test. Does it work with Django? Typed python ~/bin/pydbgp -d localhost:9000 manage.py runserver and was quite pleased to see it stop at the first line inside manage.py and then hit Debugger engine says it is in interactive mode,"+\ "which is not supported: closing connection" I tracked it down to runner.py in vdebug but didn't have the time to dig any deeper. If nothing else works out I will come back here DBGPavim Like Vdebug, has the Komodo remote debug client listed as a dependency. Unfortunately I couldn't elicit any response from it by pressesing vimpdb Nextup was vimpdb. According to the documentation this shouldn't be installed as a plugin but using pip. Then you need to add import vimpdb; vimpdb.set_trace() to your code and start the script. This is similar to adding import pdb; pdb.set_trace() for use with pdb. The difference here is that a new instance of vim will be launched instead of pdb when you run the script. It doesn't seem to be able to reuse an already started vim session. The basic script could be debugged wtihout any trouble and it didn't freeze or thrown an error message when django manage.py runserver eas exectued! Now the bad news, it didn't stop at a break point when a url was loaded in the browser. vim-addon-python-pdb And finally vim-addon-python-pdb but not much luck with that either. Error detected while processing function python_pdb#Setup..python_pdb#RubyBuffer: line 11: E117: Unknown function: async_porcelaine#LogToBuffer Error detected while processing function python_pdb#Setup..python_pdb#UpdateBreakPoints..python_pdb#BreakPointsBuffer: line 2: E117: Unknown function: buf_utils#GotoBuf E15: Invalid expression: buf_utils#GotoBuf(buf_name, {'create_cmd':'sp'} ) line 3: E121: Undefined variable: cmd E15: Invalid expression: cmd == 'e' Error detected while processing function python_pdb#Setup..python_pdb#UpdateBreakPoints: line 37: E117: Unknown function: vim_addon_signs#Push line 43: E716: Key not present in Dictionary: write Which PostGIS function to use? Which PostGIS function or operator do you use if you wanted to find the overlap between different lines? There are several candidates which immidiately come to mind; ~, && , ST_Overlaps and and ST_Intersects if you are in a hurry or TLDR; use ST_Overlap. The two operators They all do job but in different ways, takes different amounts of time and return slightly different results. Using only operators time 4.3237 rows 1208 Using ~ operator and ST_Overlap time 5.5201 rows 233 Using ST_Dwithin time 5.5551 rows 2500 Using ST_Intersects time 5.3128 rows 746 ST_DWithin seems to match everything to everything else. Using ~ and &&. Usin operatings seem to be a shade quicker than the other options. But the operators only deal with bounding boxes not the lines themselves so your milage may vary depending on your context. This test was carried out with a nested for loop with the other loop having 250 objects and the inner loop having 25. The data came from the road.lk carpool tables. Picking the right sample is very important in this test since different samples could produce different results so the sample was actually changed multiple times. The results followed the same pattern except for this one instance where all the routes were disjoint! Using only operators time 0.0174 rows 0 Using operators + overlap time 0.0176 rows 0 Using ST_Dwithin time 0.0177 rows 0 Using ST_Intersects time 0.0177 rows 0 I should have used number formatting but I didn't so sue me. Getting back to the first data set, the number of matches vary by a factor of eleven so we ought to check the quality of the matches. That can be done by using the overlapping distances. Using only operators time 7.1954 rows 1208 total overlap 0.199858778468 Using ~ operators and ST_Overlap time 6.9210 rows 233 total overlap 0.199858778468 Using ST_Dwithin time 10.7650 rows 2500 total overlap 0.199858778468 Using ST_Intersects time 7.1929 rows 746 total overlap 0.199858778468 Surprise, surprise all the overlapping distances are identical. So clearly the a.line && b.line greatly over estimates the number of overlaps and so does ST_Dwithin. Counter intuitively ST_Overlaps and ST_Intersects both seem to have the edge over the operators. Before winding down, I wanted to do one final test. As things stand the queries look like ' INNER JOIN pool_route b ON a.line ~ b.line OR b.line ~ a.line or a.line && b.line ' and ' INNER JOIN pool_route b ON a.line ~ b.line OR b.line ~ a.line or ST_Overlaps(a.line , b.line) ' will there be a difference if the contains operator (~) is removed? Using only operators time 7.4457 rows 1208 total overlap 0.199858778468 Using ~ operators and ST_Overlap time 6.0198 rows 64 total overlap 0.199858778468 Using ST_Dwithin time 10.7326 rows 2500 total overlap 0.199858778468 Using ST_Intersects time 7.3100 rows 746 total overlap 0.199858778468 Just ignore the timing the difference is too small to be taken seriously but look at how the number of matches has dropped dramatically from 266 to 64 for the ST_Overlap query. Yet the final overlapping distance hasn't changed. At this point I double checked and triple checked to make sure that my code doesn't have any bugs and it looks like there isn't. The only thing to do is to try with a much bigger dataset Using only operators time 317.762 rows 83075 total overlap 116.168481483 Using ST_Overlap time 223.555 rows 13884 total overlap 116.167709497 Using ST_Dwithin time 416.819 rows 160000 total overlap 116.168481483 Using ST_Intersects time 276.907 rows 46804 total overlap 116.168481483 This is with a much larger dataset, 400 objects in the outer loop, 400 objects in the inner loop. If you want to have look at the code that was used here you go: import os, sys if __name__ == '__main__': #pragma nocover # Setup environ sys.path.append(os.getcwd()) os.environ.setdefault("DJANGO_SETTINGS_MODULE", "main.settings_dev") from django.db import connection from pool.models import Route import time cursor = connection.cursor() routes = Route.objects.order_by('id')[0:100] routes2 = Route.objects.order_by('-id')[0:250] t0 = time.time() sum = 0 distance = 0 for r1 in routes: for r2 in routes2 : q = cursor.execute('SELECT count(*) , sum(ST_Length(ST_intersection(a.line,b.line))) FROM pool_route a ' ' INNER JOIN pool_route b ON a.line && b.line ' ' WHERE a.id = %s AND b.id = %s', [r1.id, r2.id]) row = cursor.fetchone() sum += row[0] if row[1]: distance += row[1] t1 = time.time() - t0 t0 = time.time() print 'Using only operators time ' , t1 , ' rows ' , sum ,' total overlap ', distance sum = 0 distance = 0 for r1 in routes: for r2 in routes2 : q = cursor.execute('SELECT count(*) , sum(ST_Length(ST_intersection(a.line,b.line))) FROM pool_route a ' ' INNER JOIN pool_route b ON ST_Overlaps(a.line , b.line) ' ' WHERE a.id = %s AND b.id = %s', [r1.id, r2.id]) row = cursor.fetchone() sum += row[0] if row[1]: distance += row[1] t1 = time.time() - t0 t0 = time.time() print 'Using ~ operators and ST_Overlap time ' , t1 , ' rows ' , sum ,' total overlap ', distance sum = 0 distance = 0 for r1 in routes: for r2 in routes2 : q = cursor.execute('SELECT count(*) , sum(ST_Length(ST_intersection(a.line,b.line))) FROM pool_route a ' ' INNER JOIN pool_route b ON ST_Dwithin(a.line, b.line, 50) ' ' WHERE a.id = %s AND b.id = %s', [r1.id, r2.id]) row = cursor.fetchone() sum += row[0] if row[1]: distance += row[1] t1 = time.time() - t0 t0 = time.time() print 'Using ST_Dwithin time ' , t1 , ' rows ' , sum, ' total overlap ', distance sum = 0 distance = 0 for r1 in routes: for r2 in routes2 : q = cursor.execute('SELECT count(*) , sum(ST_Length(ST_intersection(a.line,b.line))) FROM pool_route a ' ' INNER JOIN pool_route b ON ST_Intersects(a.line, b.line) ' ' WHERE a.id = %s AND b.id = %s', [r1.id, r2.id]) row = cursor.fetchone() sum += row[0] if row[1]: distance += row[1] t1 = time.time() - t0 print 'Using ST_Intersects time ' , t1 , ' rows ' , sum, ' total overlap ', distance Update: a bug in the code was fixed and the blog post formatted (by hand) Configure the vim plugins and other stuff. The Vim Autocomplete for Django This is the continuation of a journey that began by switching from eclipse/pydev to vim for python/django development via python mode. Looking to setup auto completion for python and django. Let's jump straight in Rope Originally rope was installed as part of python-mode and I quickly tired of it and switched to jedi it was only after installation that I discovered it to be incompatible with python-mode. That was what prompted my decision to give up on python mode (it has far too many blackboxes) and install everything manually. I managed to get auto complete working beautifully with rope or so I thought. and this is the configuration that was used let ropevim_vim_completion = 1 let ropevim_extended_complete = 1 let ropevim_enable_autoimport = 1 let g:ropevim_autoimport_modules = ["os.*","traceback","django.*"] imap <C-space> <C-R>=RopeCodeAssistInsertMode()<CR> The trouble with this setup is that auto completion would add the package name after the class name!. For example if you were to select HttpResponse from the dialog, what actually gets added is HttpResponse : django.http.HttpResponse. Using the let ropevim_vim_completion = 1 also means taking vim auto import out of the picture. The meaning of Auto import here is to automatically add the import statement to the top of the code without moving the cursor. This is one of the most usefull features of pydev If you set let ropevim_vim_completion = 0 auto complete still works but you lose the popup dialog. Instead you get a line near the bottom of the screen with the choice. You need to cycle though the options with the tab key. Jedi The real problem with jedi is that you need to know that full class name including it's package for auto complete to work. For example just typing Http<Ctrl Space> will not give you HttpResponse or any of the related packages from django.http. For me this is a real problem, I have never bothered to remember the full package names for classes in any of the languages I have ever used. There isn't a need for it. That's what IDEs are for! Conclusion. So it seems that both Jedi and Rope have their short comings. Where Jedi is strong rope is lacking and vice verce. Hay hang on a second. Can't we use both? The answer my dear madam is that you can! This is the relevent section of the .vimrc " Use both ropevim and jedi-vim for auto completion.Python-mode " let ropevim_vim_completion = 1 let ropevim_extended_complete = 1 let ropevim_enable_autoimport = 1 let g:ropevim_autoimport_modules = ["os.*","traceback","django.*"] imap <s-space> <C-R>=RopeCodeAssistInsertMode()<CR> let g:jedi#completions_command = "<C-Space>" let g:jedi#auto_close_doc = 1 Now when you are not sure of the package name you hit Shift-Space to do an rope auto import + auto complete action. You don't get a pop up but have to tab through the options. When the class has already been imported or you know the full name, hit Ctrl Space and get jedi to do the dirty work for you. Good bye python-mode I recently switched from pydev to vim for all python work. But after struggling with a nasty bug in my code for nearly two hours I was beginning to have second thoughts and started eclipse/pydev, ran it through the debugger. After that It took just 3 minutes to fix the bug including eclipse start up time. Ok, maybe it's because I haven't had any luck setting up a python debugger for vim. Maybe I should have just used pdb. Your devtools are support to make things easier not harder. Debugging isn't the only issue, django auto complete still doesn't work with either jedi or rope. I've tried about a dozen remedies listed on various places including stackoverflow with no luck at all. It's tempting to make the swith back to pydev a permanent one, but I am going to give it one more shot. The crux of the problem maybe with python-mode. Python-mode has a lot of other plugins under the hood and they are all sealed into small black boxes. It's just like driving a modern car. You are never really sure who is the boss. Now I am going to uninstall python-mode, follow the outdated guide at sontek and have another go at it. Hopefully at the end of the day, I might have a working vim setup and a modernized version of the sontek guide! This is just an as it happened commentary. After I settle in I will create a howto which will hopefully be helpfull to someone else. A couple of changes to the plugins. Instead of pyflakes I am installing syntastic and not going to install ack either. Both of these items might return to the menu if I find that vim's search features don't meet expectations. The same goes for gundo, which seems like a nice plugin, but can it replace git? make-green and pytest are to other plugins to miss out. The sontek blog gives a link to the NERD Tree plugin is given as vim-scripts/The-NERD-tree while there is a better maintained version at 'scrooloose/nerdtree' by the same guy who bring you syntastic. That's not the only plugins where the upto date version different from the location given in that blog post; others included ropevime (correct loctaion: python-rope/ropevim) and minibufexpl (correct as fholgado/minibufexpl.vim). All these plugins were installed using Vundle which probably wasn't around at the time the post was written. Vundle adding and removing plugins becomes completely painless. Now to start up. Predictably there is an error: Error detected while processing function LoadRope: line 4: Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named ropevim Error detected while processing /home/raditha/.vim/bundle/TaskList.vim/plugin/tasklist.vim: line 369: E227: mapping already exists for \t Press ENTER or type command to continue pip install rope ropevim takes care of the first one while for the second you will need to create a separate mapping according to your taste in keystrokes! Now the key to the whole puzzle: " Rope AutoComplete let ropevim_vim_completion = 1 let ropevim_extended_complete = 1 let g:ropevim_autoimport_modules = ["os.*","traceback","django.*", "xml.etree"] imap <c-space> <C-R>=RopeCodeAssistInsertMode()<CR> This is something I discovered stackoverflow, with it autocomplete suddenly starts working pretty much like it does in pydev. So does auto import. Start typing and press Ctrl-Space to activate autocomplete, once the autocomplete is done do : :RopeAutoImport to have the import statement automatically added towards the top of the file. This is rather clunky and the default keymapping C-c r a c is worse! but you are free to map it to any key of your choosing and I am selecting Ctrl-F1 And that brings an abrubt end to the proceedings but I will be back soon. The photo is a public domain pic by openclips
https://www.raditha.com/blog/page/3/
CC-MAIN-2018-39
en
refinedweb
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives I'd be curious to know what people's opinions are on the matter of symbol visibility, and whether there are any new and fresh ideas in this space. Now, you might think that symbol visibility would be a rather dull and pedestrian topic for a language theorist - few research papers on language design even mention the topic. However, I think that this is actually a very interesting area of language design. Limiting the visibility of symbols has immense practical value, especially for large code bases: the main benefit is that it simplifies reasoning about the code. If you know that a symbol can only be referenced by a limited subset of the code, it makes thinking about that symbol easier. There are a number of languages which have, in pursuit of the goal of overall simplicity, removed visibility options - for example Go does not support "protected" visibility. I think this is exactly the wrong approach - reducing overall cognitive burden is better achieved by having a rich language for specifying access, which allows the programmer to narrowly tailor the subset of code where the symbol is visible. Here's a real-world example of what I mean: One of the code bases I work on is a large Java library with many subpackages that are supposed to be internal to the library, but are in fact public because Java has no way to restrict visibility above the immediate package level. In fact, many of the classes have a comment at the top saying "treat as superpackage private", but there's no enforcement of this in the language. This could easily be solved if Java had something equivalent to C++'s "friend" declaration, the subpackages could then all be made private, and declare the library as a whole to be a friend. However, I wonder if there's something that's even better... i mostly heartily concur. especially since i just did some stuff in go. i say mostly because usability is a double-edged banana. things can (a) be poorly done in the language spec or (b) even if done well then the end-users can go crazy and perhaps make horribly complicated relationships that just make the code harder to grok for the next person down the line. This is only an idea, I don't know if it is implemented anywhere. I would prefer to write only minimal visibility annotations within code, at most something like Node.js's export magic variable. Instead, visibility would be controlled by separate interface files (akin to ML's module interfaces), and you could have several interfaces for different clients - your library would have the full interface to all modules, "subclassing" libraries would have a partial interface, and client programs would have only a very limited API. This would also allow to abstract types. export The only thing that is problematic with this approach is that it would be quite cumbersome for the programmer, or at least I have not yet thought of a way to make it obvious which interface applies to which part of code. Use modules & module signatures for information hiding. Not all visibility relationships are expressible via hierarchical containment. Protected is especially useful in languages that support implementation specialization via inheritance. This is expressible with module signatures, provided you have something that can model inheritance with modules (e.g. mixin composition of modules). Suppose you have class A, with some private and some protected and some public methods. Then you define B to inherit from A. You can model this with a module A, to which you apply a signature which hides the private methods but not the public and protected methods in A. Then you do mixin composition to obtain B, then you apply a signature which hides the protected methods from B to obtain the public version of B. This is of course a bit more manual than with a protected keyword, but I'm not convinced that protected is such a common pattern that it deserves its own keyword if it can be expressed using just signatures. So we can have a bunch of things that can be wired together to achieve a goal. Or we can implement another tool that is a succinct way of getting that goal. When do we know we should add that to the system? Vs. trying to not do that, to keep things "simple"? What are people's experiences here? I always like the sound of e.g. go-lang's parsimony, but then whenever I go to use something like that it drives me freaking nuts. "Just give me protected!" I rant and rave at the screen... I don't think there is an easy rule of thumb. You would have to look at a number of case studies and see whether protected makes them clearer and easier to express. Then you decide whether that is worth the added complexity. Controlling access to information is useful for reasoning about programs. However, there are many means to achieve this, and I'm not at all fond of the 'symbol' based visibility model. Symbol based visibility is problematic for several reasons. They greatly increase coupling within the 'class' or whatever unit of visibility is specified. This can hinder reuse, refactoring, decomposition of said unit into something finer grained. Using symbols for binding can also hinder composition and inductive reasoning: gluing subprograms together (even methods) based on hidden symbols is very ad-hoc, non-uniform. In some cases, it even hinders testing of subprograms. Further, symbol-based visibility is generally second-class, inflexible, difficult to extend or precisely graduate or attenuate. My alternative of preference is the object capability model (cf. [1][2]). Attenuation can be modeled by transparently wrapping one object within another. But I also see value in typeful approaches. Linear types make it easy to reason about fan-in, fan-out, exclusivity. Modal types can make it feasible to reason about when and where access occurs, locality. Quantified (Òˆ€,Γ’Λ†Ζ’) types with polymorphism can make it easy to reason about what kind of access a given subprogram is given. Symbol visibility is simplistic, easy to compute compared to some of these other mechanisms. So performance is one area where it has an edge. More sophisticated designs, such as object capability model, would rely on deep partial evaluation and specialization for equivalent performance. Symbol based visibility is problematic for several reasons. To the extent that I understand the problems you're complaining about, I think they're solvable without getting rid of symbols or even the use of symbol visibility as the mechanism for information hiding. In particular, it sounds like many of your complaints would be solved with a way to bind to sub-theories rather than pulling in the entire theory of a set of symbols. If binding is orthogonal to use of symbols, that does eschew one complaint (re: "gluing subprograms together (even methods) based on hidden symbols is very ad-hoc"). But I don't see how it helps with the others. Could you give some examples of the problems you mention? The problems I'm noting are mostly shallow in nature. Refactoring is hindered for obvious reasons: code that calls a method private to a specific code unit cannot (directly) be refactored into a separate code unit. Testing is hindered for obvious reasons: ensuring a 'private' method is visible to the testing framework. Flexibility is reduced for obvious reasons: our visibility policies aren't themselves programmable. The remaining point, regarding composition, is simply an issue of symbol-based binding in general (be it public or private) - especially if the binding is recursive. Anyhow, while the source problems are shallow, they do add up. And that aggregation becomes the real problem. Isolated examples don't much help with exposing this class of problem. Best I can recommend is that you read about some of the alternatives I mentioned (such as object capability model, linear types) and the motivations behind them. These problems are of public/private/protected in particular. Modules like in OCaml don't have these problems because you have an explicit construct to selectively hide components: signature ascription (which is similar to casting an object to an interface). For example if you have a module Foo that you give a public signature Bar with PublicFoo = Foo : Bar, then you can still test the hidden functions in Foo by simply testing on Foo instead of PublicFoo. The same problems apply to, for example, Haskell's approach to hiding information within modules (based on export control - a form of symbol visibility management). It isn't just OO class public/private/etc.. I think refining and/or adapting interfaces is useful for the flexibility and testing concerns. It does not seem sufficient to address the other two concerns. I tend to use a combination of the object capability model and multiple specifically tailored 'interfaces' per object. As Javascript doesn't have classes (only objects), an interface takes the form of a wrapper object. The entity that creates an object can then distribute these interfaces on a need to know/do basis. In Javascript this scheme is particularly useful, as it protects (to an extent) against cross-site scripting attacks. Being javascript, this approach is of course not statically typed. But it could be. Matthew Flatt has a great paper on explicit phase separation in racket submodules. I don't think that a variety of levels, such as private, protected, package, public, etc. justifies the complexity cost. I've worked on quite a few real-world systems where goofy things were done to workaround accessibility restrictions. Sometimes, to get the job done, you need to violate encapsulation or otherwise take dependencies are implementation details. I value making this explicit, but do not value making it difficult. For example, the use of a leading underscore in Python is preferable over explicit reflection in Java. There are also different schools of thought on what the default visibility setting should be. Should you be required to export the public interface? Or hide auxiliary types & functions? I prefer the former, so long as the private bits are still accessible. With the Common.js module system for example, you can't get at internals even if you're willing to consciously violate public contracts. Although Clojure offers a {:private true} metadata, it's pretty common to see all functions defined as public and the promised/versioned/reliable functionality to be manually copied from an implementation namespace to a separate publicly documented namespace. See clojure.core.async for an example of that. As a consumer of such a library, I like this pattern, but wish it was a little less verbose to accomplish myself. There are also some unaddressed issues with aliasing mutable values, such as thread-local vars. So after the code was mangled to get the product out the door, and then the 3rd party vendor changed the library and thus broke the mangling, what happened / what happens? I think the root cause of these problems is not public/protected/private, I think it is e.g. not using open source code :-) I say that half really seriously. the root cause of these problems is [...] not using open source code Just because you can look at or change the source, doesn't mean you can deploy the change. For example, you may know for a fact that you're deploying your code on to Linux machine running a particular version of the JVM. You don't expect to change operating systems and you're unable to change JVM versions without affecting any other services running on the machines you're deploying to. The Java base classes may not expose a public mechanism for facilities it can not reliably provide for Windows. However, you know that you can safely rely on the fact that a particular private method exists for an underlying native Linux API. Another example I've encountered: The source code I was programming against existed at runtime as a dynamically loaded library, stored in read-only memory, on widely deployed consumer electronics. the 3rd party vendor changed the library and thus broke the mangling, what happened / what happens? Depends. When you knowingly violate a library contract, you need to have a contingency plan. You can 1) not upgrade 2) do feature detection, employing a fallback 3) plan with the 3rd party on a transition plan. Or any of an infinite number of other things. Hell, I've had to work around accessor protections on C# code written by me! The world of deployed software is complex. These are great examples, thanks. So (1) if the whole software and hardware stack were done sanely such that we didn't have to do all this hackery, what would it look like? (2) if we assume we can't have a sane stack all the way through (ahem) then at our top level where we consume and interface with those other things, what would that best solution look like? so that (2a) it doesn't make the same mistakes and (2b) it somehow wrangles the mistakes of others? E.g. I mean what if we had a principled approach to wrangling the hacks? p.s. you're hired! I mean, it's certainly not *ideal*, but it is very much sane... from the operational perspective of any one rational actor. The net result is that the organism of engineering teams and the software community may seem chaotic, but it's built from individually sane decisions, at least mostly. You use the word "hackery", but I want to be clear: It's only hackery if you perceive it as such. Going back to my original post, I don't think it's hackery to call private methods denoted by leading underscores in a Python codebase. It is, however, hackery to have to use reflection to do similarly in Java. The distinction as I see it: In Python, I think "Oh, this is a private method. Is it safe to use it? Yes." then I go ahead and use it. In Java, I think the same thing, but have to take perfectly sensible code and mangle it in to the exact same logic, but lifted in to the domain of reflection. It's one thing to encourage (or even require/enforce) some sort of assertion of my intention to violate encapsulation. It's an entirely separate thing to force me to dramatically switch interfaces to my language runtime to accomplish a task that is mechanically identical between the two. And it's *totally unacceptable* to disallow such behavior completely. (1) if the whole software and hardware stack were done sanely such that we didn't have to do all this hackery, what would it look like? Pretty much exactly the same, just significantly less in total (non-hackery included)! what if we had a principled approach to wrangling the hacks? I may be pretty far from what you were asking now, but hopefully this gives you some insight in to my perspective. I'll take your questions to be: "How should we handle publishing, enforcing, and amending agreements between parties within software systems?" I think that this is a very human question. In practice, this is most often solved with social (rather than technological) means. Honestly, I don't have any good technical answers. I'd just rather we discuss the problem holistically, rather than assume that there is a universal logical system that will solve the problem magically. I've said before: How does a type system prove that your game is fun? It doesn't. Similarly, how does a function being marked as "public" guarantee that somebody upstream won't just delete the function and break the API? It doesn't! it is very much sane... from the operational perspective of any one rational actor [..] it's built from individually sane decisions This strikes me as analogous to: "every voice in my head is sane, therefore I'm sane" ;) You found part of programming that is more like making sausage than math. A lot of attention used to be given to this sort of question at places like Apple in the late 80's and early 90's, where not breaking third party developers was a priority, and interface design was influenced by expected future need to make changes that don't instantly break apps depending on established api contracts. It's hard to change what has been exposed, so good design often amounted to "don't expose it if you want to reserve option to change", and paid lip service to name and version schemes. But it's only easy to do linear versions, which isn't nearly granular enough for multiple complex interactions among devs who sinter together libraries with a graph of version inter-dependencies. How should we handle publishing, enforcing, and amending agreements between parties within software systems? Symbol names have something to do with publishing. Folks who love types dearly hope enforcement is done via types, even though this might require strong AI to represent complex interface contracts correctly in a verifiable way. Amending agreements is a lot like mutation in concurrent code: don't do it when possible to avoid, because the chaos is costly to resolve. You should not change contracts without also changing names and/or types too, in a way that unambiguously tells consumers in responsible notification. At a personal level, when updating old code, it's very dangerous to change the meaning of any old method, or a field in data structures. In the worst case, it might compile and build anyway, despite static type checking, and pass tests just well enough to let you ship before you find out what you did. Much safer is a scheme to add new methods and stop using old ones (if contracts permit this). Just treat code like immutable data in functional programming, and you'll usually be fine; let old deprecated code become unused and then garbage collected. But you may never know when devs consuming a library stop using old symbols. In my daily work, if I ever change the meaning of a field, I also change its name so a build breaks if I miss a single place that needs fixing. Every single place a symbol is used must be examined to see if new behavior matches the expected contract at each point. The old name becomes a "todo: fix me" marker that must be studied everywhere it appears. There isn't a nice answer. Making sausage is not for the squeamish. It's hard to change what has been exposed, so good design often amounted to "don't expose it if you want to reserve option to change" This is why I've drawn a distinction between visible (or accessible) and published. "Exposed" is too vague. Let's go with the Apple example. It is trivial to see or call all the private bits of an Objective-C API. However, you need to go *looking* for the private stuff. Either with tools (object browsers, decompilers, etc) or with source (not so lucky in this example). Apple enforces the ban on the use of private APIs with automated validation tools on their end. They can only do this now because they control the app store, the primary distribution channel. This wasn't always true, so they had to try to discourage utilizing private functionality but not publishing their existence (primarily in documentation, including auto-complete) and by introducing some barrier to accidental use (not published in default headers). Although distribution control enables enforcement, it isn't necessary for the approach to work. A warning could be raised during build, test, lint, or load time. That warning could be treated as an error at any of those times too, even if recognized earlier. For example, a network administrator may choose not to allow private API usage in deployed applications on company workstations for fear of future compatibility issues. However, you can bet your bottom dollar that administrator would want the ability to overrule such a restriction for a critical line of business application! Can always pay somebody to fix it; might even be worth it. (Read my use of "exposed" as meaning exposed in the public contract, not merely discoverable when you poke around behind the facade. Otherwise merely existing means being exposed and there's no difference between existing and exposed.) I stipulate your points; we seem to agree. Finding entry points, then making up your own interfaces so you can call them will very often work β€” up until someone patches private code with supposedly no third-party consumers. For example, nothing stops you from re-declaring C++ classes with private fields made public, but responsibility for the contract violation is clear when this happens. To the extent Apple polices use of private interfaces, they are doing a service to third-party developers who might otherwise simply get burned when rules are broken. A bad quality experience for users reflects well on no one, so Apple has incentive to stop devs from burning themselves. I find it slightly amusing Apple gets cast as the bad guy here. You can't let people insert themselves wherever they want. (For example, you can't stop a burglar from picking locks and taking up residence in a living room easy chair. But that doesn't mean they get to stay when you come home. Finding and picking the lock doesn't grandfather a new contract they write themselves without your consent. It would make a funny comic strip panel though.) I think it's good to have tools letting you express what you wish was (publicly) visible for various sorts of use, using both general rules and explicit case-by-case white-listing as seems convenient. Additionally, I think devs should think about each entry point and decide (then document) who is expected to call what and when. When you knowingly violate a library contract, you need to have a contingency plan. You can 1) not upgrade 2) do feature detection, employing a fallback 3) plan with the 3rd party on a transition plan. Or any of an infinite number of other things. Right. Except that in practice, that doesn't work. What happens in practice is this: you provide a component A, some other party provides a component B using your A, and stupidly, relying on implementation details it is not supposed to rely on. And then there are plenty of clients Ci who use A and B together. Now you release a new version of A -- and it breaks all Ci. Now, those clients couldn't care less that it was actually B who should be blamed for all their problems. You'll get all the fire, and you'll have to deal with it. And more often than not, you'll be forced to back out of some changes, or provide stupid workarounds to keep irresponsible legacy code working. Or not change the system to begin with. That is not a scalable model of software development. Modularity only exists where it can be properly enforced. and stupidly, relying on implementation details it is not supposed to rely on An engineer takes a look at a library, estimates the implementation and assumes a number of invariants, needs a performant algorithm thus designs it against those invariants, tests it, and it works. There is a good case that the provider of component A shares the blame since he could have known that the concreteness of software development forces his users to break the abstraction. (A good module where it may be assumed that you cannot get away with a pristine abstraction exposes its innards. In a good manner, I am not claiming spaghetti code is good.) And then, somewhere, a too high-brow attitude anyway. "Look, a compiler is a functor! Now everything is neat and explained." Software development must, and will, be messy since reality will always kick in. If A breaks B then we'll fix it again. Ideally, the prohibition against reliance on encapsulated details would be tied to publication. i.e. the system will not let component B be published because it relies on internal details of component A and the publisher of A has selected a policy of not allowing such dependence. But it would be hard to enforce this technically unless everything was going through a marketplace for publication. I suppose it could be enforced legally (the license specifies no dependence on internal details). Even if you choose firm language-level rejection of encapsulation violations, the effectiveness of such measures depend on the distribution method. Are you distributing a library as a header file and binary blob or running a web service? I know a man who told me he fired a programmer because "his code looked like a painting." True story. Generalizing from that sample size of one, I doubt a marketplace for publication will be accepted. Not only do I think it's hard to enforce module encapsulation, I think the rewards from cracking open an encapsulated module are sometimes worth the cost of brittleness and voided warranties. Ideally, the module system would make it easy to publish code that respects encapsulation boundaries, but would still make it possible to publish code that doesn't. Whoever installs a disrespectful module should have to manually resolve and maintain this situation quite a bit more than if they installed a respectful one, but only because that's essential complexity. If the module system hadn't made it possible, they'd still have spent that effort plus whatever effort they needed to work around the module system. Three days ago, I probably wouldn't have said this. This thread's been food for thought. I've also been thinking about legal annotations on code, some of which could talk about what's allowed at publication time, so our positions are very similar. Releasing a new version of A doesn't necessarily break anyone. Deploying a new version of A does that. If B is locked to A version 1 and you change the relied upon internals in A version 2, then the Ci clients will only all fail if you force the new A upon them. Different language ecosystems perform at different levels of poorly in this regard. I'd like to see more progress in the versioning, release management, deployment, and other software ecosystem aspects. However, there's also some fundamental differences between releasing software libraries and deploying services. The blame will fall on the last system to change anything. If Team A deploys v2 of Service A to a shared cluster, they will break all the Ci clients. However, if Team A' deploys v2 of some Code Artifact A to a build repository, then Team B will get the public blame, and rightfully so, if they upgrade to A v2 and then redeploy without any validation. My point is basically this: Depending on internals is going to happen from time to time. How you smoothly you can deal with the repercussions is what's most important to me. Stepping back a bit (which is unhelpful in a specific situation, but helpful when planning for, say, language design), the problem seems to be a shear between A's declared interface (what it's claimed to do) and practical interface (what it actually does, for the purposes of B). We judge B by testing it, which is practical, and therefore if there's a shear between the declared and practical interfaces, the actual form of B will favor the practical over the declared. Testing is always the preferred criterion (what's that line about 'I've only proven it correct, I haven't tested it'?), so it's... impractical... to demand that B depend only on A's declared interface unless it's possible to test B using the declared interface. Language design affects the shape of B's practical dependencies on A, the shape of A's declared interface, and how well or badly matched those are to each other. Hmm. I don't follow. Are you suggesting that testing needs more privileged access (to dependent components) than ordinary execution? If so, why? If not, how is testing relevant to the problem? And how is Knuth's quote related? The problem of today's SE practice certainly isn't too much trust in proofs and too little testing -- it's too much trust in tests and too little reasoning. And FWIW, tractable reasoning is only enabled by proper (i.e., reliable) abstraction. When an abstration leaks, no privilege is needed. The client ultimately ("end user") cares that the software does what they want it to, full stop. In a showdown, actual behavior trumps abstruse mathematical contracts. It follows, logically, that the winning move for contracts is to not be in conflict with behavior. I think there needs to be a mechanism for separating implementation from interface. I don't think hiding beyond that is necessary, although I don't agree with the OO approach of combining data-hiding with objects. I find the Ada way of having modules for data hiding and tagged types for object polymorphism as a much better system. Having said that the object capability model is the way to go for an operating system. For real runtime data-hiding you need to manipulate the segment registers or page tables anyway. I think the reasons for each are different and need to be kept separate, interface/implementation data hiding is about enabling structure in large projects and allowing teams of people to work together effectively and is a static source code thing. Capabilities are about security and need runtime enforcement to be secure, and are dynamic, as I should be able to remove a permission from a running program. I like to give the hiding of implementation details a more concrete rubric: code with public contract. Whatever is public is what is necessary for the consumer of the interface. Sometimes this does mean exposing a mechanism or implementation detail-- because it's necessary for proper use. That which is enforced-private should be all those elements of the implementation irrelevant to the consumer. Capability models then have a framework to sit in. In an operating system where there is a menagerie of consumers, a fine-grained and variable approach to interface consumption fits cleanly. I try to avoid modifying the internals of libraries. I don't want to be tied to maintaining compatibility with future versions, so I stick strictly to the API, never use private-APIs. I would rather re-implement the functionality in the application than use a private-API and I would adjust development times, and prioritise features accordingly. As I prefer to use open-source libraries (even when developing on closed platforms like iOS), in the rare cases where library changes are required I have worked with the library developers to get the changes I need accepted into the library as an upstream patch, meaning I don't have to be responsible for future maintenance of that code. I understand the temptation to use private APIs and break the abstractions, but in my hard won experience it is always a bad idea.
http://lambda-the-ultimate.org/node/4965
CC-MAIN-2018-39
en
refinedweb
A PDS_Converter is used to convert an attached PDS image label to or from a detached PDS/JP2 label. More... #include <PDS_Converter.hh> A PDS_Converter is used to convert an attached PDS image label to or from a detached PDS/JP2 label. A PDS_Converter inherits from PDS_Data which contains the label to be converted. This abstract class provides a base for product-specific label converter classes. Each implementation must provide the label_recognized, write_PDS_JP2_label and write_PDS_label methods. Constructs an empty PDS_Converter. Constructs a PDS_Converter from a named file. The PDS file may be a detached PDS/JP2 file or a PDS file with an attached label. Either the write_PDS_label or write_PDS_JP2_label methods can be used to convert either source, respectively. References PDS_Data::data_blocks(). Copies a PDS_Converter. Frees the PDS_Data::PDS_Data_Block_List and its contents. References PDS_Converter::clear_data_blocks(). Assigns another PDS_Converter to this PDS_Converter. References PDS_Converter::Data_Blocks, PDS_Converter::Excluded_Data_Block_Names, PDS_Data::image_data(), PDS_Converter::Image_Data_Block_Names, PDS_Converter::Label_Size, and Vectal< T >::push_back(). Write a detached PDS/JP2 label file. N.B.: This is a pure virtual method that must be provided by the implementing class. This method converts the PDS label held by this PDS_Converter, which is expected to be from a file with an attached label, to its detached PDS/JP2. Write a PDS label file for image data to be appended. N.B.: This is a pure virtual method that must be provided by the implementing class. This method converts the the PDS label held by this PDS_Converter, which is expected to be from a detached PDS/JP2 label, to its attached PDS. Get the name of the product types that this converter is intended to process. There is no particular meaning to the string that is returned. Each PDS_Converter implementation may choose whatever name(s) it sees fit. Nevertheless, it is recommended that each space separated word in the returned string be chosen to name a recognizable product such as the value of a label parameter that is known to identify the type of product. For example, the value of the INSTRUMENT_ID parameter might be used. Reimplemented in Generic_PDS_Converter, HiPrecision_PDS_Converter, and HiRISE_PDS_Converter. References PDS_Converter::DEFAULT_PRODUCT_TYPE. Test whether the label parameters are recognized by this converter. N.B.: A false return value does not necessarily mean that this converter will be unable to process the label; only that the label does not describe a product type recognized by the converter. Implemented in Generic_PDS_Converter, HiPrecision_PDS_Converter, and HiRISE_PDS_Converter. Set the preferred size of a label file. N.B.: The preferred label size is a hint; the actual size of the label may be larger, but will not be smaller. If the label will not fit in the preferred size the label size is increased the to minimum size required rounded up to the nearest 1K (1024) bytes. References PDS_Converter::Label_Size. Get the preferred size of a label file. References PDS_Converter::Label_Size. Get the list of data blocks in the PDS label. References PDS_Converter::Data_Blocks. Refresh the list of data blocks. Both the general data blocks list and the image data block pointer are refreshed from the current PDS label parameters. The previous data blocks are deleted. References PDS_Converter::clear_data_blocks(), and PDS_Converter::image_data(). Get the Image_Data_Block from the PDS_Data::PDS_Data_Block_List. If the Image_Data_Block has has not yet been found the PDS_Data::PDS_Data_Block_List is searched for it. If there is currently no list of data blocks, an attempt is made get a new data block list. If this fails to identify an image data block an attempt is made to find a Parameter Aggregate with the PDS_Data::IMAGE_DATA_BLOCK_NAME regardless of whether a corresponding record pointer parameter is present in the label. If this succeeds an IMAGE_Data_Block is constructed from the parameter group that is found. References PDS_Data::AGGREGATE_PARAMETER, PDS_Data::data_blocks(), PDS_Converter::Data_Blocks, PDS_Converter::Excluded_Data_Block_Names, PDS_Data::find_parameter(), PDS_Converter::IMAGE_Data_Block, PDS_Data::IMAGE_DATA_BLOCK_NAME, and PDS_Converter::Image_Data_Block_Names. Referenced by main(), and PDS_Converter::refresh_data_blocks(). Set the names of parameters to be excluded from the data block list. The list of known data blocks is updated. N.B.: The current image data block names are used to refresh the data block list, so if they are to be set they should be set first. Get the names of parameters to be excluded from the data block list. References PDS_Converter::Excluded_Data_Block_Names. Set the parameter names of image data blocks. N.B.: The PDS_Data::IMAGE_DATA_BLOCK_NAME is always implicitly included even if the image names array is NULL. The label's image data block is located. At the same time the list of known data blocks is also refreshed. Get the parameter names of image data blocks. References PDS_Converter::Image_Data_Block_Names. Assemble PDS/JP2 image file description parameters. Two groups of parameters are generated: A COMPRESSED_FILE group that refers to an associated JP2 file, and an UNCOMPRESSED_FILE group that describes the source image data file used to generate the JP2 file. The UNCOMPRESSED_FILE group also includes the complete IMAGE parameters group. The source image data filename is found in the name of the Parameter Aggregate containing this PDS label representation. The image data block is used to obtain the image data definitions. Example: OBJECT = COMPRESSED_FILE FILE_NAME = "PSP_002068_2635_RED.JP2" RECORD_TYPE = UNDEFINED ENCODING_TYPE = "JP2" ENCODING_TYPE_VERSION_NAME = "ISO/IEC15444-1:2004" INTERCHANGE_FORMAT = BINARY UNCOMPRESSED_FILE_NAME = "PSP_002068_2635_RED.IMG" REQUIRED_STORAGE_BYTES = 68624340 <BYTES> ^DESCRIPTION = "JP2INFO.TXT" END_OBJECT = COMPRESSED_FILE OBJECT = UNCOMPRESSED_FILE FILE_NAME = "PSP_002068_2635_RED.IMG" RECORD_TYPE = FIXED_LENGTH RECORD_BYTES = 10860 <BYTES> FILE_RECORDS = 6319 ^IMAGE = "PSP_002068_2635_RED.IMG" OBJECT = IMAGE ... END_OBJECT = IMAGE END_OBJECT = UNCOMPRESSED_FILE References file_name(), ID, and Vectal< T >::poke_back(). Write a label file. If a file exists at the specified pathname it will be replaced with the new file. If label padding is enabled (it is not by default) and the size of the label file after writing the label parameters is less than the preferred label size the file will be padded with space characters up to the preferred size. References ID, UA::HiRISE::label_lister(), Parameter::name(), Lister::reset_total(), and Lister::total(). Clear the data blocks list. The current data blocks are deleted and the pointers set to NULL. References PDS_Converter::Data_Blocks, and PDS_Converter::IMAGE_Data_Block. Referenced by PDS_Converter::refresh_data_blocks(), and PDS_Converter::~PDS_Converter(). Class identification name with source code version and date. Reimplemented from PDS_Data. Reimplemented in Generic_PDS_Converter, HiPrecision_PDS_Converter, and HiRISE_PDS_Converter. Default product name. Referenced by PDS_Converter::product_names(). Name of the PDS/JP2 label parameter group describing the source uncompressed file. Names of the PDS/JP2 label parameter group describing the destination compressed file. GeoTIFF included. GML included. No Version number change. Names (NULL-terminated array) of data blocks to be exluded from the list of data blocks (probably because they are redundant with other data blocks). Referenced by PDS_Converter::excluded_data_block_names(), PDS_Converter::image_data(), and PDS_Converter::operator=(). Pointer to the list of data blocks found in the PDS label. Referenced by PDS_Converter::clear_data_blocks(), PDS_Converter::data_blocks(), PDS_Converter::image_data(), and PDS_Converter::operator=(). Names (NULL-terminated array) of data blocks that are Image_Data_Blocks. Referenced by PDS_Converter::image_data(), PDS_Converter::image_data_block_names(), and PDS_Converter::operator=(). Pointer to the IMAGE data block in the Data_Blocks list. Referenced by PDS_Converter::clear_data_blocks(), and PDS_Converter::image_data(). The preferred size of the attached PDS label. Referenced by PDS_Converter::label_size(), and PDS_Converter::operator=().
http://pirlwww.lpl.arizona.edu/software/HiRISE/PDS_JP2/classUA_1_1HiRISE_1_1PDS__Converter.html
CC-MAIN-2018-39
en
refinedweb
XmlValidatingReader This class is an XML reader that supports DTD and Schema validation. The type of validation to perform is contained in the ValidationType property, which can be DTD, Schema, XDR, or Auto. Auto is the default and determines which type of validation is required, if any, based on the document. If the DOCTYPE element contains DTD information, that is used. If a schema attribute exists or there is an inline <schema>, that schema is used. This class implements an event handler that you can set to warn of validation errors during Read() operations. Specifically, a delegate instance of type System.Xml.Schema.ValidationEventHandler can be set for the ValidationEventHandler event in this class. This delegate instance is invoked whenever the XmlValidatingReader finds an schema-invalid construct in the XML document it is reading, giving the delegate a chance to perform whatever error-handling is appropriate. If no event handler is registered, a XmlException is thrown instead on the first error. public class XmlValidatingReader : XmlReader : IXmlLineInfo { // Public Constructors public method XmlValidatingReader( System.IO.Stream xmlFragment, XmlNodeType fragType, XmlParserContext context); public method XmlValidatingReader(string xmlFragment, XmlNodeType fragType, XmlParserContext context); public method XmlValidatingReader(XmlReader reader); // Public Instance Properties public override field int AttributeCount{get; } // overrides XmlReader public override ... No credit card required
https://www.safaribooksonline.com/library/view/c-in-a/0596001819/re686.html
CC-MAIN-2018-39
en
refinedweb
$ cnpm install item-selection Manage multi- <select> style selections in arrays. import itemSelection from 'item-selection' const sourceList = [ 'a', 'b', 'c', 'd' ] let selection = itemSelection(sourceList) // Methods behave similarly to selection operations in a <select multiple>, or // eg. your average file manager: selection = selection.select(0) // like clicking selection.get() // ['a'] selection = selection.selectRange(2) // like shift+clicking selection.get() // ['a', 'b', 'c'] selection = selection.selectToggle(1) // like ctrl+clicking selection.get() // ['a', 'c'] An itemSelection is immutable by default, i.e. it returns a new selection object. Use import itemSelection from 'item-selection/mutable' if you want to mutate the current selection object instead. Creates a new selection manager object. All mutation methods return a new selection manager object by default. If you want to mutate and reuse the same object, use import itemSelection from 'item-selection/mutable'. Creates a selection with just the item at the given index selected. Akin to clicking an item in a <select multiple> element. If you want to add an item to the selection, use selection.add(index) instead. Also sets the initial range index to index. Deselects the item at index. Selects or deselects the item at index. Akin to Ctrl+clicking. Also sets the initial range index to index if a new item was selected. Otherwise, unsets the initial range index. Selects the given range. Inclusive. (NB: That's different from what Array.slice does!) Selects a range based on the initial range index and the index. Akin to Shift+clicking. Previously selected items that fall outside the range will be deselected. If the initial range index was not set using select(index) or selectToggle(index), selectRange only selects the given index. Adds all items to the selection. Adds the item at index to the selection. Also sets the initial range index to index. Removes the item at index from the selection. Deselect all items. Get an array of the selected items. Get an array of the selected indices. Set a custom array of selected indices.
https://developer.aliyun.com/mirror/npm/package/item-selection
CC-MAIN-2021-04
en
refinedweb
Important: Please read the Qt Code of Conduct - Qt::Popup window in debugger seizes up whole windowing system I asked this question a long time ago, but never got an answer. I am trying again. I really would appreciate some answers/help here this time, please.... If I have a Qt::Popupwindow visible while debugging and code hits a breakpoint, assert or any error which will cause the debugger to take control, the popup remains on-screen and my whole desktop windowing system seizes up, with nothing I can do about it!! :( I am Linux/Ubuntu with GNOME desktop. (In a Windows VirtualBox, but that should not be relevant.) Everything is vanilla. Has happened with every Qt Creator/Qt/OS release since I started years ago. This may (well?) not happen on Windows, or even MacOS, I do not know. I cannot show a screenshot, because once it happens I cannot do anything, I cannot click, type a character, close either the app or stop the debugger, abort, kill, run another application, click to another window, nothing! I either have to hard-kill-reboot the machine, or use Ctrl+Alt+F3 to get to re-log in in a terminal window, pkillthe paused application or the gdb, then Ctrl+Alt+F1 to get back to re-log into the desktop, and then I can use my system. Though of course by now I can't continue in the debugger for the fault, but it's a bit better than rebooting the whole VM! This is a show-stopper. It makes debugging difficult to say the least, worse I spend my life fearing that some unrelated piece of code elsewhere which happens to be called from the slot will cause a debugger break, and then I will get locked up. I paste an absolute minimal repro below: #include <QApplication> #include <QDebug> #include <QPushButton> #include <QVBoxLayout> #include <QWidget> class AWidget : public QWidget { private: QPushButton *_btn, *_btn2; QWidget *_popup; public: explicit AWidget(QWidget *parent = nullptr) : QWidget(parent) { setGeometry(100, 100, 200, 200); setLayout(new QVBoxLayout); // create a `Qt::Popup` widget _popup = new QWidget(this); _popup->setWindowFlags(Qt::Popup | Qt::FramelessWindowHint); _popup->setGeometry(300, 300, 200, 200); _popup->setLayout(new QVBoxLayout); // put a button on popup widget, connect button's click to the test slot // so the slot will be called while the popup is up-front _btn2 = new QPushButton("Click to invoke slot"); _popup->layout()->addWidget(_btn2); connect(_btn2, &QPushButton::clicked, this, &AWidget::aSlot); // put a button on this widget, connect button's click to show the popup widget _btn = new QPushButton("Click to open popup"); layout()->addWidget(_btn); connect(_btn, &QPushButton::clicked, _popup, &QWidget::show); } private slots: void aSlot() { // the slot called while a popup is up-front // disaster strikes if you have a breakpoint/assert/error here, *or in anything this calls* // because the popup stays up-front "blocks" everything in the windowing system when in the debugger qDebug() << "aSlot"; // Q_ASSERT(false); } }; int main(int argc, char *argv[]) { QApplication a(argc, argv); AWidget w; w.show(); return a.exec(); } As it stands it will work OK. Now put a breakpoint in aSlot(), or uncomment the Q_ASSERT(false)there. Be prepared for lock-up.... It is true that if you introduce _popup->hide()as the very first line in the slot, then at least you do not get seized up. However, that stops me using the popup in the normal way, and I'd have to have it there permanently while debugging for every popup/slot just in case a code error elsewhere happened to get hit, which is obviously not practical. I am looking for: - Someone to test this under Linux/GNOME/whatever to confirm they have the problem, it's not just me. - (I don't mind if someone tests behaviour under Windows, but even if it works there that will not help me.) - I want either to be told how to "fix" the lock-up behaviour if that can be done, or... - ...to be told how I can have any kind of reasonable debugging experience, what exactly do you do if you need Popup windows in your application and need to debug? I should really appreciate some answers from the experts, kindly please. The code to paste is really simple. I am currently in a situation where the app has to have a Popup and I have to be able to debug, and I'm in trouble... :( Hi, I do not have a direct answer for that but maybe some ideas to help you find what is going wrong. Since you can reproduce that in a virtual machine, can you screenshot the virtual machine screen from the host ? That might allow to see something there. Also, one thing you can do is to ssh into your virtual machine so you can check live what goes on when hitting that break point. Unless the whole machine freezes, you should be able to run tools like top, dmesg (with the help of tail) and their friends. You should even be able to restart the desktop environment from there. One other thing, which Ubuntu variants are you currently running ? There are several of them so a precise version number might help. Qt Creator and gdb versions will also be a good thing to have. @SGaist I will shortly get back to you on all this. But I know what is happening. It is quite simply: there is a modal pop-up window visible on the desktop, when the debugger breakpoint is hit. The program cannot continue because it has broken into the debugger. And no other desktop window, neither the debugger nor anything else unrelated, can be clicked to/typed into because of the modal popup being there. That is precisely what the issue is. Everything is "working" as it should. the app is paused, the debugger has been broken into at the right place, the desktop is still running, and so on. The problem is solely that because of the modal, up-front popup you cannot interact with anything on the desktop! I used to have a similar-ish problem years ago developing under Windows (not Qt), whenever a combobox was dropped down and a break was hit. The combo's "choices" is a kind of modal, popup window, and same thing used to happen: locked up-front on the desktop, cannot get rid of it, cannot access anything else at all on desktop, reboot the PC. At some point either Windows behaviour or VS debugger changed to handle it, thankfully --- or maybe it never did! I tried a QComboboxfor same problem here, to my surprise that does work OK when break (dropdown gets dismissed or whatever), but not for Qt::Popup.... Are you Linux? I think actually you are macOS? Could you just tell me what happens to you if you try this short code with a breakpoint/assert in your environment, do you seize up or not? So you may found something, using a remote session and having only started Qt Creator 4.13.3 it does indeed block all interactions once the "Signal Received" dialog appears. I have not found anything suspicious in the logs. However, one thing I could observe is that there seems to be two dialogs appearing in quick succession but I could not see the first one, only the "Signal Received". @JonB said in Qt::Popup window in debugger seizes up whole windowing system: Are you Linux? I think actually you are macOS? Could you just tell me what happens to you if you try this short code with a breakpoint/assert in your environment, do you seize up or not? I have worked with/on Qt on many platforms and even embedded ones when cross-compilation was not yet the thing it is nowadays :-) @SGaist Here is a screenshot of the VM taken from the Windows host: You can even see where it has hit the breakpoint in the debugger, and the output of aSlotmessage in Application Output from previous line. If you were in front of it, you can even see the line cursor flashing on the source line it is on. Nothing on the machine has actually "seized up/crashed", it's all just as it should be. The problem is that Popup window with Click to invoked slot on it. That is a modal, up-front window, and it is because of this that you cannot interact with anything, anywhere on the desktop. When I Ctrl+Alt+F1 to get a new login window from the VM (effectively the same as your ssh), and do my pses, it's as it should be, for the app and for gdband for Creator. If I pkill -9 theApp, or kill the gdb, or kill the Creator, then when I'm done and go back to the desktop via Ctrl+Alt+F3 the desktop has regained normal control, because the necessary process was killed. If I put _popup->hide()on the line above the breakpoint, that popup window goes away when the break is hit and all is well. But that's not a solution for debugging, a breakpoint could be anywhere in code.... I am Ubuntu 20.04, GNOME desktop, gcc 9.3.0, gdb 9.2, Creator 4.11.0, Qt 5.12. However, over the years I have had different versions of all of these and (so far as I do recall) this has always been a problem. I have given up debugging when I have any visible Popupwindow, which is much of the time in current application, and I am now fed up not being able to debug...! :( Either the desktop windowing system has (somehow) to be told to behave differently, or Qt Creator has to know about this and take some action when hitting a breakpoint etc. to "free" the modality and allow the user to continue? - J.Hilk Moderators last edited by @JonB thats super odd, the popup shouldn't grab the input from the whole windowing system, but only from your application, which should be a different process to QtCreator! Can you upload that basic example ? Would love to test it myself @J-Hilk said in Qt::Popup window in debugger seizes up whole windowing system: thats super odd, the popup shouldn't grab the input from the whole windowing system, but only from your application, which should be a different process to QtCreator! Yeah, well, like I say, similar used to happen under Windows for a combobox's dropdown getting "frozen" on-screen when a break hit, cannot interact with desktop because of that, cannot close it because at a break in the debugger. => Reboot Windows! Did it for years :( But now I'm Linux I want better! Can you upload that basic example ? ? I pasted the 30-odd lines of code in my first post above, that's all you have to try? Am I not understanding? - J.Hilk Moderators last edited by @JonB said in Qt::Popup window in debugger seizes up whole windowing system: I pasted the 30-odd lines of code in my first post above, that's all you have to try? Am I not understanding? sorry didn't see it πŸ˜” @SGaist , @J-Hilk , @whoever We can eliminate the issue from being in a slot on the Popup window. At the end of my AWidgetconstructor, get rid of the connect()and just show the popup window, followed by a breakpoint/assert/other error: // put a button on this widget, connect button's click to show the popup widget _btn = new QPushButton("Click to open popup"); layout()->addWidget(_btn); // connect(_btn, &QPushButton::clicked, _popup, &QWidget::show); _popup->show(); Q_ASSERT(false); } Because the Popup is shown, this "freezes" on the Q_ASSERT()line. You see the popup window on the desktop (though without content visible), and that's enough to exhibit the problem. EDIT Because of this I can reduce the problem to just the following 5 lines: int main(int argc, char *argv[]) { QApplication a(argc, argv); QWidget popup(nullptr, Qt::Popup); // the `Qt::Popup` flag is what causes the problem popup.show(); Q_ASSERT(false); return a.exec(); } You (I) get to see that a Popup has come up, and the desktop is dead when it hits the Q_ASSERTor you put a breakpoint on that line... This is still on-going...? I was really hoping someone would try the 5-liner on Ubuntu, or perhaps any Linux with GNOME, or without? I am seeking to know whether this experience is common? I have discovered the same lockup if I run the code from gdbin a terminal instead of from Creator. I have discovered that if I change Qt::Popupto Qt::WindowStaysOnTopHintthe seizure does not happen. I get the popup behaviour of the window being up-front. If I hit a debugger break, the window does still stay there, on top of Creator. Fair enough. But I can continue interacting with the desktop or the Creator debugger, no problem. The problem will be related to do whatever Popupcauses to happen after it has just been shown, where any mouse-click --- including elsewhere on the desktop, unrelated to the running app --- or any key press is "eaten" by the Popupwindow, dismissing itself. Because that never arrives in the debug-break case, the popup window remains up-front and no mouse click/key goes anywhere else. Can someone explain what/how Qt::Popupdoes its next-click-to-dismiss work, especially under X11, or wherever this behaviour occurs? That's likely something to look in the xcb backend. These flags behave differently depending on the underlying OS as they are mapped to the corresponding platform flags.
https://forum.qt.io/topic/121610/qt-popup-window-in-debugger-seizes-up-whole-windowing-system
CC-MAIN-2021-04
en
refinedweb
CommonMarkCommonMark A CommonMark-compliant parser for Julia. InterfaceInterface using CommonMark Create a markdown parser with the default CommonMark settings and then add footnote syntax to our parser. parser = Parser() enable!(parser, FootnoteRule()) Parse some text to an abstract syntax tree. ast = parser("Hello *world*") Write ast to a string. body = html(ast) content = "<head></head><body>$body</body>" Write to a file. open("file.tex", "w") do file latex(file, ast) println(file, "rest of document...") end Or write to a buffer, such as stdout. term(stdout, ast) Output FormatsOutput Formats Supported output formats are currently: html latex term: colourised and Unicode-formatted for display in a terminal. markdown notebook: Jupyter notebooks. ExtensionsExtensions Extensions can be enabled using the enable! function and disabled using disable!. TypographyTypography Convert ASCII dashes, ellipses, and quotes to their Unicode equivalents. enable!(parser, TypographyRule()) Keyword arguments available for TypographyRule are double_quotes single_quotes ellipses dashes which all default to true. AdmonitionsAdmonitions enable!(parser, AdmonitionRule()) Front matterFront matter Fenced blocks at the start of a file containing structured data. +++ [heading] content = "..." +++ The rest of the file... The block must start on the first line of the file. Supported blocks are: ;;;for JSON +++for TOML ---for YAML To enable provide the FrontMatterRule with your choice of parsers for the formats: using JSON enable!(parser, FrontMatterRule(json=JSON.Parser.parse)) FootnotesFootnotes enable!(parser, FootnoteRule()) MathMath Julia-style inline and display maths: Some ``\LaTeX`` math: ```math f(a) = \frac{1}{2\pi}\int_{0}^{2\pi} (\alpha+R\cos(\theta))d\theta ``` Enabled with: enable!(parser, MathRule()) TablesTables Pipe-style tables, similar to GitHub's using |. Strict alignment required for pipes. | Column One | Column Two | Column Three | |:---------- | ---------- |:------------:| | Row `1` | Column `2` | | | *Row* 2 | **Row** 2 | Column ``3`` | Enabled with: enable!(parser, TableRule()) Raw ContentRaw Content Overload literal syntax to support passing through any type of raw content. enable!(parser, RawContentRule()) By default RawContentRule will handle inline and block content in HTML and LaTeX formats. This is raw HTML: `<img src="">`{=html}. And here's an HTML block: ```{=html} <div id="main"> <div class="article"> ``` ```{=latex} \begin{tikzpicture} \draw[gray, thick] (-1,2) -- (2,-4); \draw[gray, thick] (-1,-1) -- (2,2); \filldraw[black] (0,0) circle (2pt) node[anchor=west] {Intersection point}; \end{tikzpicture} ``` This can be used to pass through different complex content that can't be easily handled by CommonMark natively without any loss of expressiveness. Custom raw content handlers can also be passed through when enabling the rule. The naming scheme is <format>_inline or <format>_block. enable!(p, RawContentRule(rst_inline=RstInline)) The last example would require the definition of a custom RstInline struct and associated display methods for all supported output types, namely: html, latex, and term. When passing your own keywords to RawContentRule the defaults are not included and must be enabled individually. AttributesAttributes Block and inline nodes can be tagged with arbitrary metadata in the form of key/value pairs using the AttributeRule extension. enable!(p, AttributeRule()) Block attributes appear directly above the node that they target: {#my_id color="red"} # Heading This will attach the metadata id="my_id" and color="red" to # Heading. Inline attributes appear directly after the node that they target: *Some styled text*{background="green"}. Which will attach metadata background="green" to the emphasised text Some styled text. CSS-style shorthand syntax #<name> and .<name> are available to use in place of id="<name>" and class="name". Multiple classes may be specified sequentially. AttributeRule does not handle writing metadata to particular formats such as HTML or LaTeX. It is up to the implementation of a particular writer format to make use of available metadata itself. The built-in html and latex outputs make use of included attributes. html will include all provided attributes in the output, while latex makes use of only the #<id> attribute. CitationsCitations Use the following to enable in-text citations and reference list generation: enable!(p, CitationRule()) Syntax for citations is similar to what is offered by Pandoc. Citations start with @. Citations can either appear in square brackets [@id], or they can be written as part of the text like @id. Bracketed citations can contain more than one citation; separated by semi-colons [@one; @two; and @three]. {#refs} # References A reference section that will be populated with a list of all references can be marked using a {#refs} attribute from AttributeRule at the toplevel of the document. The list will be inserted after the node, in this case # References. Citations and reference lists are formatted following the Chicago Manual of Style. Styling will, in future versions, be customisable using Citation Style Language styles. The reference data used for citations must be provided in a format matching CSL JSON. Pass this data to CommonMark.jl when writing an AST to a output format. html(ast, Dict{String,Any}("references" => JSON.parsefile("references.json"))) CSL JSON can be exported easily from reference management software such as Zotero or generated via pandoc-citeproc --bib2json or similar. The references data can be provided by the front matter section of a document so long as the FrontMatterRule has been enabled, though this does require writing your CSL data manually. Note that the text format of the reference list is not important, and does not have to be JSON data. So long as the shape of the data matches CSL JSON it is valid. Below we use YAML references embedded in the document's front matter: --- references: - id: abelson1996 author: - family: Abelson given: Harold - family: Sussman given: Gerald Jay edition: 2nd Editon event-place: Cambridge ISBN: 0-262-01153-0 issued: date-parts: - - 1996 publisher: MIT Press/McGraw-Hill publisher-place: Cambridge title: Structure and interpretation of computer programs type: book --- Here's a citation [@abelson1996]. {#refs} # References CommonMark DefaultsCommonMark Defaults Block rules enabled by default in Parser objects: AtxHeadingRule() BlockQuoteRule() FencedCodeBlockRule() HtmlBlockRule() IndentedCodeBlockRule() ListItemRule() SetextHeadingRule() ThematicBreakRule() Inline rules enabled by default in Parser objects: AsteriskEmphasisRule() AutolinkRule() HtmlEntityRule() HtmlInlineRule() ImageRule() InlineCodeRule() LinkRule() UnderscoreEmphasisRule() These can all be disabled using disable!. Note that disabling some parser rules may result in unexpected results. It is recommended to be conservative in what is disabled. Note Until version 1.0.0 the rules listed above are subject to change and should be considered unstable regardless of whether they are exported or not. Writer ConfigurationWriter Configuration When writing to an output format configuration data can be provided by: - passing a Dict{String,Any}to the writer method, - front matter in the source document using the FrontMatterRuleextension. Front matter takes precedence over the passed Dict. Notable VariablesNotable Variables Values used to determine template behaviour: template-engine::FunctionUsed to render standalone document templates. No default is provided by this package. The template-enginefunction should follow the interface provided by Mustache.render. It is recommended to use Mustache.jl to provide this functionalilty. Syntax for opening and closing tags used by CommonMark.jlis ${...}. See the templates in src/writers/templatesfor usage examples. <format>.template.file::StringCustom template file to use for standalone <format>. <format>.template.string::StringCustom template string to use for standalone <format>. Generic variables that can be included in templates to customise documents: abstract::StringSummary of the document. authors::Vector{String}Vector of author names. date::StringDate of file generation. keywords::Vector{String}Vector of keywords to be included in the document metadata. lang::StringLanguage of the document. title::StringTitle of the document. subtitle::StringSubtitle of the document. Format-specific variables that should be used only in a particular format's template. They are namespaced to avoid collision with other variables. html html.css::Vector{String}Vector of CSS files to include in document. html.js::Vector{String}Vector of JavaScript files to include in document. latex latex.documentclass::StringClass file to use for document. Default is article. The following are automatically available in document templates. body::StringMain content of the page. curdir::StringCurrent directory. outputfile::StringName of file that is being written to. When writing to an in-memory buffer this variable is not defined.
https://juliapackages.com/p/commonmark
CC-MAIN-2021-04
en
refinedweb
C++ Factorial of a given Number Program Hello Everyone! In this tutorial, we will learn how to find the Factorial of a given number using the C++ programming language. Code: #include <iostream> using namespace std; int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Program to find the Factorial of a given number ===== \n\n"; //variable declaration int i,n; //as we are dealing with the product, it should be initialized with 1. int factorial=1; //taking input from the command line (user) cout << "Enter the number to find the factorial for: "; cin >> n; //finding the factorial by multiplying all the numbers from 1 to n for (i = 1; i <= n; i++) { factorial *= i; // same as factorial = factorial * i } cout << "\n\nThe Factorial of " << n << " is: " << factorial; cout << "\n\n\n"; return 0; } Output: Now let's see what we have done in the above program. Program Explained: Let's break down the parts of the code for better understanding. What's a Factorial in Mathematics? In mathematics, the factorial of a positive integer n, denoted by n!, is the product of all positive integers less than or equal to n: Note: Factorial is only defined for non-negative numbers. (>=0) The value of 0 factorial is 1. (0! = 1) //as we are dealing with the product, it should be initialized with 1. int factorial=1; As Factorial is only defined for a non-negative integers, it always results into a positive integer value. Also, initializing it to 1 as the multiplication operation is involved in the logic given below. 1. Logic for finding the factorial using C++: // finding the factorial by multiplying all the numbers from 1 to n for (i = 1; i <= n; i++) { factorial *= i; // same as factorial = factorial * i } As per the above definition, we need to take the product of all the numbers starting from 1 to the number itself. Loop is the best way to achieve this. factorial *= i; This is same as factorial = factorial * i, but an easier way to code. This works for all the mathematical operations such as +, -, /, %. Wil recommend you to try this out on yourself to develop better understanding. Keep Learning : )
https://studytonight.com/cpp-programs/cpp-factorial-of-a-given-number-program
CC-MAIN-2021-04
en
refinedweb
A set of SVG icons for CRUD applications (hand-picked among thousands at Material Design Icons) packaged as a React component with light & dark themes and tooltip. React-CRUD-Icons comes in Light and Dark theme. ... and 6 sizes: Tiny, Small, Medium, Large, Big, and Huge. The package can be installed via npm: npm install react-crud-icons --save You will need to install React and PropTypes separately since those dependencies aren't included in the package. Below is a simple example of how to use the component in a React view. You will also need to include the CSS file from this package (or provide your own). The example below shows how to include the CSS from this package if your build system supports requiring CSS files (Webpack is one that does). import React from "react"; import Icon from "react-crud-icons"; import "../node_modules/react-crud-icons/dist/react-crud-icons.css"; class Example extends React.Component { render() { return ( <Icon name = "edit" tooltip = "Edit" theme = "light" size = "medium" onClick = { doSomething } /> ); } } The component renders an inline SVG. To package the code, I followed the steps from the blog post Building a React component as an NPM module by Manoj Singh Negi. This article, along with any associated source code and files, is licensed under The MIT License
https://www.codeproject.com/Articles/5286750/React-Icon-Set-for-CRUD-Applications?PageFlow=FixedWidth
CC-MAIN-2021-04
en
refinedweb
A tutorial to build your first app on the decentralized web Blockstack is a network for decentralized applications. This platform leverages a serverless architecture, which helps remove critical points of vulnerability. By eliminating these weak points, which have frequently fallen victim to hacks, Blockstack makes user data more secure. Prerequisites: Knowledge of React.js will be required for this tutorial. Blockchain tech can be complicated, but getting started doesn’t have to be. Blockstack’s 3rd party sign in/sign up/authentication makes it easy to get started developing apps and publishing it to a decentralized App store like App.co I was able to publish this on App.co in less than a week After integrating Blockstack, you have the option to either send the unique username that a user creates from Blockstack to your own API and build the user object that way, or use Gaia Storage, Blockstack’s decentralized backend provider. You can also opt to have a combination of both, where private user information like phone numbers and addresses are encrypted and stored in Gaia Storage but public information like comments or posts are stored in a public API. This blog post is meant to simplify and abstract as much as possible. If you would like deep dive video tutorials, check out Tech Rally on YouTube (this is where I learned about Blockstack). For now, we’ll cover getting Blockstack Sign In/Sign Out authentication set up. Let’s get started! 1) Install Blockstack Browser 2) Create your Blockstack ID (be sure to save your Secret Recovery Key somewhere safe) In your terminal: npm init react-app blockstack-tutorial cd blockstack-tutorial npm install --save [email protected] npm install react-app-rewired --save-dev mkdir src/utils touch src/utils/constants.js open src/utils/constants.js If npm install gives you a problem, try yarn add: yarn add [email protected] yarn add react-app-rewired --save-dev 4) constants.js: import { AppConfig } from 'blockstack' export const appConfig = new AppConfig(['store_write', 'publish_data']) 5) In your terminal: touch config-overrides.js open config-overrides.js 6) config-overrides.js: 7) In your terminal: open package.json 8) package.json: 9) In your terminal: open src/App.js 10) App.js: 11) In your terminal: open src/index.js 12) index.js: 13) In your terminal: open src/App.css 14) App.css: 15) In your terminal: npm start or yarn start That’s it! Simple, but powerful β€”you are now connected to Blockstack. In part two of this series, I’ll show you How to Connect Blockstack to your Backend API I’ll be coding live Saturdays 11am-3pm EST, ask questions! Discussion
https://practicaldev-herokuapp-com.global.ssl.fastly.net/robghchen/how-to-build-your-first-blockchain-app-on-blockstack-2n01
CC-MAIN-2021-04
en
refinedweb
C++ Constructor Overloading Program Hello Everyone! In this tutorial, we will learn how to demonstrate the concept of Constructor Overloading, in the C++ programming language. To understand the concept of Constructor Overloading in CPP, we will recommend you to visit here:, where we have explained it from scratch. Code: #include <iostream> #include <vector> using namespace std; //defining the class shape to overload the method area() on the basis of number of parameters. class Area { //declaring member variables private: int length, breadth; public: // Constructor without argument Area() : length(5), breadth(2) { } // Defining a Constructor with two arguments: length and breadth Area(int l, int b) : length(l), breadth(b) { } void GetLength() { cout << "\nEnter the Length and Breadth of the Rectangle : \n"; cin >> length >> breadth; } int AreaCalculation() { return (length * breadth); } void DisplayArea(int a) { cout << "Area of the Rectangle is = " << a << endl; } }; //Defining the main method to access the members of the class int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Program to demonstrate Constructor Overloading in a Class, in CPP ===== \n\n"; Area a1; //Default constructor is called Area a2(5, 2); //Parameterised constructor is called int area1, area2; a1.GetLength(); cout << "\n\nCalculating Area using a Default Constructor:" << endl; area1 = a1.AreaCalculation(); a1.DisplayArea(area1); cout << "\n\nCalculating Area using a Parameterised Constructor:" << endl; area2 = a2.AreaCalculation(); a2.DisplayArea(area2); cout << "\n\nExiting the main() method\n\n\n"; return 0; } Output: We hope that this post helped you develop better understanding of the concept of Constructor Overloading in C++. For any query, feel free to reach out to us via the comments section down below. Keep Learning : )
https://studytonight.com/cpp-programs/cpp-constructor-overloading-program
CC-MAIN-2021-04
en
refinedweb
Using JBoss Drools to Implement a E-commerce Promotion Rule Engine – Part I January 11, 2014 In this and the following articles, I will demonstrate how to implement a basic promotion engine using JBoss Drools framework. The objective here is not to provide a complete implementation but to show how Drools and business rule engine in general can be used to solve common ecommerce problems. Also to demonstrate some of the features provided by Drools Rule Engine. Introduction From a software system prospective, a business rule engine allows business to define their business rules declaratively. The system can then, via its inference engine, to match the rules defined with the β€œfacts” it observes at run time. Some of the major advantages of using a rule engine are: - Declarative Programming - Separation of Data and Logic - Centralization of Knowledge - Explanation of outcomes or actions A more comprehensive list of advantages and more detailed explanation can be found here. E-commerce Promotion Rules I define the following business rules for a ecommerce or brick-and-mortar store to decide when and what promotional discount(s) should be applied for an order. They are fictitious rules that should be commonly understandable. - Large order discount - 5% discount for order total over $1000 and less than $2000 - 10% discount for order total over $2000 - Clearance products – 10% off from a list of defined products - Time based sales – 10% off from a list of defined products within a certain date range, e.g. Christmas sales between 1/12 to 31/12. - Special Tuesday – everything 5% off on Tuesday Domain Objects (Facts) The first step to do is to define the business domain objects which will act as the facts to be processed by the rule engine. Our domain consists of the following classes: - Order – represents a single sales order - OrderLine – represents an item in an order - ClearanceProductList – represents a list of discounted products - TimeBasedSales – represented a list of products to apply discount to when date of order falls within the defined date range. A discount can be applied to each item (i.e. OrderLine) of an order. On top of that, an order discount can then be applied on the order total. I include the code snippets below: Order.java public class Order { private double orderDiscountAmount; // order discount amount private List<OrderLine> lines = new ArrayList<OrderLine>(); // calculate discounted total amount of all the items in this order public double getLineTotal() { double lineTotal = 0; for (OrderLine line : lines) { lineTotal += line.getLineAmount(); } return lineTotal; } // getter and setter methods below omitted here OrderLine.java public class OrderLine { private String sku; private int qty; // quantity bought private double unitPrice; // price for each unit private double lineDiscountAmount; // discount applied to this item public double getLineAmount() { return unitPrice * qty – lineDiscountAmount; } // getter and setter methods below omitted here ClearanceProductList public class ClearanceProductList { private List<String> skus = new ArrayList<String>(); // getter and setter methods below omitted here TimeBasedSales public class TimeBasedSales { private Date fromDate; private Date toDate; private List<String> products = new ArrayList<String>(); // getter and setter methods below omitted here JUnit Test class Include below is the unit test class snippets that I will use to run various facts (i.e. orders) against the rule engine. It also demonstrates how to set up Drools. public class BlogRuleTest { private Logger logger = Logger.getLogger(getClass()); private StatefulKnowledgeSession ksession; @Before public void setUp() { KnowledgeBuilder knowledgeBuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(); knowledgeBuilder.add ( ResourceFactory.newClassPathResource(β€œcom/drools/blog/blog.drl”, getClass()), ResourceType.DRL); // verify rule file has no errors if (knowledgeBuilder.hasErrors()) { Iterator<KnowledgeBuilderError> iterator = knowledgeBuilder.getErrors().iterator(); while (iterator.hasNext()) { logger.error(iterator.next().getMessage()); } fail(β€œRule file has error”); } KnowledgeSessionConfiguration config = KnowledgeBaseFactory.newKnowledgeSessionConfiguration(); config.setOption( ClockTypeOption.get(β€œpseudo”) ); KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase(); kbase.addKnowledgePackages(knowledgeBuilder.getKnowledgePackages()); ksession = kbase.newStatefulKnowledgeSession(config, null); // Default to a non-Tuesday advanceToDayOfWeek(1); getTuesdayCalendar(); } @After public void after() { ksession.dispose(); } Note: - The rule file blog.drl is expected to be in folder com/drools/blog/blog.drl under the classpath, e.g. under /src/main/resources. - I create stateful knowledge session above. This is not really required for the task at hand. See Drools documentation for an explanation of the difference between stateless and stateful knowledge sessions. - SessionPseudoClock is used here to allow us to test time based rule. More on this later when we define the β€œSpecial Tuesday” rule - The session is disposed after each unit test is run in the after() method. On to Drools Rules OK. We are ready to implement the Drools rules. Let do this in the next post.
https://raymondhlee.wordpress.com/2014/01/11/using-jboss-drools-to-implement-a-e-commerce-promotion-rule-engine-part-i/
CC-MAIN-2018-26
en
refinedweb
Hi! I am currently stuck! 17: Review Functions. I keep getting the following error message: Oops, try again. Your function failed on the message yes. It returned 'yes' when it should have returned 'Shutting down' Can anyone tell me what is wrong with my code???? def shut_down(s): return(s) if s == 'yes': return 'Shutting down' elif s == 'no': return 'Shutdown aborted' else: return 'Sorry' Any help would be appreciated! Thank you!
https://discuss.codecademy.com/t/stuck-in-17-review-functions/49465
CC-MAIN-2018-26
en
refinedweb
. C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\Summary.txt file I've seen clashes there before & may be able to help fix them. Monitor your entire network from a single platform. Free 30 Day Trial Now! TITLE: SQL Server Setup failure. -------------------------- SQL Server Setup has encountered the following error: Unknown property. . -------------------------- IBUTTONS: OK -------------------------- I tred to install the tools from MS SQL 2008 and I tried to down load express 2008 client tools from microsoft.. I seen the same thread regarding msxml 6.0 and uninstalled all versions on the machine and still didn't go... all our machines are XP SP3 I am going to test it on a SP2 image. Connot connect to WMI provider. you do not have permission or the server is unreachable. Note that you can only manage SQL server 2005 and later servers with SQL server configuration manager. Invalid namespace [0x80041003] We are unable to install SQL 2008 client tools on a XP SP3 machine.
https://www.experts-exchange.com/questions/26813106/SQL-2008-Client-Tools.html
CC-MAIN-2018-26
en
refinedweb
Caching Over MyBatis: The Widely Used Ehcache Implementation with MyBatis Caching Over MyBatis: The Widely Used Ehcache Implementation with MyBatis Join the DZone community and get the full member experience.Join For Free Ready for feedback: How would you use Capital One’s Virtual Card Numbers API? This article represents the first Proof of Concept from series described in the previous article 4 Hands-On Approaches to Improve Your Data Access Layer Implementation and it presents how to implement Ehcache over MyBatis, how to achieve an optim configuration for it and personal opinions of the author about the chosen approach for the Data Access Layer. Throughout my research on caching over MyBatis I have discovered that Ehcache is the first option among developers when they need to implement a cache mechanism over MyBatis, using a 3rd party library. Ehcache is probably so popular because it represents an open source, java-based cache, available under an Apache 2 license. Also, it scales from in-process with one or more nodes through to a mixed in-process/out-of-process configuration with terabyte-sized caches. In addition, for those applications needing a coherent distributed cache, Ehcache uses the open source Terracotta Server Array. Last but not least, among its adopters is the Wikimedia Foundation that uses Ehcache to improve the performance of its wiki projects. Within this article, the following aspects will be addressed: 1. How will an application benefit from caching using Ehcache? Ehcache's features will be detailed in this section. 2. Hands-on implementation of the EhCachePOC project - in this section the key concepts of EhCache will be explored through a hands on implementation. 3. Summary - How has the application performance been improved after this implementation? Code of all the projects that will be implemented can be found at or if you are interested only in the current implementation, you can access it here: How will an application benefit from caching using Ehcache? The time taken for an application to process a request principally depends on the speed of the CPU and main memory. In order to "speed up" your application you can perform one or more of the following: - improve the algorithm performance - achieve parallelisation of the computations across multiple CPUs or multiple machines - upgrade the CPU speed As explained in the previous article, high availability applications should perform a small amount of actions with the database. Since the time taken to complete a computation depends principally on the rate at which data can be obtained, then the application should be able to temporarily store computations that may be reused again. Caching may be able to reduce the workload required, this means a caching mechanism should be created! Ehcache is described as : - Fast and Light Weight , having a simple API and requiring only a dependency on SLF4J. - Scalable to hundreds of nodes with the Terracotta Server Array, but also because provides Memory and Disk store for scalability into gigabytes - Flexible because supports Object or Serializable caching; also provides LRU, LFU and FIFO cache eviction policies - Standards Based having a full implementation of JSR107 JCACHE API - Application Persistence Provider because it offers persistent disk store which stores data between VM restarts - JMX Enabled - Distributed Caching Enabler because it offers clustered caching via Terracotta and replicated caching via RMI, JGroups, or JMS - Cache Server (RESTful, SOAP cache Server) - Search Compatible, having a standalone and distributed search using a fluent query language Hands-on implementation of the EhCachePOC project The implementation of EhCachePoc will look as described in the diagram below: In order to test Ehcache performance through a POC(proof of concept) project the following project setup is performed: 1. Create a new Maven EJB Project from your IDE (this kind of project is platform provided by NetBeans but for those that use eclipse, here is an usefull tutorial) . In the article this project is named EhCachePOC. 2. Edit the project's pom by adding required jars : <dependency> <groupId>org.mybatis</groupId> <artifactId>mybatis</artifactId> <version>3.2.6</version> </dependency> <dependency> <groupId>org.mybatis.caches</groupId> <artifactId>mybatis-ehcache</artifactId> <version>1.0.2</version> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.17</version> </dependency> <dependency> <groupId>net.sf.ehcache</groupId> <artifactId>ehcache</artifactId> <version>2.7.0</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.7.5</version> </dependency> 3.Add your database connection driver, in this case apache derby: <dependency> <groupId>org.apache.derby</groupId> <artifactId>derbyclient</artifactId> <version>10.11.1.1</version> </dependency> 4. Run mvn clean and mvn install commands on your project. Now the project setup is in place, let's go ahead with MyBatis implementation : 1. Configure under resources/com/tutorial/ehcachepoc/xml folder the Configuration.xml file with : <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN" ""> <configuration> <environments default="development"> <environment id="development"> <transactionManager type="JDBC"/> <dataSource type="UNPOOLED"> <property name="driver" value="org.apache.derby.jdbc.ClientDriver"/> <property name="url" value="dburl"/> <property name="username" value="cruddy"/> <property name="password" value="cruddy"/> </dataSource> </environment> </environments> <mappers> <!--<mapper resource="com/tutorial/ehcachepoc/xml/EmployeeMapper.xml" />--> </mappers> </configuration> 2. Create in java your own SQLSessionFactory implementation. For example, create something similar to com.tutorial.ehcachepoc.config. SQLSessionFactory : public class SQLSessionFactory { private static final SqlSessionFactory FACTORY; static { try { Reader reader = Resources.getResourceAsReader("com/tutorial/ehcachepoc/xml/Configuration.xml"); FACTORY = new SqlSessionFactoryBuilder().build(reader); } catch (Exception e){ throw new RuntimeException("Fatal Error. Cause: " + e, e); } } public static SqlSessionFactory getSqlSessionFactory() { return FACTORY; } } 3. Create the necessary bean classes, those that will map to your sql results, like Employee: public class Employee implements Serializable { private static final long serialVersionUID = 1L; private Integer id; private String firstName; private String lastName; private String adress; private Date hiringDate; private String sex; private String phone; private int positionId; private int deptId; public Employee() { } public Employee(Integer id) { this.id = id; } @Override public String toString() { return "com.tutorial.ehcachepoc.bean.Employee[ id=" + id + " ]"; } } 4. Create the IEmployeeDAO interface that will expose the ejb implementation when injected: public interface IEmployeeDAO { public List<Employee> getEmployees(); } 5. Implement the above inteface and expose the implementation as a Stateless EJB (this kind of EJB preserves only its state, but there is no need to preserve its associated client state): @Stateless(name = "ehcacheDAO") @TransactionManagement(TransactionManagementType.CONTAINER) public class EmployeeDAO implements IEmployeeDAO { private static Logger logger = Logger.getLogger(EmployeeDAO.class); private SqlSessionFactory sqlSessionFactory; @PostConstruct public void init() { sqlSessionFactory = SQLSessionFactory.getSqlSessionFactory(); } @Override public List<Employee> getEmployees() { logger.info("Getting employees....."); SqlSession sqlSession = sqlSessionFactory.openSession(); List<Employee> results = sqlSession.selectList("retrieveEmployees"); sqlSession.close(); return results; } } 5. Create the EmployeeMapper.xml that contains the query named "retrieveEmployees" <!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "" > <mapper namespace="com.tutorial.ehcachepoc.mapper.EmployeeMapper" > " > select id, first_name, last_name, hiring_date, sex, dept_id from employee </select> </mapper> If you remember the CacherPOC setup from the previously article, then you can test your implementation if you add EhCachePOC project as dependency and inject the IEmployeeDAO inside the EhCacheServlet. Your CacherPOC pom.xml file should contain : <dependency> <groupId>${project.groupId}</groupId> <artifactId>EhCachePoc</artifactId> <version>${project.version}</version> </dependency> and your servlet should look like: @WebServlet("/EhCacheServlet") public class EhCacheServlet extends HttpServlet { private static Logger logger = Logger.getLogger(EhCacheServlet.class); @EJB(beanName ="ehcacheDAO") IEmployeeDAO employeeDAO; private static final String LIST_USER = "/listEmployee.jsp"; @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { String forward= LIST_USER; List<Employee> results = new ArrayList<Employee>(); for (int i = 0; i < 10; i++) { for (Employee emp : employeeDAO.getEmployees()) { logger.debug(emp); results.add(emp); } try { Thread.sleep(3000); } catch (Exception e) { logger.error(e, e); } } req.setAttribute("employees", results); RequestDispatcher view = req.getRequestDispatcher(forward); view.forward(req, resp); } } Run your CacherPoc implementation to check if your Data Access Layer with MyBatis is working or download the code provided at But if a great amount of employees is stored in database, or perhaps the retrieval of a number of 10xemployeesNo represents a lot of workload for the database. Also, can be noticed that the query from the EmployeeMapper.xml retrieves data that almost never changes (id, first_name, last_name, hiring_date, sex cannot change; the only value that might change in time is dept_id); so a caching mechanism can be used. Below is described how this can be achieved using EhCache: 1. Configure directly under the resources folder the ehcache.xml file with: <?xml version="1.0" encoding="UTF-8"?> <!-- caching configuration --> <ehcache> <defaultCache eternal="true" maxElementsInMemory="1000" timeToIdleSeconds="3600" timeToLiveSeconds="3600" maxEntriesLocalHeap="1000" maxEntriesLocalDisk="10000000" memoryStoreEvictionPolicy="LRU" statistics="true" /> </ehcache> This xml explains that the Memory Store is used for an LRU (Last Recently Used) caching strategy, sets the limits for the number of elements allowed for storage, their time to be idle and their time to live. The Memory Store strategy is often chosen because is fast and thread safe for use by multiple concurrent threads, being backed by LinkedHashMap. Also, all elements involved in the caching process are suitable for placement in the Memory Store. Another approach can be tried: storing cache on disk. This can be done by replacing the ehcache tag content with: diskStore <defaultCache eternal="true" maxElementsInMemory="1000" overflowToDisk="true" diskPersistent="true" timeToIdleSeconds="0" timeToLiveSeconds="0" memoryStoreEvictionPolicy="LRU" statistics="true" /> Unlike the memory store strategy, the disk store implementation is suitable only for elements which are serializable can be placed in the off-heap; if any non serializable elements are encountered, those will be removed and WARNING level log message emitted. The eviction is made using the LFU algorithm and it is not configurable or changeable. From persistency point of view, this method of caching allows control of the cache by the disk persistent configuration; if false or omitted, disk store will not persist between CacheManager restarts. 2. Update EmployeeMapper.xml to use the previous implemented caching strategy: <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "" > <mapper namespace="com.tutorial.ehcachepoc.mapper.EmployeeMapper" > <cache type="org.mybatis.caches.ehcache.EhcacheCache"/> " useCache="true"> select id, first_name, last_name, hiring_date, sex, dept_id from employee </select> </mapper> By adding the line <cache type="org.mybatis.caches.ehcache.EhcacheCache"/> and specifying on the query useCache="true" you are binding the ehcache.xml configuration to your DataAccessLayer implementation. Clean, build and redeploy both EhCachePOC and CacherPoc projects; now retrieve your employees for two times in order to allow the in-memory cache to store your values. When you run your query for the first time, your application will execute the query on the database and retrieve the results. Second time you access the employee list, your application will access the in-memory storage. Summary - How has the application performance been improved after this implementation? An application's performances depend on a multitude of factors - how many times a cached piece of data can and is reduced by the application - the proportion of the response time that is alleviated by caching where P is proportion speed up and S is speed up. Let's take the application from this article as example and calculate the speed up. When the application ran the query without caching,a JDBC transaction is performed and in your log will be something similar to : INFO: 2014-11-27 18:01:30,020 [EmployeeDAO] INFO com.tutorial.hazelcastpoc.dao.EmployeeDAO:38 - Getting employees..... INFO: 2014-11-27 18:01:39,148 [JdbcTransaction] DEBUG org.apache.ibatis.transaction.jdbc.JdbcTransaction:98 - Setting autocommit to false on JDBC Connection [org.apache.derby.client.net.NetConnection40@1c374fd] INFO: 2014-11-27 18:01:39,159 [retrieveEmployees] DEBUG com.tutorial.hazelcastpoc.mapper.EmployeeMapper.retrieveEmployees:139 - ==> Preparing: select id, first_name, last_name, hiring_date, sex, dept_id from employee INFO: 2014-11-27 18:01:39,220 [retrieveEmployees] DEBUG com.tutorial.hazelcastpoc.mapper.EmployeeMapper.retrieveEmployees:139 - ==> Parameters: INFO: 2014-11-27 18:01:39,316 [retrieveEmployees] DEBUG com.tutorial.hazelcastpoc.mapper.EmployeeMapper.retrieveEmployees:139 - <== Total: 13 while running the queries with Ehcache caching the JDBC transaction is performed only once (to initialize the cache) and after that the log will look like : INFO: 2014-11-28 18:04:50,020 [EmployeeDAO] INFO com.tutorial.ehcachepoc.dao.EmployeeDAO:38 - Getting employees..... INFO: 2014-11-28 18:04:50,020 [EhCacheServlet] DEBUG com.tutorial.cacherpoc.EhCacheServlet:41 - com.tutorial.crudwithjsp.model.Employee[ id=1 ] Let's look at the time that each of our 10 times requests has scored: - the first not cached version of 10 times requests took about 57 seconds and 51 milliseconds, - while the cached requests scored a time of 27seconds and 86 miliseconds. In order to apply Amdhal's law for the system the following input is needed: - Un-cached page time: 60 seconds - Database time : 58 seconds - Cache retrieval time: 28seconds - Proportion: 96.6% (58/60) (P) The expected system speedup is thus: 1 / (( 1 – 0.966) + 0.966 / (58/28)) = 1 / (0.034 + 0. 966/2.07) = 2 times system speedup This result can be improved of course, but the purpose of this article was to prove that caching using Ehcache over MyBatis offers a significant improvement to what used to be available before its implementation. Learn more from: Learn how Capital One’s Virtual Card Numbers can benefit digital merchants and consumers. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/caching-over-mybatis-widelly
CC-MAIN-2018-26
en
refinedweb
Defensive copying Immutable objects java.util.Dateobjects, but in an immutable form. For example, you could use: LocalDateTime, or LocalDate(JDK 8+) String Longrepresenting the number of milliseconds since some initial epoch DateTimeclass in the date4j library also addresses the issue, but when JDK 8+ is available, it likely shouldn't be used.) Reasons for preferring an immutable representation: java.util.Dateclass requires more care java.util.Dateclass itself has many deprecations, replaced by methods in DateFormat, Calendar, and GregorianCalendar Here, a LocalDate is used to encapsulate a date: import java.time.LocalDate; import java.util.Objects; public class Film { public static void main(String... args) { Film film = new Film("The Lobster", LocalDate.parse("2015-10-16")); log(film.getReleasedOn()); } public Film(String name, LocalDate releasedOn){ this.name = name; this.releasedOn = releasedOn; } public String getName() { return name; } public LocalDate getReleasedOn() { return releasedOn; } //...elided // PRIVATE private String name; /** Immutable. Has no time-zone information. The time-zone is simply treated as implicit, according to the context. */ private LocalDate releasedOn; private static void log(Object thing){ System.out.println(Objects.toString(thing)); } }
http://javapractices.com/topic/TopicAction.do;jsessionid=4D0FE8213FBC2C17B7BA3A279E6C354F?Id=81
CC-MAIN-2018-26
en
refinedweb
Endian-swap a given number of bytes #include <unistd.h> void swab( const void * src, void * dest, ssize_t nbytes ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The swab() function copies nbytes bytes, pointed to by src, to the object pointed to by dest, exchanging adjacent bytes. The nbytes argument should be even: If copying takes place between objects that overlap, the behavior is undefined.
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/s/swab.html
CC-MAIN-2018-26
en
refinedweb
New exercises added to: Ruby, Java, PHP, Python, C Programming, Matplotlib, Python NumPy, Python Pandas, PL/SQL, Swift Inheritance (IS-A) vs. Composition (HAS-A) Relationship Description One of the advantages of an Object-Oriented programming language is code reuse. There are two ways we can do code reuse either by the vimplementation of inheritance (IS-A relationship), or object composition (HAS-A relationship). Although the compiler and Java virtual machine (JVM) will do a lot of work for you when you use inheritance, you can also get at the functionality of inheritance when you use composition.. For example, House is a Building. But Building is not a House. It is a key point to note that you can easily identify the IS-A relationship. Wherever you see an extends keyword or implements keyword in a class declaration, then this class is said to have IS-A relationship. HAS-A Relationship: Composition(HAS-A) simply mean the use of instance variables that are references to other objects. For example Maruti has Engine, or House has Bathroom. Let’s understand these concepts with an example of Car class. package relationships; class Car { // Methods implementation and class/Instance members private String color; private int maxSpeed; public void carInfo(){ System.out.println("Car Color= "+color + " Max Speed= " + maxSpeed); } public void setColor(String color) { this.color = color; } public void setMaxSpeed(int maxSpeed) { this.maxSpeed = maxSpeed; } } As shown above, Car class has a couple of instance variable and few methods. Maruti is a specific type of Car which extends Car class means Maruti IS-A Car. class Maruti extends Car{ //Maruti extends Car and thus inherits all methods from Car (except final and static) //Maruti can also define all its specific functionality public void MarutiStartDemo(){ Engine MarutiEngine = new Engine(); MarutiEngine.start(); } } Maruti class uses Engine object’s start() method via composition. We can say that Maruti class HAS-A Engine. package relationships; public class Engine { public void start(){ System.out.println("Engine Started:"); } public void stop(){ System.out.println("Engine Stopped:"); } } RelationsDemo class is making object of Maruti class and initialized it. Though Maruti class does not have setColor(), setMaxSpeed() and carInfo() methods still we can use it due to IS-A relationship of Maruti class with Car class. package relationships; public class RelationsDemo { public static void main(String[] args) { Maruti myMaruti = new Maruti(); myMaruti.setColor("RED"); myMaruti.setMaxSpeed(180); myMaruti.carInfo(); myMaruti.MarutiStartDemo(); } } If we run RelationsDemo class we can see output like below. Comparing Composition and Inheritance - It is easier to change the class implementing composition than inheritance. The change of a superclass impacts the inheritance hierarchy to subclasses. - You can't add to a subclass a method with the same signature but a different return type as a method inherited from a superclass. Composition, on the other hand, allows you to change the interface of a front-end class without affecting back-end classes. - Composition is dynamic binding (run-time binding) while Inheritance is static binding (compile time binding) -. - With both composition and inheritance, changing the implementation (not the interface) of any class is easy. The ripple effect of implementation changes remains inside the same class. - Don't use inheritance just to get code reuse If all you really want is to reuse code and there is no is-a relationship in sight, use composition. - Don't use inheritance just to get at polymorphism If all you really want is a polymorphism, but there is no natural is-a relationship, use composition with interfaces. Summary - IS-A relationship based on Inheritance, which can be of two types Class Inheritance or Interface Inheritance. - Has-a relationship is composition relationship which is a productive way of code reuse. Amazon promo codes to get huge discounts for limited period (USA only).
https://www.w3resource.com/java-tutorial/inheritance-composition-relationship.php
CC-MAIN-2018-26
en
refinedweb
import "github.com/jrwren/juju/api/uniter" action.go charm.go endpoint.go environ.go relation.go relationunit.go service.go settings.go unit.go uniter.go NewState creates a new client-side Uniter facade. Defined like this to allow patching during tests. CharmsURL takes an API server address and an optional environment tag and constructs a base URL used for fetching charm archives. If the environment tag empty or invalid, it will be ignored. Action represents a single instance of an Action call, by name and params. NewAction makes a new Action with specified name and params map. Name retrieves the name of the Action. Params retrieves the params map of the Action. Charm represents the state of a charm in the environment. ArchiveSha256 returns the SHA256 digest of the charm archive (bundle) bytes. NOTE: This differs from state.Charm.BundleSha256() by returning an error as well, because it needs to make an API call. It's also renamed to avoid confusion with juju deployment bundles. TODO(dimitern): 2013-09-06 bug 1221834 Cache the result after getting it once for the same charm URL, because it's immutable. ArchiveURL returns the url to the charm archive (bundle) in the environment storage. String returns the charm URL as a string. URL returns the URL that identifies the charm. Endpoint represents one endpoint of a relation. It is just a wrapper around charm.Relation. No API calls to the server-side are needed to support the interface needed by the uniter worker. Environment represents the state of an environment. func (e Environment) Name() string Name returns the human friendly name of the environment. func (e Environment) UUID() string UUID returns the universally unique identifier of the environment. Relation represents a relation between one or two service endpoints. Endpoint returns the endpoint of the relation for the service the uniter's managed unit belongs to. Id returns the integer internal relation key. This is exposed because the unit agent needs to expose a value derived from this (as JUJU_RELATION_ID) to allow relation hooks to differentiate between relations with different services. Life returns the relation's current life state. Refresh refreshes the contents of the relation from the underlying state. It returns an error that satisfies errors.IsNotFound if the relation has been removed. String returns the relation as a string. func (r *Relation) Tag() names.RelationTag Tag returns the relation tag. func (r *Relation) Unit(u *Unit) (*RelationUnit, error) Unit returns a RelationUnit for the supplied unit. RelationUnit holds information about a single unit in a relation, and allows clients to conveniently access unit-specific functionality. func (ru *RelationUnit) Endpoint() Endpoint Endpoint returns the relation endpoint that defines the unit's participation in the relation. func (ru *RelationUnit) EnterScope() error EnterScope ensures that the unit has entered its scope in the relation. When the unit has already entered its relation scope, EnterScope will report success but make no changes to state. Otherwise, assuming both the relation and the unit are alive, it will enter scope. If the unit is a principal and the relation has container scope, EnterScope will also create the required subordinate unit, if it does not already exist; this is because there's no point having a principal in scope if there is no corresponding subordinate to join it. Once a unit has entered a scope, it stays in scope without further intervention; the relation will not be able to become Dead until all units have departed its scopes. NOTE: Unlike state.RelatioUnit.EnterScope(), this method does not take settings, because uniter only uses this to supply the unit's private address, but this is not done at the server-side by the API. func (ru *RelationUnit) LeaveScope() error LeaveScope signals that the unit has left its scope in the relation. After the unit has left its relation scope, it is no longer a member of the relation; if the relation is dying when its last member unit leaves, it is removed immediately. It is not an error to leave a scope that the unit is not, or never was, a member of. func (ru *RelationUnit) PrivateAddress() (string, error) PrivateAddress returns the private address of the unit and whether it is valid. NOTE: This differs from state.RelationUnit.PrivateAddress() by returning an error instead of a bool, because it needs to make an API call. func (ru *RelationUnit) ReadSettings(uname string) (params.RelationSettings, error) ReadSettings returns a map holding the settings of the unit with the supplied name within this relation. An error will be returned if the relation no longer exists, or if the unit's service is not part of the relation, or the settings are invalid; but mere non-existence of the unit is not grounds for an error, because the unit settings are guaranteed to persist for the lifetime of the relation, regardless of the lifetime of the unit. func (ru *RelationUnit) Relation() *Relation Relation returns the relation associated with the unit. func (ru *RelationUnit) Settings() (*Settings, error) Settings returns a Settings which allows access to the unit's settings within the relation. func (ru *RelationUnit) Watch() (watcher.RelationUnitsWatcher, error) Watch returns a watcher that notifies of changes to counterpart units in the relation. Service represents the state of a service. CharmURL returns the service's charm URL, and whether units should upgrade to the charm with that URL even if they are in an error state (force flag). NOTE: This differs from state.Service.CharmURL() by returning an error instead as well, because it needs to make an API call. Life returns the service's current life state. Name returns the service name. OwnerTag returns the service's owner user tag. Refresh refreshes the contents of the Service from the underlying state. String returns the service as a string. func (s *Service) Tag() names.ServiceTag Tag returns the service's tag. func (s *Service) Watch() (watcher.NotifyWatcher, error) Watch returns a watcher for observing changes to a service. func (s *Service) WatchRelations() (watcher.StringsWatcher, error) WatchRelations returns a StringsWatcher that notifies of changes to the lifecycles of relations involving s. Settings manages changes to unit settings in a relation. Delete removes key. func (s *Settings) Map() params.RelationSettings Map returns all keys and values of the node. TODO(dimitern): This differes from state.Settings.Map() - it does not return map[string]interface{}, but since all values are expected to be strings anyway, we need to fix the uniter code accordingly when migrating to the API. Set sets key to value. TODO(dimitern): value must be a string. Change the code that uses this accordingly. Write writes changes made to s back onto its node. Keys set to empty values will be deleted, others will be updated to the new value. TODO(dimitern): 2013-09-06 bug 1221798 Once the machine addressability changes lands, we may need to revise the logic here to take into account that the "private-address" setting for a unit can be changed outside of the uniter's control. So we may need to send diffs of what has changed to make sure we update the address (and other settings) correctly, without overwritting. type State struct { *common.EnvironWatcher *common.APIAddresser // contains filtered or unexported fields } State provides access to the Uniter API facade. Action returns the Action with the given tag. func (st *State) ActionFinish(tag names.ActionTag, status string, results map[string]interface{}, message string) error ActionFinish captures the structured output of an action. func (st *State) AllMachinePorts(machineTag names.MachineTag) (map[network.PortRange]params.RelationUnit, error) AllMachinePorts returns all port ranges currently open on the given machine, mapped to the tags of the unit that opened them and the relation that applies. BestAPIVersion returns the API version that we were able to determine is supported by both the client and the API Server. Charm returns the charm with the given URL. func (st *State) Environment() (*Environment, error) Environment returns the environment entity. ProviderType returns a provider type used by the current juju environment. TODO(dimitern): We might be able to drop this, once we have machine addresses implemented fully. See also LP bug 1221798. Relation returns the existing relation with the given tag. RelationById returns the existing relation with the given id. Service returns a service state by tag. Unit provides access to methods of a state.Unit through the facade. Unit represents a juju unit as seen by a uniter worker. AddMetrics adds the metrics for the unit. func (u *Unit) AssignedMachine() (names.MachineTag, error) AssignedMachine returns the unit's assigned machine tag or an error satisfying params.IsCodeNotAssigned when the unit has no assigned machine.. CharmURL returns the charm URL this unit is currently using. NOTE: This differs from state.Unit.CharmURL() by returning an error instead of a bool, because it needs to make an API call. ClearResolved removes any resolved setting on the unit. ClosePort sets the policy of the port with protocol and number to be closed. TODO(dimitern): This is deprecated and is kept for backwards-compatibility. Use ClosePorts instead. ClosePorts sets the policy of the port range with protocol to be closed. ConfigSettings returns the complete set of service charm config settings available to the unit. Unset values will be replaced with the default value for the associated option, and may thus be nil when no default is specified. Destroy, when called on a Alive unit, advances its lifecycle as far as possible; it otherwise has no effect. In most situations, the unit's life is just set to Dying; but if a principal unit that is not assigned to a provisioned machine is Destroyed, it will be removed from state directly. DestroyAllSubordinates destroys all subordinates of the unit. EnsureDead sets the unit lifecycle to Dead if it is Alive or Dying. It does nothing otherwise. HasSubordinates returns the tags of any subordinate units. IsPrincipal returns whether the unit is deployed in its own container, and can therefore have subordinate services deployed alongside it. NOTE: This differs from state.Unit.IsPrincipal() by returning an error as well, because it needs to make an API call. func (u *Unit) JoinedRelations() ([]names.RelationTag, error) JoinedRelations returns the tags of the relations the unit has joined. Life returns the unit's lifecycle value. MeterStatus returns the meter status of the unit. Name returns the name of the unit. OpenPort sets the policy of the port with protocol and number to be opened. TODO(dimitern): This is deprecated and is kept for backwards-compatibility. Use OpenPorts instead. OpenPorts sets the policy of the port range with protocol to be opened. PrivateAddress returns the private address of the unit and whether it is valid. NOTE: This differs from state.Unit.PrivateAddress() by returning an error instead of a bool, because it needs to make an API call. TODO(dimitern): We might be able to drop this, once we have machine addresses implemented fully. See also LP bug 1221798. PublicAddress returns the public address of the unit and whether it is valid. NOTE: This differs from state.Unit.PublicAddres() by returning an error instead of a bool, because it needs to make an API call. TODO(dimitern): We might be able to drop this, once we have machine addresses implemented fully. See also LP bug 1221798. Refresh updates the cached local copy of the unit's data. RequestReboot sets the reboot flag for its machine agent func (u *Unit) Resolved() (params.ResolvedMode, error) Resolved returns the resolved mode for the unit. NOTE: This differs from state.Unit.Resolved() by returning an error as well, because it needs to make an API call Service returns the service. ServiceName returns the service name. func (u *Unit) ServiceTag() names.ServiceTag ServiceTag returns the service tag. SetCharmURL marks the unit as currently using the supplied charm URL. An error will be returned if the unit is dead, or the charm URL not known. SetStatus sets the status of the unit. String returns the unit as a string. Tag returns the unit's tag. func (u *Unit) Watch() (watcher.NotifyWatcher, error) Watch returns a watcher for observing changes to the unit. func (u *Unit) WatchActions() (watcher.StringsWatcher, error) WatchActions returns a StringsWatcher for observing the ids of Actions added to the Unit. The initial event will contain the ids of any Actions pending at the time the Watcher is made. func (u *Unit) WatchAddresses() (watcher.NotifyWatcher, error) WatchAddresses returns a watcher for observing changes to the unit's addresses. The unit must be assigned to a machine before this method is called, and the returned watcher will be valid only while the unit's assigned machine is not changed. func (u *Unit) WatchConfigSettings() (watcher.NotifyWatcher, error) WatchConfigSettings returns a watcher for observing changes to the unit's service configuration settings. The unit must have a charm URL set before this method is called, and the returned watcher will be valid only while the unit's charm URL is not changed. func (u *Unit) WatchMeterStatus() (watcher.NotifyWatcher, error) WatchMeterStatus returns a watcher for observing changes to the unit's meter status. Package uniter imports 11 packages (graph). Updated 2016-07-26. Refresh now. Tools for package owners. This is an inactive package (no imports and no commits in at least two years).
https://godoc.org/github.com/jrwren/juju/api/uniter
CC-MAIN-2018-26
en
refinedweb
In order to fully use LinqToSql in an ASP.net 3.5 application, it is necessary to create DataContext classes (which is usually done using the designer in VS 2008). From the UI perspective, the DataContext is a design of the sections of your database that you would like to expose to through LinqToSql and is integral in setting up the ORM features of LinqToSql. My question is: I am setting up a project that uses a large database where all tables are interconnected in some way through Foreign Keys. My first inclination is to make one huge DataContext class that models the entire database. That way I could in theory (though I don't know if this would be needed in practice) use the Foreign Key connections that are generated through LinqToSql to easily go between related objects in my code, insert related objects, etc. However, after giving it some thought, I am now thinking that it may make more sense to create multiple DataContext classes, each one relating to a specific namespace or logical interrelated section within my database. My main concern is that instantiating and disposing one huge DataContext class all the time for individual operations that relate to specific areas of the Database would be impose an unnecessary imposition on application resources. Additionally, it is easier to create and manage smaller DataContext files than one big one. The thing that I would lose is that there would be some distant sections of the database that would not be navigable through LinqToSql (even though a chain of relationships connects them in the actual database). Additionally, there would be some table classes that would exist in more than one DataContext. Any thoughts or experience on whether multiple DataContexts (corresponding to DB namespaces) are appropriate in place of (or in addition to) one very large DataContext class (corresponding to the whole DB)?
http://ansaurus.com/question/1949-are-multiple-datacontext-classes-ever-appropriate
CC-MAIN-2020-16
en
refinedweb
Ok, I know we’ve been going on about custom cells / cell factories a bit recently, but I wanted to do one more post about a very useful topic: caching within cell content. These days β€˜Hello World’ has been replaced by building a Twitter client, so I’ve decided to frame this topic in terms of building a Twitter client. Because I don’t actually care about the whole web service side of thing, I’ve neglected to implement the whole β€˜real data’ / web services aspect of it. If you want to see an actual running implementation with real data, have a look at William AntΓ΄nio’s Twitter client, which is using this ListCell implementation. So, in all the posts to this site related to cells, I’m sure you’ve probably come to appreciate the ways in which you should create a ListView or TreeView with custom cell factories. Therefore, what I really want to cover in this post is just the custom cell implementation, and the importance of caching. A Twitter client wouldn’t be a true client without showing the users profile image, so this is my target for caching. Without caching, each time the cell was updated (i.e. the content changes due to scrolling, or when we scroll a user out of screen and then back in), we’d have to redownload and load the image. This would lead to considerable lag and a poor user experience. What we need to do is load the image once, cache it, and reuse it whenever the image URL is requested by a cell. At the same time, we don’t want to run the PC dry of memory by loading all profile images into memory. Enter: SoftReference caching. Word of warning: I’m not a caching expert. It is possible that I’ve done something stupid, and I hope you’ll let me know, but I believe that the code below should at least be decent. I’ll happily update this example if anyone gives me useful feedback. Check out the code below, and I’ll continue to discuss it afterwards. [jfx] import model.Tweet; import java.lang.ref.SoftReference; import java.util.HashMap; import javafx.geometry.HPos; import javafx.geometry.VPos; import javafx.util.Math; import javafx.scene.control.Label; import javafx.scene.control.ListCell; import javafx.scene.image.Image; import javafx.scene.image.ImageView; import javafx.scene.layout.Container; import javafx.scene.text.Font; import javafx.scene.text.FontWeight; // controls whether the cache is used or not. This _really_ shouldn’t be false! def useCache = true; // map of String -> SoftReference (of Image) def map = new HashMap(); def IMAGE_SIZE = 48; public class TwitterListCell extends ListCell { // used to represent the users image var imageView:ImageView; // a slightly bigger and bolder label for the persons name var personName:Label = Label { font: Font.font("Arial", FontWeight.BOLD, 13); } // the message label var message:Label = Label { textWrap: true } override var node = Container { content: bind [ imageView, personName, message ] override function getPrefHeight(width:Number):Number { def w = listView.width; Math.max(IMAGE_SIZE, personName.getPrefHeight(w) + message.getPrefHeight(w)); } override function doLayout():Void { var x:Number = -1.5; var y:Number = 0; var listWidth = listView.width; var cellHeight = height; // position image Container.positionNode(imageView, x, y, IMAGE_SIZE, cellHeight, HPos.CENTER, VPos.TOP, false); // position text at the same indent position regardless of whether // an image exists or not x += IMAGE_SIZE + 5; var textWidth = listWidth – x; var personNameHeight = personName.getPrefHeight(textWidth); Container.resizeNode(personName, textWidth, personNameHeight); Container.positionNode(personName, x, y, listWidth – x, personNameHeight, HPos.LEFT, VPos.TOP, false); y += personNameHeight; Container.resizeNode(message, textWidth, message.getPrefHeight(textWidth)); Container.positionNode(message, x, y, listWidth – x, height – personNameHeight, HPos.LEFT, VPos.TOP, false); } } override var onUpdate = function():Void { var tweet = item as Tweet; personName.text = tweet.person.name; message.text = tweet.message; // image handling if (map.containsKey(tweet.person.image)) { // the image has possibly been cached, so lets try to get it var softRef = map.get(tweet.person.image) as SoftReference; // get the image out of the SoftReference wrapper var image = softRef.get() as Image; // check if it is null – which would be the case if the image had // been removed by the garbage collector if (image == null) { // we need to reload the image loadImage(tweet.person.image); } else { // the image is available, so we can reuse it without the // burden of having to download and reload it into memory. imageView = ImageView { image: image; } } } else { // the image is not cached, so lets load it loadImage(tweet.person.image); } }; function loadImage(url:String) { // create the image and imageview var image = Image { url: url height: IMAGE_SIZE preserveRatio: true backgroundLoading: true } imageView = ImageView { image: image; } if (useCache) { // put into cache using a SoftReference var softRef = new SoftReference(image); map.put(url, softRef); } else { map.remove(url); } } } [/jfx] You’ll note that in this example most of the code is pretty standard. A few variables are created for the image and text, and then I’ve gone the route of laying the content out in a Container, but you can achieve a similar layout using the available layout containers. Following this I have defined an onUpdate function, which is called whenever the cell should be updated. This is usually called due to a user interaction, which may potentially change the Cell.item value, which would of course require an update of the cell’s visuals. The bulk, and most important part, of the onUpdate function deals with loading the users profile image, or retrieving and reusing the cached version of it. Note the use of the global HashMap, which maps between the URL of the users image and the Image itself. Because it is global (i.e. static), this map will be available, and used, by all TwitterListCell instances. Also important to note is that I didn’t put the ImageView itself into the HashMap as a Node can not be placed in multiple positions in the scenegraph, but an Image can be. The rest of the code in this class really just deals with the fact that a SoftReference may clear out it’s reference to the Image object if the garbage collector needs the memory, in which case we need to reload the image again. The other obvious part is the need to also put the image into the cache if it’s not already there. Shown below is the end result, but remember that there is a working version of this demo in William AntΓ΄nio’s Twitter client, which is a very early work in progress. I hope this might be useful to people, and as always we’re keen to hear your thoughts and feedback, and what you’re hoping us to cover. Until next time – cheers! πŸ™‚
http://fxexperience.com/2010/06/custom-cell-caching/
CC-MAIN-2020-16
en
refinedweb
04 October 2010 11:26 [Source: ICIS news] BUDAPEST (ICIS)--BASF is expecting to complete the long-awaited spin-off of its styrenics business, named Styrolution, by the end of 2010, sources close to the company said on Monday. The spin-off company would include styrene and polystyrene outside of ?xml:namespace> Only styrene in No clear partner has been defined, he added. The name Styrolution was registered by BASF in June 2010. A BASF spokesman said on the sidelines of the meeting that a spin-off or potential sale of the business could be made within the next two years, adding that it was in an improved financial.
http://www.icis.com/Articles/2010/10/04/9398314/epca-10-basf-styrenics-spin-off-styrolution-set-for-end-of-2010.html
CC-MAIN-2013-20
en
refinedweb
- Padding. - Viewing Profile: Reputation: Zoner ZonerMember Since 13 Apr 2009 Offline Last Active Oct 29 2012 04:01 AM Community Stats - Group Members - Active Posts 231 - Profile Views 2,039 - Member Title Member - Age Age Unknown - Birthday Birthday Unknown - Gender Not Telling User Tools Contacts Zoner hasn't added any contacts yet. #4958584 CSM (based on nvidia's paper) swimming Posted by Zoner on 12 July 2012 - 05:25 PM #4937519 Number of arrays CPU can prefetch on Posted by Zoner on 04 May 2012 - 07:18 PM. #4923514 releasing a game built with old DirectX SDK's Posted by Zoner on 19 March 2012 - 09:14 PM #4909984 C++ SIMD/SSE optimization Posted by Zoner on 05 February 2012 - 06:11 PM The MSDN docs are a jumbled mess (and in multiple 'docs', SSE, SSE2, SSE4, some AVX are fairly separate doc-wise). #include <mmintrin.h> // MMX #include <xmmintrin.h> // SSE1 #include <emmintrin.h> // SSE2 #if (MATH_LEVEL & MATH_LEVEL_SSE3) #include <pmmintrin.h> // Intel SSE3 #endif #if (MATH_LEVEL & MATH_LEVEL_SSSE3) #include <tmmintrin.h> // Intel SSSE3 (the extra S is not a typo) #endif #if (MATH_LEVEL & MATH_LEVEL_SSE4_1) #include <smmintrin.h> // Intel SSE4.1 #endif #if (MATH_LEVEL & MATH_LEVEL_SSE4_2) #include <nmmintrin.h> // Intel SSE4.2 #endif #if (MATH_LEVEL & MATH_LEVEL_AES) #include <wmmintrin.h> // Intel AES instructions #endif #if (MATH_LEVEL & (MATH_LEVEL_AVX_128|MATH_LEVEL_AVX_256)) #include <immintrin.h> // Intel AVX instructions #endif //#include <intrin.h> // Includes all MSVC intrinsics, all of the above plus the crt and win32/win64 platform intrinsics #4908266 If developers hate Boost, what do they use? Posted by Zoner on 31 January 2012 - 08:43 PM I've never met anyone that admitted to even using C++ iostreams, let alone liking them or using them for anything beyond stuff in an academic environment (i.e. homework). STL and Boost pretty much require exception handling to be enabled. This is a dealbreaker for a lot of people, especially with codebases older than modern C++ codebases that are exception-safe. You are more or less forced to be 'C with Classes, type traits, and the STL/Boost templates and that don't allocate memory'. RAII design more or less requires exception handling for anything useful, as you can't put any interesting code in the constructors without being able to unwind (i.e. two phase initialization is required). The cleanup-on-scope aspect is useful though without exception handling, since the destructors aren't supposed to throw anyway. STL containers have poor to non-existant control over their memory management strategies. You can replace the 'allocator' for a container but it is near useless when the nodes linked list are forced to use the same allocator as the data they are pointing to, ruling out fixed size allocators for those objects etc. This is a lot of the motivation behind EASTL, having actual control, as the libraries are 'too generic'. And memory management ties heavily into threading: We use Unreal Engine here which approaches the 'ridiulous' side of the spectrum on the amount of dynamic memory allocation it does at runtime. The best weapon to fight this (as we cannot redesign the engine) is to break up the memory management into lots of heaps and fixed size allocators, so that any given allocation is unlikely or not at all going to contend with a lock from other threads. Stack based allocators are also a big help, but are very not-C++-like. My rule of thumb for using these libraries is if doesn't allocate memory, it is probably ok to use: algorithms for std::sort is quite useful even without proper STL containers, and outperforms qsort by quite a lot due to being able to inline everything. Type traits (either MS extensions, TR1, or Boost) can make your own templates quite a bit easier to write I've also never seen the need for thread libraries, the code just isn't that interesting or difficult to write (and libraries tend to do things like making the stack size hard to set, or everyone uses the library in their own code and you end up with 22 thread pools and 400 threads etc) #4907800 C++ SIMD/SSE optimization Posted by Zoner on 30 January 2012 - 04:57 PM Awesome, this works right out of the box! This code is around 30% faster than the native c++ code. Thanks for the fast response Zoner? The loop will likely need to be unrolled 2-4 more times as to pipeline better (i.e. use more registers until it starts spilling over onto the stack) If the data is aligned, the load and store can use the aligned 'non-u' versions instead. SIMD intrinics can only be audited by looking at optimized code (unoptimized SIMD code is pretty horrific), basically when an algorithm gets too complicated it has to spill various XMM registers onto the stack. So you have to build the code, check out the asm in a debugger and see if it is doing that or not. This is much less of a problem with 64 bit code as there are twice as many registers to work with. Re-using the same variables should work for a lot of code, although making the pointers use __restrict will probably be necessary so it can schedule the code more aggressively. If the restrict is helping the resulting asm should look something like: read A do work A read B do work B store A do more work on B read C store B do work C store C vs read A do work A store A read B do work B store B read C do work C store C #4907781 C++ SIMD/SSE optimization Posted by Zoner on 30 January 2012 - 03:56 PM static const __m128i GAlphaMask = _mm_set_epi32(0xFF000000,0xFF000000,0xFF000000,0xFF000000); // make this a global not in the function void foo() { unsigned int* f = (unsigned int*)frame; unsigned int* k = (unsigned int*)alphaKey; size_t numitems = mFrameHeight * mFrameWidth; size_t numloops = numitems / 4; size_t remainder = numitems - numloops * 4; for (size_t index=0;index<numloops; ++index) { __m128i val = _mm_loadu_si128((__m128i*)f); __m128i valmasked = _mm_and_si128(val, GAlphaMask); __m128i shiftA = _mm_srli_epi32(valmasked , 8); __m128i shiftB = _mm_srli_epi32(valmasked , 16); __m128i shiftC = _mm_srli_epi32(valmasked , 24); __m128i result = _mm_or_si128(_mm_or_si128(shiftA, shiftB), _mm_or_si128(shiftC, GAlphaMask)); _mm_storeu_si128((__m128i*)k, result); f += 4; k += 4; } // TODO - finish remainder with non-simd code } The loop will likely need to be unrolled 2-4 more times as to pipeline better (i.e. use more registers until it starts spilling over onto the stack) If the data is aligned, the load and store can use the aligned 'non-u' versions instead. #4905182 Forcing Alignment of SSE Intrinsics Types onto the Heap with Class Hierachies Posted by Zoner on 22 January 2012 - 02:03 PM This is ultimately windows code, as they have an aligned heap available out of the box (_aligned_malloc) mDEFAULT_ALIGNMENT is 16 in my codebase, ideally you would pass in the alignof the type here but the C++ ABI only passes in size to new (and you don't have the ability to get the type information either). void* mAlloc(zSIZE size, zSIZE alignment) { void* pointer = _aligned_malloc(size, alignment); if (pointer == null) { throw std::bad_alloc(); } return pointer; } void mFree(void* pointer) { _aligned_free(pointer); } void* operator new(zSIZE allocationSize) { return mAlloc(allocationSize, mDEFAULT_ALIGNMENT); } void* operator new[](zSIZE allocationSize) { return mAlloc(allocationSize, mDEFAULT_ALIGNMENT); } void operator delete(void* pointer) { mFree(pointer); } void operator delete[](void* pointer) { mFree(pointer); } #4905180 Forcing Alignment of SSE Intrinsics Types onto the Heap with Class Hierachies Posted by Zoner on 22 January 2012 - 02:01 PM A combination of an aligned allocator and compiler-specific alignment attributes should suffice. For Visual C++, look at __declspec(align). For GCC, look at __attribute__((aligned)). My intrinsic wrappers assert the alignment in the constructors and copy constructors. It can be useful to leave these asserts enabled in release builds for a while, as on the Windows it seems allocations from the debug heap are aligned sufficiently for SSE2. The SSE types __m128 and friends already have declspec align 16 applied to them for you. Placing them as a member in a struct will promote the structs alignment. Looking at the original data structures from the first post, the compiler should be generating 12 bytes of padding before the worldMatrix member in struct Transform, and also padding 4 bytes between the struct and the base class (as their alignments are different) struct TestA { zBYTE bytey; }; struct TestB : public TestA { vfloat vecy; }; zSIZE sizeA = sizeof(TestA); zSIZE alignA = alignof(TestA); zSIZE sizeB = sizeof(TestB); zSIZE alignB = alignof(TestB); zSIZE offset_bytey = offsetof(TestB, bytey); zSIZE offset_vecy = offsetof(TestB, vecy); watch window: sizeA=1 alignA=1 sizeB=32 alignB=16 offset_bytey=0 offset_vecy=16 #4833576 Return Values Posted by Zoner on 10 July 2011 - 09:50 PM Wow, thanks for being so thourough! I'm perplexed as to why so many heavyweight math libraries seem to worry about this! D3DX being one of them. EDIT: Ok, so it's probably still better to use:Vector3DAdd (a, b, dest); than:dest = a + b; Right? To avoid temp objects? Also, I hear that if the return value is named, it will have to construct/destruct it regardless. I'm trying to see the assembly myself but the compiler optimizes out all my test code EDIT2: Also, just to get this out of the way, yes, the profiler is telling me that vector math could go to be faster since my physics engine is choking atm. I've written a few math libraries, and the temporaries are rarely a problem, provided you keep the structs containing the objects pod-like. The last big complication I've seen is that sometimes SIMD wrapper classes don't interoperate well with exception handling being enabled. Not a problem, exceptions can be turned off! D3DX is structured the way it is for a few reasons: - It is C like on purpose, D3D has had a history of being supported for C to some degree, as you can still find all those wonderfully ugly macros in the headers to call the COM functions from C. - The D3DX dll is designed to be able to run on a wide variety of hardware (aka real old), most applications pick a minspec for the CPU instruction set and use that. SSE2 is a pretty solid choice these days (and is also the guaranteed minimum for x64). However if you write the math using D3DX and 'plain C/C++' it will work on all platforms. - The x86 ABI can't pass SSE instructions by value in the x86 ABI, so pointers (or references) are used instead. - The x64's ABI can pass SSE datatypes by value at the code level, but on the backend they are always passed by pointer, so only inlined code can keep the values in registers. With this restriction you might as well explicitly use pointers or references in the code, so you can see the 'cost' better, and also to cross-compile back in 32 bit. - A lot of 'basic' SIMD math operations take more than two arguments, which don't fit into existing C++ operators. This basically causes you to structure the lowest level of a math library in terms of C-like function primitives, of which the operator overloads can use as needed to provide syntactic sugar. - Some functions return more than one value, which also gets to be rather annoying without wrapper it in a struct, some tuple container, so a lot of times its easier to just have multiple out arguments. For example: A function that computes sin and cos simultaneous, frequently can be done at the same or similar cost as either sin or cos on quite a bit of hardware. Another example: Matrix inversion functions can also return the determinant as they have to compute it anyway as part of their inversion. Microsoft did take the time to do some runtime patchups to the function calls to call CPU specific functions (SSE, SSE2 etc), so you basically end up with this mix of 'better than plain C code' and 'worse than pure SIMD code'. #4833250 Mip maps..... no understanding Posted by Zoner on 09 July 2011 - 11:22 PM The lower the resolution of the mipmap, the better it maps to the cache on the GPU, which speeds things up (quite a lot actually). The hardware normally automatically picks which mipmap level to display quite well, except when working in screen space style effects. The filtering modes work in three 'dimensions': mag filter - filter applied when the image is up-resed (typically when you are already rendering the largest mip level and there isn't another one to switch to) min filter - filter applied when the image is down-resed mip filter - filter applied between mip levels (on, off, linear) When the mip filter is set to linear, the hardware picks a blend of of two mipmap levels to display, so the effect looks more seamless. The UV you feed into the fetch causes the hardware to fetch the color from two miplevels, and it automatically crossfades them together. If you set the mipfilter to nearest, it will only fetch one mip, and this will typically generate seams in the world you render, where the resolution of the texture jumps (as the hardware selects them automatically in most cases). This is faster however since it only has to do half the work. When the mag filter is set to linear, the hardware fetches a 2x2 block of pixels from a single mip level, and crossfades them together with a biliinear filter. If the filter is set to anisotropic, it uses a proprietary multi-sample kernel to sample multiple sets of pixels from the image in various patterns. The number of samples corresponds to the anisotropic setting (from 2 to 16), at a substantial cost to performance in most cases. However it helps maintain the image quality when the polygons are nearly parallel to the camera, and this can be pretty important for text on signs, stripes on roads, and other objects that tend to mip to transparent values too fast (chain link fences). You can set the hardware in quite a few configurations, as these settings are more or less mutually exclusive with each other. #4832710 First person weapon with different FOV in deferred engine Posted by Zoner on 08 July 2011 - 04:00 AM First off, the good news: Provided the near and far planes are the same for the projections, the depth values will be equivalent. What is different is the FOV is different so the screen space XY positions of the pixels will come out different, which generally only matters when de-projecting a screen pixel back into the world. For most effects that do this, the depth is usable as-is for depth based fog, and whatnot. This only requires making your artists cope with the same near plane as the world (and they will beg and scream for a closer plane for scopes and things that get right up on the near plane, but you have to say NO!, you get the custom FOV but you get it with this limitation). And the bad news: The depths are quite literally equivalent, which means the weapon will draw 'in the world' and have the rather annoying behavior of poking through walls you walk up to. So the fix is to render the gun several times, primarily for depth-only or stencil rendering. One possibility (and the one we use) is to render the gun with a viewport set to a range of 0/0.0000001 in order to get the gun to occlude the world. This is good for performance reasons, but bad if you have post process effects that absolutely must be able to to sample pixels from 'behind the gun'. This is a trade off someone has to sign off on. Performance usually wins that argument though, so we have opted to have the guns occlude everything (including hardware occlusion queries!). Another possibility is to render a pass to create a stencil mask of the weapon and occlude with that, but there are some complications that need to be understood, which I will talk about down near the bottom of this post. Forward renderers can just draw the gun later in the frame at their leisure for the most part, after clearing depth (another thing I need to explain later) and drawing the gun. Deferred rendering doesn't have it as easy, as you need the gun to exist properly in the GBuffers when doing lighting passes, for both performance reasons, and to accept and real-time shadowmaps properly along with the world. More good news: Aside from the case of the gun or the player needing to cast shaders, the weapon will generally look just fine lit (and shadowed!) with the not-quite correct de-projected world space position. The depth will be completely correct from the view-origin's point of view, and the gun itself wont be too far from a correct XY screen position so it will just light and shadow just fine. UNLESS you attach a tiny light directly to the gun, at which point the light needs its position adjusted to be in the guns coordinate system instead of the world, so the gun looks correct when lit by the light. Muzzle flash sprites and whatnot have a similar problem, but in reverse, in that the sprite needs to be placed in the world correctly relative to the gun's barrel. More bad news: Getting the gun into the GBuffer properly and without breaking performance can be a bit tricky. We store a version of the scene depth's W coordinate in the alpha channel of the framebuffer (which is also the same buffer that is the the GBuffer's emissive buffer when it is generated). This is true of the PC and PS3. Rendering is basically Clear Depth, Render Gun Depth to the super-tight viewport, Render Depth, Render Scene GBuffer, Clear Z, Render Gun to GBuffer, perform lighting, translucency, post process etc. We can clear Z in this setup because the rest of the engine reads the alpha channel version of the depth for everything. The XBOX version read's the Z-buffer directly, so we have to preserve world depth values, so instead of 'Clear Z' we render the GUN twice, once a depth-always write and the second with the traditional less-equal test. This is necessary because the viewport clamped depths are not something you want the game to be using. This particular method is an extremely bad idea for PC DX9 hardware in general (NVIDIA's in particular). The hardware is going to fight you: You might be tempted to use oDepth in a shader. This is a bad idea, in that it disables the early depth & stencil reject feature of the hardware when the pixel shader outputs a custom depth. It is also not necessary for getting guns showing up correctly with a custom FOV. It is also a bad idea because you will also need to run a pixel shader when doing depth-only rendering, and it is extremely slow to do this (hardware LOVES rendering depth-only no-pixel-shader setups!). This is also the same reason why you should limit allowing masked textures to be used in shadowmap rendering, as they are significantly slower to render into the shadowmap (somewhere between 4 and 20x slower, its kind of insane how big of a difference it can be). Getting it visually correct is not the real challenge. The real challenges lie in how many ways the hardware can break and performance can go off a cliff. The early-depth and early-stencil reject behaviors of the hardware are particularlly finiky, which gets progressively worse the older the hardware is. NVIDIA's name for these culls are called ZCull and SCull. ATI(AMD whatever) calls it Hi-Z and Hi-Stencil. These early-rejects can be disabled both by some combinations of render states, as well as changing your depth test direction in the middle of the frame. When these early-rejects are not working your pixel shaders will execute for all pixels, even if depth or stencil tests kill the pixels. The result will be visually correct, but the official location for these depth and stencil tests is after the pixel shader. Writing depth or stencil while testing depth or stencil will disable the early-reject for the draw calls doing this. This is sometimes unavoidable, but luckily only affects the specific draw calls that are setup this way. On a lot of NVIDIA hardware, if you change the depth write direction (like I mentioned doing a pass of 'always' before 'lessequal' in order to the fix the Z-buffer on the XBOX), the zcull and scull will be disabled UNTIL THE NEXT DEPTH & STENCIL CLEAR. I expect this to be better or a non-problem with Geforce 280 series and newer, but haven't looked into it for sure. This also means you should always clear both at least at the start of the frame (and use the API's to do it, and not render a quad). This also makes the alternating depth lessequal/greaterequal every other frame trick to try and avoid depth clears a colossally bad idea. The early-stencil test is very limited. On most hardware It pretty much caches the result of a group of stencil tests on some block size number of pixels and compresses it down to a a few bits. This means that using the stencil buffer for storing anything other than 0 and 'non-zero' pretty much worthless. And if you test for anything other than ==0 or !=0, the early stencil reject is not likely to work for you. It also means sharing the stencil buffer with a second mask is extremely difficult if you care about performance, and I definitely don't recommend trying it unless you can afford a second depth buffer with its own stencil buffer. #4826787 Write BITMAPINFOHEADER image data to IDirect3DTexture9 Posted by Zoner on 23 June 2011 - 08:11 AM The pBits pointer from the lock is the address of the upper left corner of the texture. The lock structure contains how many bytes to advance the pointer to get to the next row (it might be padded!). A general texture copy loop that doesn't require format conversion (ARGB to BGRA swizzling etc) can use memcpy, but needs to be written correctly: const BYTE* src = bitmapinfo.pointer.whatever; size_t numBytesPerRowSrc = bitmapinfo.width * (bitmapinfo.bitsperpixel/8); // warning: pseudocode BYTE* dst = lock.pBits; size_t numBytesPerRowDst = lock.pitch; size_t numBytesToCopyPerRow = min(numBytesPerRowSrc, numBytesPerRowDst); for (y=0; y<numRows; ++y) { memcpy(dst, src, numBytesToCopyPerRow); src += numBytesPerRowSrc; dst += numBytesPerRowDst; } As an optimimzation you can test if (numBytesPerRowSrc==numBytesPerRowDst) and do it with a single memcpy instead of with a loop. You are most likely to run into pitch being padded with non-power of two textures, and other special cases (a 1x2 U8V8 textures that is 2 bytes, will likely yield a pitch of at least 4 bytes for instance). If you need to do format conversion its easiest to cast the two pointers to a struct mapped to the pixel layout, and replace the memcpy with custom code. #4823820 Depth of Field Posted by Zoner on 15 June 2011 - 04:05 PM Typically a pipeline looks something like this: depth pass opaque pass (possibly with forward lighting) opaque lighting pass (either deferred or additional passes for multiple lights) opaque post processing (screen space depth fog) translucency pass (glass, particles, etc) post processing (dof, bloom, distortion, color correction, etc) final post processing (gamma correction) SSAO ideally is part of the lighting pass (and even better if it can be made to only affect diffuse lighting) Screen space fog is easy to do for opaque values (as they have easy to access depth information) but then you need to solve fog for translucency. In Unreal, DOF and Bloom are frequently combined into the same operation, though this restricts the bloom kernel quite a bit, but it is fast. So to answer the question: if it is wrong, change it. Screen space algorithms are pretty easy to swap or merge, especially compared to rendering a normal scene. A natural order should fall out pretty quickly. #4818883 For being good game programmer ,good to use SDL? Posted by Zoner on 02 June 2011 - 04:41 PM - It probably has the wrong open source license (its unusable as-is on platforms that don't have dll support), and for the platform's you would need the commercial license won't come with code to run on that platform (PS3, XBOX, etc) anyway, which more or less makes me wonder why the commercial license of SDL costs money. - You still have to deal with shaders (GLSL, Cg, HLSL), which arguably is where a huge portion of the code is going to live. Supporting more than one flavor is a huge amount of work, which can be mitigated with a language neutral shader generator (editor etc), which is also a huge amount of work to create. - Graphics API's in the grand scheme of things aren't all that complicated since the hardware does all of the work, using the APIs raw or even making a basic wrapper for one is a pretty trivial thing to do. - For C++ the real time consuming things end up being things like serialization, proper localization support and string handling, and memory management (multiple heaps, streaming textures etc)
http://www.gamedev.net/user/153524-zoner/?tab=reputation
CC-MAIN-2013-20
en
refinedweb
Exception Hunter - 3.0 Learning Exception Hunter - 3.0 Analyzing Your Code To start analyzing your code: - Add the assemblies. When you first start Exception Hunter, you need to add the assemblies that you want to analyze. If another assembly is referenced by an added assembly, it is added automatically. Any referenced assembly that cannot be found on your file system is identified as "Not Found"; you can then browse to locate it, or ignore it in the analysis. - Locate the method you want to analyze. To select the method you want to analyze, you can search for a method using the Find box, or drill down to view all namespaces, classes, structs, and their methods. - View the results. Exceptions are listed by type. You can explore the list of exceptions by selecting an exception type to see all the places in your code that the exception is thrown. - Drill down through the stack trace for the selected exception class. To find situations in which the exception may be thrown, view the source code of the method selected in the stack trace. - Adjust the options, if required. From the Tools menu, select Options to display the Options dialog box, in which you can set a number of options for how Exception Hunter analyzes your code. For example, you can set the version of the .NET Framework for detecting exceptions, or use a more detailed analysis, which detects more exceptions but can take longer to run. Hints are available for each option in the Options dialog. Notes - Exception Hunter cannot detect exceptions that may be thrown when following delegate calls, for example Event Handler calls. You should therefore analyze the target methods for such delegates. We recommend that you wrap any exceptions and throw them as a domain-specific exception type. - Static classes appear in the list as abstract sealed classes, as this is how they are represented by the .NET CLR. - Runtime Exceptions (other than some NullReferenceExceptions and InvalidCastExceptions) generated by the .NET CLR are not detected by Exception Hunter. Was this article helpful? Thanks for your feedback!
http://www.red-gate.com/SupportCenter/Content/Exception_Hunter/help/3.0/exc_usingexceptionhunter
CC-MAIN-2013-20
en
refinedweb
Languages natively. The document at Discusses this situation. It is clearly mentioned that SharePoint Portal Server 2003 does not support a mixture of different localized portal servers on the server farm, nor does it support a mixture of different localized Windows Server 2003 servers. All servers running Windows Server 2003 in a farm topology must be in the same language, and all servers running SharePoint Portal Server 2003 in a server farm must be in the same language Still, if you want to perform localization in webparts for whatever reasons, it is simple to do so. We will create a simple MOSS 2007 webpart with localization support. Healthcare.resx Healthcare.de.resx for German Healthcare.fr.resx for French Note that you cannot add App_GlobalResources to the project since it is only valid for the web sites And not the class library projects. using System.Globalization; using System.Resources; We will pick the language settings from the web.config file. Add the following tag to the <appsettings> section of the web.config file of your sharepoint web site. <add key=culture Add the following class variables to your webpart source file. CultureInfo cult = CultureInfo.CreateSpecificCulture(ConfigurationSettings.AppSettings["culture"]); ResourceManager rm; This will create a CultureInfo object based on the language settings in the web.config file. In the constructor of your webpart add a line like this rm = new ResourceManager("CustomWebParts.HealthCare", this.GetType().Assembly); Where HealthCare is the name of your resource file ex HealthCare.resx and HealthCare.de.resx And CustomWebParts is the namespace of your webpart. Basically this line creates a resource manager for the specific resource file. Now we will load all the strings which are used in the webpart source to the resource file. This is fairly easy. Now to load strings at any point in the webpart source we will use the GetString() method of ResourceManager class. For Ex protected override void CreateChildControls() { base.CreateChildControls(); this.Title = rm.GetString("CarePlan", cult); } This method sets the title of the webpart to the appropriate language based on the web.config file The above localization is not specific to sharepoint instead it's an asp.net 2.0 feature. I can mail the source code of the webpart to anyone if needed. Send your suggestions and comments. PingBack from pleaes can you send me source code my email is [email protected] Please send your source code to me! My email is [email protected] Thanks in advance! --------------------------- Forest Ling can u send me the code to [email protected] [email protected] Can u please send me the entire source code to my mail ID i.e. [email protected] can u send me the source code my email id is [email protected] can you send me source code [email protected] Dear Friend, Is it possible to implement a controll, to change language MOSS language. In back end it should load a resource files based on choosen language. Thank in advance (we are ready to pay for this solution) Will this code work for Sharepoint 2003 as well? I am trying to make my custom webpart read the value from appsettings of the web.config. Thanks [email protected] Hi I learnt about localization(static contents) or any reference links. Thanks in Advance
http://blogs.msdn.com/b/mahuja/archive/2008/03/31/localization-in-webparts.aspx
CC-MAIN-2013-20
en
refinedweb
Welcome.and std::wstringtypes (see <string>) instead of raw char[]arrays. C++ Standard Library containers like vector, list, and mapinstead of raw arrays or custom containers. See <vector>, <list>, and <map>. C++ Standard Library algorithms instead of manually coded ones. Exceptions, to report and handle error conditions. Lock-free inter-thread communication using C++ Standard Library std::atomic<>(see <atomic>) instead of other inter-thread communication mechanisms. Inline lambda functions instead of small functions implemented separately. Range-based for loops to write more robust loops that work with arrays, C++ Standard Library++: #include <vector> void f() { // Assume circle and shape are user-defined types circle* p = new circle( 42 ); vector<shape*> v = load_shapes(); for( vector<circle*>::iterator i = v.begin(); i != v.end(); ++i ) { if( *i && **i == *p ) cout << **i << " is a match\n"; } // CAUTION: If v's pointers own the objects, then you // must delete them all before v goes out of scope. // If v's pointers do not own the objects, and you delete // them here, any code that tries to dereference copies // of the pointers will cause null pointer exceptions. for( vector<circle*>::iterator i = v.begin(); i != v.end(); ++i ) { delete *i; // not exception safe } // Don't forget to delete this, too. delete p; } // end f() Here's how the same thing is accomplished in modern C++: #include <memory> #include <vector> void f() { // ... auto p = make_shared<circle>( 42 ); vector<shared_ptr<shape>> v = load_shapes(); for( auto& s : v ) { if( s && *s == *p ) { cout << *s << " is a match\n"; } } } In modern C++, you don't have to use new/delete or explicit exception handling because you can use smart pointers instead. When you use the auto type deduction and lambda function, you can write code quicker, tighten it, and understand it better. And a range-based for loop is cleaner, easier to use, and less prone to unintended errors than a C-style C++ Standard Library. - Uniform Initialization and Delegating Constructors Object Lifetime And Resource Management Objects Own Resources (RAII) - Pimpl For Compile-Time Encapsulation - - String and I/O Formatting (Modern C++) Errors and Exception Handling Portability At ABI Boundaries For more information, see the Stack Overflow article Which C++ idioms are deprecated in C++11. See also C++ Language Reference Lambda Expressions C++ Standard Library Visual C++ language conformance Feedback
https://docs.microsoft.com/en-us/cpp/cpp/welcome-back-to-cpp-modern-cpp?redirectedfrom=MSDN&view=vs-2019
CC-MAIN-2019-43
en
refinedweb
On Thu, Jul 05, 2012 at 02:43:42PM +0300, Andy Shevchenko wrote:> The issue is in plain array of the numbers that are assigned to the devices.> Somehow looks better to have real namespaces, or even hide irq number in the API> struct device, request_irq(), but keep reference between driver and PIC via some> object.> So, given solution just hides an issue, but doesn't resolve it fully> from my p.o.v.This is unrelated to what you're talking about. The devices concernedare mostly MFDs which use their own interrupts. Where other things needto use the interrupt numbers then legacy mappings should still be usedand IRQ domains have no effect at all on the situation.[unhandled content-type:application/pgp-signature]
https://lkml.org/lkml/2012/7/5/161
CC-MAIN-2019-43
en
refinedweb
Gra: $ grails create-domain-class grails.data.Movie Second we will generate the asynchronous controller using the new generate-async-controller command: $ grails generate-async-controller grails.data.Movie Grails now generates an asynchronous controller with the name MovieController. Below you can see the default implementation of the index method: def index(Integer max) { params.max = Math.min(max ?: 10, 100) Movie.async.task { [movieInstanceList: list(params), count: count() ] }.then { result -> respond result.movieInstanceList, model:[movieInstanceCount: result.count] } } The async namespace makes sure GORM methods in the task method will be performed in a different thread and therefore is asynchronous. The task method which is used, returns a [Promises]() object which you can use to perform callback operations like onError and onComplete.
https://blog.jdriven.com/2014/10/grails-generate-async-controller/
CC-MAIN-2019-43
en
refinedweb
After two base proxy classes are added, we need to make WorkerMessagingProxy class derive from these two base classes. Created attachment 27375 [details] Proposed Patch This patch is to make WorkerMessagingProxy derive from two base proxy classes introduced in issue 23776. The next patch is to change to use different proxy pointers. ChangeLog: WorkerMessaingProxy sp These header files seem to be missing from the patch: #include "WorkerContextProxyBase.h" #include "WorkerObjectProxyBase.h" I see that have the header files in another patch. I'd recommend setting the "depends on" field above to make this more clear. This looks good to me (just needs the typo fixed in the change log). Created attachment 27415 [details] Proposed Patch Comment on attachment 27375 [details] Proposed Patch new patch obsoletes previous one. It would be nice to fix the typo: ChangeLog: WorkerMessaingProxy sp Created attachment 27442 [details] Proposed Patch All fixed. Thanks. Comment on attachment 27415 [details] Proposed Patch New patch makes this one obsolete. Looks good to me. Comment on attachment 27442 [details] Proposed Patch r=me. I think that to validate this change, you need to also change the type of Worker::m_messagingProxy though. // Only use these methods on the worker object thread. - void terminate(); bool askedToTerminate() const { return m_askedToTerminate; } There's only one method left here, so the comment needs to be adjusted. Committed revision 40781.
https://bugs.webkit.org/show_bug.cgi?id=23777
CC-MAIN-2019-43
en
refinedweb
Modal Dialog with Qt Components on Meego How to make a Modal Dialog with Qt Components on MeeGo There is a QML Dialog element in Qt Quick Components in MeeGo 1.2 Harmattan API. But this dialog is not modal - i.e. it can be closed by pressing on any part of the dialog's window, not only on the buttons. Such behavior is not good in some cases - any accidental touch can close the dialog's window. There is no way through the API to make this dialog not respond on background clicks. Surely we can make such a window ourselves without using Dialog element, but it is not a quick or proper way. From the Dialog's source it can be discovered that a background click generates a privateClicked signal. Let's disable it by adding 2 lines into Dialog's creation: signal privateClicked onPrivateClicked: {} and we get truly modal dialog. Full example of page with dialog import QtQuick 1.1 import com.nokia.meego 1.0 Page { QueryDialog { id: quDialog signal privateClicked onPrivateClicked: {} anchors.centerIn: parent titleText: "Modal Dialog" rejectButtonText: "Cancel" onRejected: { console.log ("Rejected");} } Component.onCompleted: quDialog.open() }
https://wiki.qt.io/Modal_Dialog_with_Qt_Components_on_Meego
CC-MAIN-2019-43
en
refinedweb
Create scalable images that integrate well with your app’s text, and adjust the appearance of those images dynamically. Framework - UIKit Overview Symbol images give you a consistent set of icons to use in your app, and ensure that those icons adapt to different sizes and to app-specific content. A symbol image contains a vector-based shape that scales without losing its sharpness. Like you do with a template image, you apply a tint color to that shape to generate its final appearance in your app. You use template images in places where you display a simple shape or glyph, such as a bar button item. Although symbol images are images, they also support many traits typically associated with text. In fact, many of the system symbol images include letters, numbers, or symbolic characters in their content. For example, the system provides symbol images for mathematical operators for addition, subtraction, multiplication, and division. You can also apply text-related traits to a symbol image to make it look similar to the surrounding text: Apply a font text style to a symbol image so that it matches text with the same style. Font text styles also cause symbol images to scale to match the current Dynamic Type setting. Apply weights, such as thin, heavy, or bold, to a symbol image. Scale and style symbol images to match the font you use for text. Align a symbol image with neighboring text by using the image’s baseline. The system provides a collection of standard symbol images for you to use, including images for folders, the trash can, favorite items, and many more. Symbol images also adapt automatically to the current trait environment, reducing the work required to support differently sized interfaces. To browse the available symbol images, use the SF Symbols app, which you can download from the design resources page at developer.apple.com. You can also create symbol image files for your app’s custom iconography, as described in Creating Custom Symbol Images for Your App. Load a Symbol Image To load a symbol image, you must know its name. When configuring an image view in a storyboard file, you can browse the list of symbol image names in the Attribute inspector. When loading symbol images from your code, look up the name for any system symbol images using the SF Symbols app. For custom symbol images, create a Symbol Image Set asset in your asset catalog and load the asset by name. UIKit provides different paths for loading symbol images, based on whether the system provides the image or you do: Load system-supplied symbol images using the system, Image Named: system, or Image Named: compatible With Trait Collection: systemmethods of Image Named: with Configuration: UIImage. Load your app’s custom symbol images using the image, Named: image, or Named: in Bundle: compatible With Trait Collection: imagemethods of Named: in Bundle: with Configuration: UIImage. Each method looks only for its designated image type, which avoids namespace collisions between your custom images and the system images. The following example code loads the multiply system symbol image: UIImage* image = [UIImage systemImageNamed:@"multiply.circle.fill"]; Apply a Specific Appearance to a Symbol Image When you use the system or image method, UIKit returns an image object with the symbol image information. When you display that symbol image in an image view, the system applies default styling to it. An image with the default styling might appear out of place next to bold text or text that uses the UIFont text style. To make the symbol image blend in with the rest of your content, create a UIImage object with information about how to style the image. Configure the object with the text style you use for neighboring labels and text views, or specify the font you use in those views. You can add weight information to give the symbol image a thinner or thicker appearance. You can also specify whether you want the image to appear slightly larger or smaller than the neighboring text. Assign your configuration data to the preferred property of the UIImage containing your symbol image. Typically, you apply configuration data only to image views. For other types of system views, UIKit typically provides configuration data based on system requirements. For example, bars configure the symbol images in their bar button items to match the bar’s configuration. The only other time you might use configuration data is when drawing the image directly. In that case, use the image method to create a version of your image that includes the specified configuration data. Align Symbol Images with a Text Label by Using a Baseline When positioning an image view containing a symbol image next to a label, you can align the views using their baselines. To align views in your storyboard, select the two views and add a First Baseline constraint. Programmatically, you create this constraint by setting the first of both views to be equal, as shown in the following code example: NSLayoutConstraint.activate([ imageView!.firstBaselineAnchor.constraint(equalTo: label!.firstBaselineAnchor) ]) All system-provided symbol images include baseline information, and UIImage exposes the baseline value as an offset from the bottom of the image. Typically, the baseline of a symbol image aligns with the bottom of any text that appears in the image, but even symbol images without text have a baseline. In addition, you can add a baseline to any image by calling its image method. The following code example loads an image and adds a baseline 2 points up from the bottom of the image. let image = UIImage(named: "MyImage") let baselineImage = image?.withBaselineOffset(fromBottom: 2.0)
https://developer.apple.com/documentation/uikit/uiimage/configuring_and_displaying_symbol_images_in_your_ui?language=objc
CC-MAIN-2019-43
en
refinedweb
import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt import seaborn as sns import random sns.set() from jupyterthemes import jtplot # jtplot.style() jtplot.style('grade3', context='paper', fscale=1.5, ticks=True, grid=False) # jtplot.figsize(x=15., y=9.,aspect=1.2) %matplotlib inline The first thing we should do at the very beginning of any data analysis is to get a feel for the data. Questions on the following lines should be asked, df = pd.read_csv("HR_comma_sep.csv") df.head() df.describe() df.info() sales 14999 non-null object salary 14999 non-null object dtypes: float64(2), int64(6), object(2) memory usage: 1.1+ MB # let's clean up the data df.rename(columns={'sales':'department'},inplace=True) df_n = pd.get_dummies(df,columns=['department','salary']) df_n.head() 5 rows Γ— 21 columns So, no missing values in this dataset. Let's move on to understanding the correlation between various features. Correlation corr_mat = df_n.corr() # Set up the matplotlib figure f, ax = plt.subplots(figsize=(15, 10)); # Draw the heatmap using seaborn sns.heatmap(corr_mat, square=True, ax=ax); /home/skd/anaconda2/envs/data-science-portfolio/lib/python2.7/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family [u'sans-serif'] not found. Falling back to DejaVu Sans (prop.get_family(), self.defaultFamily[fontext])) /home/skd/anaconda2/envs/data-science-portfolio/lib/python2.7/site-packages/matplotlib/figure.py:1743: UserWarning: This figure includes Axes that are not compatible with tight_layout, so its results might be incorrect. warnings.warn("This figure includes Axes that are not " A couple of interesting observations. df.head() How does satisfaction level vary among employees who left? From the plot, it seems that employees who left are less satisfied on an average. plt.figure(figsize=(10,6)) sns.boxplot(x='left',y='satisfaction_level',data=df); Does having more number of projects have any impact on employee churn? From the plots, it seems that people who churn (1) generally tend to have more projects than people who stay (0). From the Boxplot/CDF we can see that, ~50% of churners have 4 or more projects. fig,axarr = plt.subplots(ncols=2,figsize=(12,6)) # plt.figure(figsize=(10,6)) sns.boxplot(x='left',y='number_project',data=df,ax=axarr[0]); sns.kdeplot(df[df.left==0].number_project,label=0,cumulative=True,ax=axarr[1]); sns.kdeplot(df[df.left==1].number_project,label=1,cumulative=True,ax=axarr[1]); axarr[0].set_ylabel("Number of projects") axarr[1].set_ylabel("CDF") axarr[1].set_xlabel("Number of projects"); Does staying in the company for a long time make the employees more vulnerable to churn? left_over_years = pd.crosstab(df.time_spend_company,df.left,margins=True,normalize='index') left_over_years = left_over_years[left_over_years.index!='All'] left_over_years = left_over_years.round(2) left_over_years.head() fig, ax = plt.subplots(figsize=(10,6)) _ = left_over_years.plot(kind='bar',ax=ax) _ = plt.xlabel("number of years at the company") _ = plt.ylabel("% of employees") _ = plt.title("Distribution of employees after n-years") It's seen that employees start churning after completing 3 years at the company. There's an upward trend from year-3 onwards till year-5. Almost 57% of people who completed 5 years at the company churn. Then we see a downward trend after year 6. People who has spent 7 years or more are unlikely to churn. Why do people churn the most after 5 years? Is it because of promotion or some other factors are at play? employee_spent_5_years = df[df.time_spend_company==5] emp_5_yrs_df = pd.crosstab(employee_spent_5_years.promotion_last_5years, employee_spent_5_years.left,margins=True,normalize='index') emp_5_yrs_df = emp_5_yrs_df[emp_5_yrs_df.index!='All'] emp_5_yrs_df = emp_5_yrs_df.round(2) emp_5_yrs_df emp_5_yrs_df.plot(kind='bar') <matplotlib.axes._subplots.AxesSubplot at 0x7f738993e9d0> This is interesting. 57% of the employees who haven't been given any promotion in last 5 years surely churn. While, 94% of employees who have been given promotion in last 5 years surely stay!
https://nbviewer.jupyter.org/github/saikatkumardey/data-science/blob/master/employee-churn/employee-churn-EDA.ipynb
CC-MAIN-2019-43
en
refinedweb
Version 1.0d The GC-100 Network Adapter’s modular design concept provides a variety of capabilities that arecombined within a single enclosure. Each module provides a particular function, such as infrared (IR),digital input, or relay closures. A module may support one or more connectors of the same type. Forexample, an IR module has three independent IR outputs; whereas, a serial module has only one DB9connector for serial data. This is because the number of connectors a module can support is dictated byits 1.5 inch physical width. It is important to understand that a module’s address is determined solely by its physical position withinthe GC-100 enclosure. The concept is that each module occupies 1.5 inches of front panel space, even ifit’s part of a larger printed circuit board containing other module types. At power on, module addressesare assigned starting with 0 for the left-most modules and increasing sequentially to the right until allmodule addresses are assigned (see figure 1a). This presents a consistent programming interface asadditional modules are added in the empty locations. A connector’s address is its position within a module, starting at 1 on the left and increasing to the right. Acomplete connector address includes the module address and the connector location within the moduleseparated by a colon. See figure 1a for examples of connector addresses. Note: a connector’s addressdoes not necessarily have to agree with the front panel label. Below, the IR connector at address 5:3 islabeled as 6 on the front panel of the GC-100-12. GC-100-12 Figure 1a The GC-100 is set by default to use DHCP to automatically obtain an IP address. To determine the IPaddress of a GC-100 using DHCP, run the iHelp Discovery Utility. Within three seconds of power up, theGC-100 will announce its IP address and display it in the list. The GC-100 will also periodically announceits IP address at intervals between 10 and 60 seconds while powered up. If the GC-100 is connecteddirectly to a PC by way of a crossover cable, or if there isn’t a DHCP server present, the GC-100 will use192.168.1.70 as the default IP address. The IP address and DHCP settings can be changed on the GC-100 internal setup web pages by entering the GC-100’s IP address into a web browser’s address bar.Follow the link to Network Setup and select either DHCP or static and, if using a static IP, enter the new IP address and select β€œApply.” The GC-100 will restart with the newly assigned IP address. In most network environments, the GC-100 can also be accessed by name. The network name of a GC-100 is β€œGC-100_XXXXXXXXXXXX_GlobalCache” where X is the 12 character MAC address printed onthe bottom of the GC-100. For example, if the MAC address is 000C1E012345 the GC-100 network namewould be GC100_ 000C1E012345_GlobalCache. The beacon message has the following format (Note: The symbols β€œ<” and β€œ>” are actually included in thecode.): AMXB<-UUID=GC100_000C1E0AF4E1_GlobalCache><-SDKClass=Utility><- Make=GlobalCache><-Model=GC-100-12><-Revision=1.0.0><Config-Name=GC-100><Config- URL=> The UUID value contains the unique MAC address of the GC-100 and is also the name registered withthe DHCP server. The β€œModel” value can either be GC-100-12 or GC-100-06. A GC-100-18 will reportback as model GC-100-12. Communication with the GC-100 is accomplished by opening a TCP socket on Port 4998. All commandsand data, with the exception of serial (RS232) data, are communicated through Port 4998. Port 4998 isused for such things as GC-100 status, IR data, toggling relays, and reading digital input states. Allinformation, with the exception of serial data, is communicated by comma delimited ACSII text stringterminated by a carriage return (↡). Serial data is communicated over Ports 4999 and higher. Serial connections with the lowest modulenumber will communicate over Port 4999; serial connections with the next higher module number willcommunicate over Port 5000, and so on. Only one IR command can be executed at a time, so special consideration must be given when sendingback-to-back IR commands. In addition, IR commands may be set up to repeat their IR timing patternmultiple times, for increasing volume or fast forwarding a tape drive. This characteristic can be used tocreate some very desirable results. See β€œBack-to-Back IR Commands” in section 5.2.3. 5. Command Set Commands are always initiated by short ASCII string representing the command type. Typically, addressand data information will follow. The structures of GC-100 commands are described in the followingsections. Text enclosed in brackets ( <text> ) must be substituted by its ASCII definition. Multiple ASCIIchoices are divided by separator ( | ) characters. Note: commands are case sensitive. setstate,<connectoraddress>,<state>↡ where; <connectoraddress> is 3:2 (3rd module, 2nd relay in module) <state> is 1 (close contacts on a β€œnormally open” relay) setstate,3:2,1↡ getdevices The GC-100 command is used to determine installed modules and capabilities. Each module respondswith its address and type. This process is completed after receiving an endlistdevices response. Sent to GC-100: device where; <moduleaddress> is |1|2|3|4|….|n| <moduletype> is |3 RELAY|3 IR|1 SERIAL| endlistdevices getversion,<moduleaddress> The module version number may be obtained from any or all modules in a GC-100. Modules combinedon the same printed circuit board will have the same version number. getversion,<moduleaddress>↡ where; <moduleaddress> is |1|2|3|4|….|n| version version,<moduleaddress>,<textversionstring>↡ where; <moduleaddress> is |1|2|3|4|….|n| <textversionstring> can be any ACSII string blink The blink command is used to blink the power LED on the GC-100. This is especially helpful in a rackmount situation where the installer may need to positively identify a particular GC-100. blink,<onoff>↡ where: <onoff> is |0|1|. A value of 1 starts the power LED blinking, and a value of 0 stops it. set_NET The set_NET command allows the developer to configure the network settings of a GC-100 via the TCPconnection without having to access the web configuration pages. set_NET,0:1,<configlock>,<IP settings>↡ where: <configlock> is |LOCKED|UNLOCKED| <IP settings> is |DHCP|STATIC,IP address,Subnet,Gateway| Example : set_NET,0:1,DHCP↡ This will select DHCP IP address assignment set_NET,0:1,STATIC,192.168.1.70,255.255.255.0,192.168.1.1↡ This will select static IP address assignment and will assign the IP address values supplied. get_NET This command will retrieve the current network settings and return a comma delimited string with thesettings. get_NET,0:1↡ set_IR This command allows the developer to configure each of the individual IR ports as either IR output orsensor input. The possible modes are IR out, Sensor in, Sensor in with Auto-notify, and IR out no carrier.* set_IR,<connectoraddress>,<mode>↡ where: <connectoraddress> as defined in section 1 <mode> is |IR|SENSOR|SENSOR_NOTIFY|IR_NOCARRIER| Example: set_IR,5:2,SENSOR_NOTIFY↡ This will setup IR port # 5 as sensor input with auto-notify. get_IR This command will retrieve the current mode setting for a particular port. get_IR,<connectoraddress>↡ where: <connectoraddress> is as defined in section 1 set_SERIAL This command allows the developer to configure the serial ports on a GC-100. set_SERIAL,<connectoraddress>,<baudrate>,<flowcontrol>,<parity>↡ where: <connectoraddress> is as defined in section 1 <baudrate> is |57600|38400|19200|9600|4800|2400|1200| <flowcontrol> is |FLOW_HARDWARE|FLOW_NONE| <parity> is |PARITY_NO|PARITY_ODD|PARITY_EVEN| Example: set_SERIAL,1:1,38400,FLOW_HARDWARE,PARITY_NO↡ This will setup serial port #1 on the GC-100 to operate at 38400 baud, use hardware flow control, and no parity. get_SERIAL This command will retrieve the current serial settings for a particular serial port. get_SERIAL,<connectoraddress>↡ unknowncommand An unknowncommand response will be sent by the GC-100 if a command is not understood. This canhappen if, for example, a connector is set up as a digital input and the command sent is sendir. 5.2 IR Commands 5.2.1 IR Structure An IR, or infrared, transmission is created by sending an IR timing pattern to the GC-100. This pattern isa collection of <on> and <off> states modulated with a carrier frequency ( Ζ’ ) which is present during the<on> state. A carrier frequency is typically between 35 to 45 KHz with some equipment manufacturersusing 200 KHz and above. The length of time for an <on> or <off> state is calculated in units of thecarrier frequency period. For example, an <on> value of 24 modulated with a 40 KHz carrier frequencyproduces an <on> state of 600Β΅S, as calculated below. Figure 5.2a illustrates an IR timing pattern that would be created for the value shown below. IR timingpatterns typically have a long finally <off> value to ensure the next IR command is not interpreted as partof the current IR transmission. Period 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Figure 5.2a5.2.2 Sending IRsendir Control of IR devices is accomplished through the sendir command. Since IR commands may takeseveral hundred milliseconds to complete, the GC-100 provides an acknowledgment to indicate when it isready to accept the next command. sendir,<connectoraddress>,<ID>,<frequency>,<count>,<offset>,<on1>, <off1>,<on2>,<off2>,….,onN,offN↡ (where N is less than 128 or a total of 256 numbers) where; <connectoraddress> is as defined in section 1. <ID> is |0|1|2|…|65535| (1) (for the completeir command, see below) <frequency> is |20000|20001|….|500000| (in hertz) <count> is |0|1|2|….|31| (2) (the IR command is sent <count> times) <offset> is |1|3|5|….|255| (3) (used if <count> is greater than 1, see below) <on1> is |1|2|…|65635| (4) (measured in periods of the carrier frequency) <off1> is |1|2|…|65635| (4) (measured in periods of the carrier frequency) (1) The <ID> is an ASCII number generated by the sender of the sendir command, which is included later in the completeir command to indicate completion of the respective sendir transmission. (2) The <count> is the number of times an IR transmission is sent, if it is not halted early via a stopir or another IR command (see section 4.2.3). In all cases, the preamble is only sent once, see <offset> below. A <count> of β€œ0” is a special case where the IR timing pattern is continually repeated until halted. However, IR transmission will halt on its own after the IR timing pattern repeats 65535 times. (3) An <offset> applies when the <count> is greater than one. For IR commands that have preambles, an <offset> is employed to avoid repeating the preamble during repeated IR timing patterns. The <offset> value indicates the location within the timing pattern to start repeating the IR command as indicated below. The <offset> will always be an odd value since a timing pattern begins with an <on> state. For proper GC-100 operation, all <on> and <off> values in the timing pattern must be 4 or higher.All of the conditions above must be met for valid sendir commands. When a variable is missing or outsidethe accepted range an unknowncommand will be sent by the GC-100. As an exercise, the sendircommands below will trigger a GC-100-12 unknowncommand response. completeir All sendir commands are acknowledged with a completeir response from the GC-100 after completion ofthe IR transmission. The completeir response is associated with the sendir command through an <ID>.When utilized, the <ID>s can provide a unique identifier to determine which IR transmission hascompleted. completeir,<connectoraddress>,<ID>↡ where; <connectoraddress> is as defined in section 1. <ID> is |0|1|2|…|65535| The following will send the IR timing sequence illustrated in figure 4.2a to the 5 th IR connector onthe GC-100-12 shown in figure 1a. sendir,5:2,2445,40000,1,1,4,5,6,5↡ In the next example, the following two IR commands will send the same IR timing pattern. Note: thecarrier frequency is 34.5 KHz and <ID>s are different so as to provide unique completeiracknowledgments. The following is a simple IR timing pattern of 24,12,24,960 which is sent four timeswith a preamble of 34,48: sendir,5:2,4444,34500,1,1,34,48,24,12,24,960,24,12,24,960,24,12,24,960,24,12,24,960↡ sendir,5:2,45234,34500,4,3,34,48,24,12,24,960↡ completeir,5:2,4444↡ completeir,5:2,45234↡ The second IR command structure is the recommended method, avoiding long commands and allowingrepeats of the command to be halted if requested. See (5.2.3) below. A general discussion is necessary to better understand how IR commands are executed in the GC-100.IR commands are executed one at a time which, with large <count> values, may take several seconds tocomplete transmission. If a new IR command is received during execution of an earlier IR command, theIR command in progress will terminate; no further repeat timing patterns, due to a remaining <count>value, are transmitted. Therefore, IR commands with a <count> of 1 will always finish before the next IRcommand is started. Only the remaining portion of an IR command that may arise from restarting arepeating timing pattern is discarded. 34,48,24,12,24,960,24,12,24,960,24,12,24,960,24,12,24,960 ⇑ assume the next IR command is sent during the 24 state. However, if the next IR command is received at the location shown above, the repeating timing pattern24,12,24,960 is halted after completion of the current <count>; creating the timing pattern below. 34,48,24,12,24,960,24,12,24,960 This characteristic may be exploited to create desirable effects, such as increasing audio volume, fast-forwarding a DVD player, or any control requiring continuous IR transmissions. By using an appropriatelyhigh <count>, an IR command repeats until the desired volume or DVD scene is reached, whereupon it isthen halted by sending the next sendir or stopir command. In either case, when a sendir or stopir is usedto halt a previous IR command a completeir acknowledgment is not sent from the GC-100. Other non-IR commands are not affected by IR transmissions and execute when received. However, if asendir command is sent before an earlier IR transmission is finished, the new IR command will remain inthe GC-100 input queue along with all other non-IR subsequent commands until the present IRtransmission has halted, as explained above. This may take several hundred milliseconds. stopir A stopir command is used to halt repeating IR transmissions. After receipt of stopir the present IRtransmission will halt at the end of the current timing pattern. Any remaining <count> will be discarded. stopir,<connectoraddress>↡ where; <connectoraddress> is as defined in section 1. The GC-100 sends out notifications for digital input state changes as well as allowing the inputs to bepolled for their current state at any time. Digital input connectors are the same connectors used for IR output. The connector configuration isdetermined by the GC-100 configuration on an individual connector basis. For the following commands tooperate correctly the connector being addressed must be configured for digital input. If a commandrequests information from an improperly configured connector, an unknowncommand response will besent from the GC-100. getstate getstate,<connectoraddress>↡ where; <connectoraddress> is define in section 1. state state,<connectoraddress>,<inputstate>↡ where; <connectoraddress> is defined in section 1. <inputstate> is |0|1| Note: A "1" represents a high digital voltage level input or absence of an input (no connection) and a "0" is a low input. statechange If the sensor port has been configured as β€œSensor In with Auto-Notify”, the GC-100 automatically sends anotification message upon a state change of that digital input connector as follows: statechange,<connectoraddress>,<inputstate>↡ GC-100 relays are activated by sending a "1" state and deactivated with a "0." Activation of a normallyopen contact will close (or connect) the relay output pins, while a normally closed contact will open (ordisconnect) the relay output pins. Note: relay states are not preserved through a power cycle and allrelays will return to their inactive state until a 1 state is re-sent. setstate setstate,<connectoraddress>,<outputstate>↡ where; <connectoraddress> is defined in section 1. <outputstate> is |0|1| (where 0 is inactive, 1 is active) Response: state,<connectoraddress>,<0|1>↡ GC-100 serial is bi-directional, RS232 communication. All communication is 8 data bits and one stop bit.Baud rate is set through a GC-100 internal web page, or configuration command, up to 57.6 Kbaud.Parity and hardware flow control can also be set. Each serial input buffer is 255 bytes with no flow control.All serial data is passed through without interpretation via an assigned IP port. Each serial connector isassigned a unique port number. The serial connector with the lowest module number is assigned to IPPort 4999. The serial connector with next highest module number is assigned to IP Port 5000, and so on. If a serial buffer overflows, data will be lost. Parity errors and serial overflow are indicated on the webpage for serial port configuration. Overflow will not occur unless the network connection is blocked wherethe GC-100 is unable to communicate. 6. Error CodesThe chart below provides a list of error messages returned by the GC-100 from port 4998 and theexplanation of each message.
https://pt.scribd.com/document/375733940/API-GC-100-pdf
CC-MAIN-2019-43
en
refinedweb
Overview of Generics in the .NET Framework [This documentation is for preview only, and is subject to change in later releases. Blank topics are included as placeholders.] This topic provides an overview of generics in the .NET Framework and a summary of generic types or methods. It also defines the terminology used to discuss generics.. Public Class Generic(Of T) Public Field As T End Class public class Generic<T> { public T Field; } generic<typename T> public ref. Dim g As New Generic(Of String) g.Field = "A string" Generic<string> g = new Generic<string>(); g.Field = "A string"; Generic<String^>^ g = gcnew Generic<String^>(); g->Field = "A string"; Generics Terminology. Constraints are limits placed on generic type parameters. For example, you might limit a type parameter to types that implement the IComparer<T> generic interface, to ensure that instances of the type can be ordered. You can also constrain type parameters to types that have a particular base class, that have a default. Function Generic(Of T)(ByVal arg As T) As T Dim temp As T = arg ... End Function T Generic<T>(T arg) { T temp = arg; ...} generic<typename T> T Generic(T arg) { T temp = arg; ...}; Visual Basic, Introduction to Generics (C# Programming Guide),. See Also Tasks How to: Define a Generic Type with Reflection Emit Reference System.Collections.Generic System.Collections.ObjectModel Introduction to Generics (C# Programming Guide) Concepts When to Use Generic Collections Generic Types in Visual Basic Overview of Generics in Visual C++ Generic Collections in the .NET Framework Generic Delegates for Manipulating Arrays and Lists Advantages and Limitations of Generics Other Resources Commonly Used Collection Types Generics in the .NET Framework
https://docs.microsoft.com/en-us/previous-versions/ms172193(v=vs.100)?redirectedfrom=MSDN
CC-MAIN-2019-43
en
refinedweb
Host launch information for Server class. More... #include <gamerunnerstructs.h> Host launch information for Server class. Create Server dialog uses this to setup host information. However things that can be set through the Server class, like MOTD, max. clients, max. players, server name, etc. should be set through Server class' setters. Contents of this list will be passed as "+consoleCommand value" to the command line.
https://doomseeker.drdteam.org/docs/doomseeker_0.10/structHostInfo.php
CC-MAIN-2019-43
en
refinedweb
Member Since 1 Year Ago 4,410 Upload Excel Into Vuejs no this isn't what i want. i want the data in excels uploading into database table fields. Started a new conversation Convert Object To Array i am using vue paginator and vue-json-to-excel. Vue paginator needs data in object while vue-json-to-excel needs data in array. How can I make this work? Thanks Started a new conversation Upload Excel Into Vuejs I am developing a SPA using vue js and laravel. I want to know how to upload excel file into database by using vue js. I have been successfully export JSON from vue js to excel by using vue-json-excel. Started a new conversation Getting Files That Uploaded To Database To Show On Edit Form I have a CreateProjectLayout which allows user to upload related files into my database. I also have an EditProjectLayout which get all the data from the database to show in each form field. I want to know how to get files that uploaded to show in the form field. Attachments <div> <input type="file" id="files" ref="files" multiple v-on: </div> <br> <br> <div v- <img class="preview" v-bind: {{ file.name }} <div class="success-container" v- Success <input type="hidden" : </div> <div class="remove-container" v-else> <a class="btn btn-danger" type="button" v-on:Remove</a> </div> axios.get('/projects/project/'+app.$route.params.id) .then(function (response) { var data = response.data app.ID = data.id app.projectName = data.project_name app.totalPrice = data.total_price app.start_date = data.start_date app.end_date = data.end_date app.categoryData = data.project_category_id app.customerData = data.customer_id app.description = data.description app.rows = data.scopes app.files = ? Replied to Showing Category Name Instead Of Id I have already set up the one-to-one relationship. I don't know how to make the controller works. Started a new conversation Showing Category Name Instead Of Id I have two different tables - projects and project_categories. In projects table, I have category_id column. I want to show all column in projects and instead of showing category_id, I want to show name. Replied to Call To A Member Function GetClientOriginalName() On Null yes, i do Started a new conversation Call To A Member Function GetClientOriginalName() On Null Here's my controller namespace App\Http\Controllers; use App\FileEntry; use App\Project as Project; use Illuminate\Http\Request; use Illuminate\Support\Facades\File; use Illuminate\Support\Facades\Input; use Illuminate\Support\Facades\Storage; class FileEntriesController extends Controller { public function index() { $files = FileEntry::all(); return view('files.index', compact('files')); } public function create() { return view('files.create'); } public function save($id, Request $request) { $this->validate($request, [ // 'file' => 'image|max:3000' ]); $file = Input::file('file'); $filename = $file->getClientOriginalName(); $path = hash( 'sha256', time()); if(Storage::disk('uploads')->put($path.'/'.$filename, File::get($file))) { $entry = new FileEntry; $project_id = Project::find($id); $entry->project_id = $project_id; $entry->filename = $filename; $entry->mime = $file->getClientMimeType(); $entry->path = $path; $entry->size = $file->getClientSize(); $entry->save(); return response([ 'status' => true, 'msg' => 'A file has been saved!' ]); } return response()->json([ 'success' => false ], 500); } } Replied to Fetching Data From Database To Show In Options In Select2 thank you Replied to Fetching Data From Database To Show In Options In Select2 i'm using vue js and laravel. Do you have another example? Started a new conversation Fetching Data From Database To Show In Options In Select2 I have a form - a dropdown which I use select2. I want to search and select a name that is in database. I'm using axios to get data but I do not know how to display the name in the dropdown. Replied to Undefined Index: Features @arifkhn46 can you write it please? Started a new conversation Undefined Index: Features $rows = $request->get('rows'); foreach ($rows as $row){ $scope->features[] = $rows['features']; $scope->prices[] = $rows['prices']; }; Replied to Two Ids Waiting For Each Other it solved. Thanks Started a new conversation Two Ids Waiting For Each Other I want to save a form into two tables. I have two models using in one controller. There are two ids that are waiting for each other. I don't know how to fix this. Please help $project = new Project; $project->project_name = $request->projectName; $project->start_date = date("Y-m-d", strtotime($request->start_date)); $project->end_date = date("Y-m-d", strtotime($request->end_date)); $project->description = $request->description; $project->created_by = Auth::user()->id; $scope = new Scope; $scope->project_id = $project->id; $rows = $request->get('rows'); foreach ($rows as $row){ $scope->features = ($row['features']); $scope->prices = ($row['prices']); }; $scope->save(); $project->projectScope_id = $scope->id; Replied to Storing Array Value From Vue I want to store these two $projects->features = $request->get('row.features'); $projects->prices = $request->prices; Replied to Storing Array Value From Vue return{ ID:null, projectName:null, customerName:null, start_date:null, end_date:null, description:null, image:'', myValue: '', customer_names: ['Sok', 'Sao', 'Som'], rows: [ features = '', prices = '' ] } },return{ ID:null, projectName:null, customerName:null, start_date:null, end_date:null, description:null, image:'', myValue: '', customer_names: ['Sok', 'Sao', 'Som'], rows: [ features = '', prices = '' ] } }, <div id="SCOPE" v- <div class="row"> <div class="col-md-6 pr-1"> <div class="form-group"> <label>Features <i class="text-danger">*</i></label><textarea rows="4" class="form-control" placeholder="Features" name="features" v-</textarea> <small class="form-text text-danger">{{ errors.first('row.features') }}</small> </div> </div> <div class="col-md-4 pl-1"> <label>Prices <i class="text-danger">*</i></label> <input class="form-control" placeholder="Prices" name="prices" v-</input> <small class="form-text text-danger">{{ errors.first('row.prices') }}</small> </div> Started a new conversation Storing Array Value From Vue I want to store array value from a form in vue by using controller. I am using loop v-for in my form, and I want to get all the values from there. Please help. $project->project_name = $request->projectName; $project->customer_id = $request->customerName; $project->start_date = date("Y-m-d", strtotime($request->start_date)); $project->end_date = date("Y-m-d", strtotime($request->end_date)); $project->description = $request->description; $project->file_name = $request->file; $projects->features = $request->get('row.features'); $projects->prices = $request->prices; $project->created_by = Auth::user()->id; Started a new conversation Laravel Show Customer_name By Using Customer_id I am using laravel and vue js. I have two tables: customers and projects. I have a form - a dropdown form which user can select a customer_name. However, in the projects table, I want to store customer_id from the form selected by the user. I want to use id in customers table to show customer_name in the form. How this can be done? Thanks,
https://laracasts.com/@Hourlee
CC-MAIN-2019-43
en
refinedweb
Submitted byPawan SinghDepartment of Electrical and Computer Engineering Masters Committee:Advisor: Siddharth SuryanarayananSudipta ChakrabortyDan Zimmerle ABSTRACT The analysis of the electrical system dates back to the days when analog network analyzers wereused. With the advent of digital computers, many programs were written for power-flow and shortcircuit analysis for the improvement of the electrical system. Real-time computer simulationscan answer many what-if scenarios in the existing or the proposed power system. In this thesis,the standard IEEE 13-Node distribution feeder is developed and validated on a real-time platformOPAL-RTTM . The concept and the challenges of the real-time simulation are studied and addressed.Distributed energy resources includes some of the commonly used distributed generation and storage devices like diesel engine, solar photovoltaic array, and battery storage system are modeledand simulated on a real-time platform. A microgrid encompasses a portion of an electric powerdistribution which is located downstream of the distribution substation. Normally, the microgridoperates in paralleled mode with the grid; however, scheduled or forced isolation can take place.In such conditions, the microgrid must have the ability to operate stably and autonomously. Themicrogrid can operate in grid connected and islanded mode, both the operating modes are studiedin the last chapter. Towards the end, a simple microgrid controller modeled and simulated on thereal-time platform is developed for energy management and protection for the microgrid. ii ACKNOWLEDGEMENTS This thesis was possible by the help, guidance, and support of many people. I would like to thankmy advisor, Dr. Siddharth Suryanarayanan and Dr. Sudipta Chakraborty for the guidance andwisdom they provided throughout my time at Colorado State University and National RenewableEnergy Lab. Thank you also to Dr. Dan Zimmerle for agreeing to be a part of my thesis committee.I would especially like to thank Austin Nelson for helping and guiding me throughout theproject at National Renewable Energy Laboratory. I would also like to thank the Jerry Duggan andOPALRTTM technical support team for the advice and suggestions which were extremely helpfulto operate OPAL-RTTM at Colorado State University. Lastly I would like thank my family andfriends for their unwavering love and support. iii TABLE OF CONTENTS ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ACKNOWLEDGEMENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi INTRODUCTION 1.1 BACKGROUND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 LITERATURE REVIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 2.1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.1 2.1.2 RT-LABTM MODELING . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183.2.1 17 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173.1.1 3.2 30 4.1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 4.2 4.3 DIESEL GENERATOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 iv 4.45 MICROGRID 40 5.1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 5.2 5.3 PROTECTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 5.4 ENERGY MANAGEMENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 56 6.1 CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 6.2 FUTURE WORK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 REFERENCES 58 APPENDICES 65 A.1 A.2 A.3 A.4 . . . . . . . . . . . . . . . . . . . . . . . 79 LIST OF TABLES 3.1 Microgrid pu voltages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 under-voltage/over-voltage protection. . . . . . . . . . . . . . . . . . . . . . . . . . . 51 . . . 77 LIST OF FIGURES 2.2 2.3 2.4 2.5 2.6 2.7 3.3 CPU usage varying with the time step of the 13-Node distribution model. . . . . . . . 22 3.4 Splitting the IEEE 13-Node distribution model in master and slave subsystem. 3.5 3.6 Pu absolute voltage percentage error as compared to the 13-Node feeder test data. 3.7 3.8 Power Hardware in the Loop setup and feedback from the Hardware. . . . . . . . . . . 29 VSC control. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 4.4 4.5 4.6 Voltage and current during diesel generator synchronization with 13-Node distribution . . . . . . . . . . . . . . . . . . . . . . 19 . . . . 23 . . 25 . . . . . . . . . . . . . . . . . . . . . . . . 33 . . . . . . . . . . . . . . . . . . . . . . 36 model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 vii 4.7 . . . . . . . . . . . . . . . . . . . 39 4.8 Microgrid with DERs and microgrid switch connected to 13-Node distribution model. 5.5 5.6 5.7 Pu voltage and frequency during 3-phase fault on the grid (V < 50%). . . . . . . . . . 52 5.8 Pu voltage and frequency during 3-phase fault on the grid (V < 80%). . . . . . . . . . 53 5.9 Power management by the microgrid controller with variation in microgrid load and 43 . . . . . . . . . . . . . . . 44 PV array. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55A.1 Line parameters in SimulinkTM block. . . . . . . . . . . . . . . . . . . . . . . . . . . 73 viii Chapter 1INTRODUCTION1.1 BACKGROUND The modernized electricity grid, often termed as the Smart Grid, is a complex cyber-physicalentity in which classical power system technologies are intertwined with emerging advances intechnologies related to renewable energy sources, distributed resources, power electronics, controllable loads, sophisticated control system, diverse communication mediums and protocols, andactive end-users enabled with forecast and near real-time information. The distribution realm ofthe electricity grid is seeing unprecedented modernization with the advent of the above mentionedtechnologies [1]. A representative concept of the Smart Grid is the microgrid, a self-containingsubset of an area electric power system operating at distribution voltages with access to loads, controls and protection, and multiple sources of generation. Further, the microgrid presents itself as asingle unit of control to the utility grid, from which it may be intentionally islanded for operation[2].The interactions between the various component assets and the system in a complex cyberphysical realm may require comprehensive studies for full characterization of the critical infrastructure, i.e., the electricity grid. Test platforms for performing both real-time (RT) and hardwarein-the-loop (HIL) experiments enable such studies. The former is a fast digital simulation technique of time steps in the order of a few microseconds for quantifying steady-state, dynamic, andtransient characteristics over long (realistic) horizons of simulation. The latter is an experimentaltechnique where an actual hardware is included in the simulation loop for characterization. The high fidelity of the simulations will make the hardware under test oblivious to the lab environmentby mimicking reality- while not compromising on the grid-side complexity of the test procedure.The hardware can be tested, characterized, and validated for standard compliance, system-leveloperation, interoperability, functional specification, and communication variations.In that regard, the Energy Systems Integration Facility (ESIF) of the National Renewable Energy Laboratory (NREL) and collaborators from Colorado State University (CSU) designed anddeveloped a flexible cyber-physical platform for testing electric power microgrids. In this testplatform, actual hardware interfaces with a system or network-level abstractions of the electricgrid while maintaining the fidelity of the latter using sophisticated simulation models. In additionto testing hardware assets in a RT and HIL environment, the setup is capable of tests on differentcommunication phenomena (e.g., protocols, latency and bandwidth requirements), loss of package,cyber-attacks, and data management. The effort of the personnel from Advanced Power Engineering Laboratory (APEL) of CSU in developing the testbed at NREL relates to the following: a)development of real-time models of renewable generation, loads, electrical storage, and powerelectronics; and, b) development of the electrical backbone model for the microgrid system, i.e.,the distribution grid.This report details procedure, challenges and solutions, and results pertaining to the modelingand simulation of the IEEE 13-Node distribution network and distributed resources including adiesel engine, a solar photovoltaic array, and battery storage models on a real-time platform, i.e.,OPAL-RTTM . These simulations can be performed using a real-time simulator from OPAL-RTTM .The system can be used for controlling kW to MW scale power equipment, thus creating realistictest platforms to conduct integration testing at actual power and load levels to evaluate componentand system performance before commercialization [3]. LITERATURE REVIEW Digital real-time simulator has been used for many years in power system sector; and there aremany commercial simulators have been developed based on Dommels algorithm implemented inEMTP simulation program [4]. The OPAL-RTTM real-time simulator based on the user-friendlycommercial product consists of ARTEMIS and RT-LAB. Testing of complex HVDC networks,SVCs, STATCOMs and FACTS device control systems, under steady state and transient operatingconditions, is a mandatory practice during both the controller development phase and before finalsystem commissioning [5]-[6].The integration of DG devices, including some microgrid applications, and renewable energysources (RES), such as wind farms, is one of the primary challenges facing electrical engineerstoday [7]-[8]. It requires in-depth analysis and the contributions of many engineers from differentspecialized fields. According to IEEE std. 1547-2003, intentional or planned islanding is still under consideration by many utilities [9]-[10]. The Consortium for Electric Reliability TechnologySolutions (CERTS) executive summary report in [11] presents the results of ongoing investigations on development of high power electronic systems for distributed generation systems usingstandardized approaches for integrating the components that comprise power converters.The balance between supply and demand of power is an important requirement of microgridin both the islanded and the grid connected mode. In grid connected mode the power requirementis met by exchanging the power with the grid. [12], [13]. Microgrid technologies such as energymanagement, protection control, power quality, and field test have been studied. A distributedintelligent energy management was developed in [13], [14]. Pilot plants for microgrid have beendeveloped and tested using emulated laboratory microgrid [15]. Demonstration and pilot projectsfor demonstration and testing of microgrid [15]-[21]. Field tests for frequency and voltage control,and utility interconnection devices [15]-[17]. Hardware in loop technologies are has been usedin various engineering technologies such as power electronics, robotics, and automotive system. [22]-[24]. Real time digital simulator (RTDS) is a special purpose computer designed to studythe electromagnetic transients in real-time. RTDS utilizes the advanced parallel processing technique. RTDS have been used for testing of high voltage direct current, static var compensators,and protective relays [25]-[31].During literature survey on the aspects of integration of DERs it is further revealed that thelatest technological development in this field of research, in recent years, is aimed at the maximum utilization of various types of renewable energy sources (RESs) due to great advantage ofabundance availability and less environment impact. The main objective of this study is to develop a real-time model of a distribution system for HILstudy. The model should be validated and quantified on a real-time platform. The study shouldpresent the challenges encountered and the solution to those challenges. The distributed resourcesshould be selected and modeled in a real-time platform and, these models will form the microgridwith the 13-Node distribution system. The other important aspect of the microgrid is the microgridcontroller which should be developed and tested under different scenarios.All the chapters in this thesis starts with an brief introduction to the contents presented in thechapter and end with a summary and next steps towards the objective. Chapter 2 contains theinformation regarding modeling of the 13-Node distribution system. It explains all the modelingaspects of the 13-Node distribution system in Simulink. Various types of load and line configurations are explained in detail. Chapter 3 explains the real-time modeling of the 13-Node distribution system and explains the real-time simulator OPAL-RTTM . This chapter also deals with theconcept of multi-core simulation and why it is necessary to take this approach. Finally the modelis validated in real-time with an acceptable accuracy. Chapter 4 deals with the development ofdistributed resources such as diesel generator, photovoltaic system, and battery inverter system. These models are validated in real-time simulation and these forms the basis of the microgrid.The microgrid formed is simulated in real-time with all the distributed resources developed in thischapter. Chapter 5 models a microgrid controller which has the capability to control the batterysystem to compensate for step changes in loads and variation in photovoltaic system. The developed microgrid is simulated for various scenarios of the microgrid operation and then simulationresults are discussed along with the waveforms. Finally, a conclusion for the thesis and future workare suggested and discussed in chapter 6. Chapter 2MODELING THE IEEE 13-NODEDISTRIBUTION FEEDER2.1 Many computer programs are available for analysis of distribution feeders. A paper [34] was published to make the data for radial distribution feeder available to program developers and to verifythe correctness of the computer simulation results. Many computer programs are available for theanalysis of unbalanced three phase radial distribution feeder. There was a need to benchmark thetest feeder to obtain comparable results from various computer simulators. The complete data andsolutions for all of the test feeders can be downloaded from the Internet at [35] While the literatureon the distribution system is abundant [32]-[35], the use of real-time simulation for these simulation is lacking.The primary objective of this work includes selection, validation, and modeling of a distributionfeeder in SimulinkTM and RT-LABTM , followed by validating the model in a real-time simulatorOPAL-RTTM . The model will be used in PHIL simulations for forming a microgrid with photovoltaic and battery inverter system as the actual power hardware in the loop. The reason for choosing the IEEE 13-Node test feeder model was the interesting characteristicsexhibited by this distribution feeder, which are listed as follows: 1. The 13-Node test feeder is relatively short (8200 ft) and highly loaded for a 4.16 kV feeder.The longest line is 2000 ft long.2. There is one substation voltage-regulator consisting of three single-phase units connected inwye. This regulator has capability to change secondary voltage in steps of 0.00625 per-unit(pu).3. There are overhead and underground lines with variety of phasing in this feeder.4. There are two shunt capacitor banks. One at node-675 with 200 kVAr each phase and theother one at node-611 with 100 kVAr.5. There is one in-line transformer between node-633 to node-634. The transformer is 500kVA4.16 kV/ 480 V in wye-ground configuration.6. There are unbalanced spot and distributed loads, which are of Constant PQ, I, or Z typeconnected in delta or wye configuration.One of the other reasons for selecting this test feeder is that, it is verifiable from the availabletest data and it is relatively smaller with 13 electrical nodes making it conducive to simulate on areasonably sized real-time platform. Figure 2.1 shows single line diagram for 13-Node distributionmodel. using a three phase dynamic load which is available under the SimPowerSystemTM library. Theload real power P (W) and reactive power Q (vars) is given by equation (2.1) and (2.2) [37]: P = P0 VV0 n p (2.1) Q = Q0 n q (2.2) where,P0 and Q0 are the initial active and reactive power at the initial Voltage V0np and nq are constants controlling the nature of load i.e., constant PQ, I, or ZThe 13-Node distribution model requires unbalanced load models, which is modeled usinga single phase dynamic load block given in SimulinkTM example[38]. The delta connected loadrating for Phase-A, Phase-B and Phase-C refer to the ratings of the loads connected between phasesA-B, B-C and C-A respectively. If not specified, the neutral is grounded by default. By setting theparameter [np nq ] in SimulinkTM block as 0, 1, or 2 for constant PQ, I, and Z load, respectively,the desired load configuration can be obtained. Figure 2.2 shows the parameters in the SimulinkTMblock for load configuration according to the IEEE 13-Node test data. The parameters shown arefor the load at node-692, which is D-I configuration and the load is 170 kW and 151 kVAr at phaseC. This delta load is connected between phase C-A. If a single phase load is wye configuration thenit is connected between line to ground. Parameters [np nq ] are set as 1 for constant I type load andthe nominal voltage is set to 4.16 kV for phase-phase connection.DISTRIBUTION FEEDER LINE MODELThere were several options available for the line model such as ARTEMIS Distributed ParameterLine (DPL), PI-line model in SimulinkTM . ARTEMIS DPL can be used in the model as it has amajor advantage of decoupling both side of the line into 2 distinct state-space systems and to make Lengthmin = Ts3.3 106 (2.3) where, Lengthmin is the minimum line length in km and Ts is the model sample time or stepsize in seconds.Using this equation with Ts = 50 s the minimum length is calculated as 15 km for the DPL.For modeling that requires smaller lengths stub-line can be used, which allow time delay betweensubsystems to be at least 1 time step. In the 13-Node distribution model, the longest line length is2000 ft or 0.61 km thus this method cannot be used because of the low propagation time. 10 The IEEE 13-Node distribution system has unbalanced three-phase load and the line parametersare given in impedance and admittance matrix data. Hence, a PI-line model was developed inSimulinkTM with R-L in parallel with C/2 at each end using the RLC model branch [41] model inSimulinkTM as shown in Figure 2.3. This matrix element represents G+jB, where G was negligible.The line capacitance C is calculated as C = B.2f z = R + jL = R + jX y= 1+ jC = G + jBRsh Where R is the series resistance (/m), X is the series reactance (/m), G is the shunt conductance (S/m), and B is the shunt susceptance (S/m)The IEEE 13-Node distribution feeder test data provide data for impedance and admittancematrix [35]. A MATLABTM script written to parameterize the line data is given in appendix-A.2.The line model parameters R, L, and C are specified according to the type of line, the nomenclaturefor line parameters is: Parameter Type of line, usage is given in Appendix-A.1. For a line type601 [35] the resistance, inductance, and capacitance will be given by R 601, L 601 and C 601respectively.VOLTAGE REGULATORA step-voltage regulator consists of an autotransformer and a load tap changing mechanism. Astandard step regulator contains a reversing switch enabling 10% regulator usually in 32 steps,which amount to 5/8% change in voltage. A common type of step voltage regulator is type-B asgiven in [40].The SimulinkTM model for OLTC is available under SimulinkTM library is a phasor model. De- 11 0.232 = 0.00625pu 4. The transformer voltage level changed to 115 kV for the primary and 4.16 kV for the secondary. 12 The on-line tap changer (OLTC) transformer model given in the SimulinkTM library is notcapable of changing the tap of each phase individually [39]. Taps are changed according to themeasured pu voltage, assuming a balanced 3-phase system but, the 13-Node distribution systemis an unbalanced system. The control system for the SimulinkTM library model of the tap changerwas modified to change the tap of each phase of the 3-phase autotransformer individually by usingthe same tap changing control for each of the phases. Standard tap changing transformers havea reversing switch enabling 10% regulator range in 32 steps, resulting in 0.625% per step change[8]. These values are specified in the SimulinkTM model parameters. The pu transformer reactanceand resistance values are specified as referred to the second winding of the autotransformer. Theleakage reactance of the OLTC transformer was varied and the pu voltages were recorded which isshown in Figure 2.4. This is similar to what is expected, as obtained in [42].VERIFICATIONThe new model was verified by changing the tap of the OLTC transformer and recording the outputpu voltage. Figure 2.5 shows Phase-A, Phase-B and Phase-C changing pu values in 0.0625 pu stepsas the taps of OLTC are changing. This verifies that OLTC is woking correctly and changing stepsas per the desired control.TRANSFORMER, BREAKER, AND CAPACITOR BANK3-phase transformer and breaker models were obtained from SimPowerSystemsTM library in SimulinkTM .The values of voltages, nominal transformer power, and the pu reactances and resistances were obtained from the IEEE 13-Node test system data [35]. Capacitor banks were modeled using a seriesRLC load branch from the SimPowerSystemsTM library in SimulinkTM as shown in Figure 2.6.QL and P were set to zero as shown in Figure 2.7, where V is line-line voltage and Qc is thereactive power of the capacitor bank in VAr. The capacitors were connected in star or groundedconfiguration and the respective KVAr values were obtained from the IEEE 13-Node test system. 13 14Figure 2.4: Pu voltage variation with change in leakage reactance of the OLTC. Figure 2.5: OLTC transformer test with change in pu voltage with changes in steps. 15 16 Chapter 3REAL-TIME MODELING INOPAL-RTTM3.1 A discrete-time constant step duration simulation is assumed for discussion in this thesis. Discretetime simulation means that time moves forward in fixed step size, which is also known as fixedtime step simulation. Even though there are some methods and simulation such as variable stepsolver in SimulinkTM , which use variable step size for solving high frequency dynamics, they areunsuitable for real-time simulation [43].HARDWARE AND SOFTWARE REQUIREMENT FOR MODELING AND SIMULATIONThe hardware and software tools used for modeling and simulation in this thesis are given in Table3.1.Table 3.1: Hardware and Software requirementHardwareSoftwareTMOPAL-RT OP5600Host operating system: Windows7TM12 active cores at 3466 MHzTarget operating system:RedhatTM 2.6.29.6-opalrt-5i686 architectureRT LABTM v10.5.9MatlabTM v2011bSimulinkTM v7.8 3.1.1 A real-time simulation is based not only on the computational speed of the hardware, but alsoon the computer models complexity. A real-time simulator would produce results at within thesame amount as its real world physical model would. The time step is the time required not justfor computing the mathematical model, but it also includes the time to drive the input and output(I/O) ports. The idle time of the simulation is lost, i.e., if the computation is finished before theend of time step the remaining time is lost. Figure 3.1 explains this phenomenon in the real-timesimulation.In Figure 3.1, during the time step Ts = 1 and Ts = 2 the computation and I/O is completedwithin the time step and idle time is lost, the next step is not started until Ts = 3. An overruns occurin a simulation if the cycle of receiving the data, performing computation and sending output isnot performed with in the given time step, which is shown in Figure 3.1 at Ts = 3. At time step Ts= 3, the cycle is not completed, this is known as overrun. Because of the overrun in the model themodel is frozen for Ts = 4, i.e., no computation is done for the next time step. The time step Ts = 4is lost and no computation is performed because the last time step Ts = 3 had overrun. Overruns inthe simulation must be avoided so that model runs in real time and no data is lost due to overruns. RT-LABTM MODELING RT-LABTM is a real-time simulation platform for high-fidelity plant simulation, control systemprototyping, and embedded data acquisition and control. The unique distributed processing capability of RT-LABTM allows to quickly convert the SimulinkTM models to high-speed, real-timesimulations, over one or more target PC processors.RT-LABTM enables models in SimulinkTM models to interact with real world enabling thehardware-in-loop (HIL) engineering simulators. RT-LABTM links the code generated with SimulinkTM 18 19Figure 3.1: RT-LABTM timing mechanism for time step. coder to optimized runtime libraries. This helps to achieve a jitter-free fixed step [43]. RT-LABTMautomates the process of preparation, downloading and running of parallel models. There are twoparts of OPAL-RT simulator the command station and the target node.Command station is a WindowsTM PC with RT-LABTM software servers as the developmentsystem and user interface. It allows the user to create and prepare the model for distributed realtime execution and interaction during run-time. Target node is a cluster of PCs where the simulator runs. For a real-time simulation it requires aReal-Time Operating System (RTOS) such as RedHatTM Real-Time Linux. The command stationcommunicates with the target using Ethernet. For HIL simulations, the target communicates withthe real world through I/O boards. Each target node can contain one or two processors and eachprocessor can contain one, two, four, or six processor cores.RT-Model consists of three type of subsystem in the top most level as shown in Figure 3.2 anddescribed in Table 3.2.A SimulinkTM model can be converted into RT-LABTM compatible model with following changes:1. Place the ARTEMIS and the POWERGUI blocks at the top most level of the model as shownin Figure 3.2.2. Place all the scopes in the console subsystem and all the other computational model in theMaster subsystem.3. Use the opcomm block for communication between the subsystems as shown in Figure 3.2.All input ports must go through this block before it is connected inside the subsystem.Some of the issues, challenges, and solution encountered while modeling in RT-LABTM are givenin appendix-A.6. 20 21 Figure 3.3: CPU usage varying with the time step of the 13-Node distribution model. 22 Reference [44] gives the method for splitting the model into multiple cores and to reduce theCPU usage in a model. Similar method was used to split the the 13-Node distribution model into2 cores or computer node as shown in Figure 3.4. The OLTC is placed in master subsystem andall the 13-Nodes are placed in slave subsystem. Output from the OLTC was connected to thecurrent controlled source, which received current signal from the input of the 13-Nodes in slavesubsystem. Voltage output form the OLTC in master subsystem, which was input to the controlledvoltage source in the slave subsystem was measured. This setup is shown in Figure 3.5 Figure 3.4: Splitting the IEEE 13-Node distribution model in master and slave subsystem. VALIDATIONThe current and voltage waveforms were observed as sine-wave without any disturbance. Thepu voltages from the model were recorded and compared with 13-Node test feeder data. After acertain period, the system achieves steady state i.e, when there is no tap changes of the OLTC. The 23 errorrmsvoltage = Vsimulation VrefVref (3.1) where, errorrmsvoltage is the percentage error, Vsimulation is the rms voltage at the node fromthe real-time simulation, and Vref is the rms pu voltage at the node from the IEEE 13-Node testfeeder data. Figure 3.6 shows the absolute pu voltage percentage error as compared to the IEEE13-Node test feeder data, the maximum error is less than 2.5%. 24 Figure 3.6: Pu absolute voltage percentage error as compared to the 13-Node feeder test data. 3.2.1 An analog signal is required to drive a real world power hardware. This is obtained from OPALRTTM simulation via the use of analog input-output (I/O) board in OPAL-RTTM and I/O blocks inRT-LAB. The analog I/O blocks are available in the SimulinkTM library under the RT-LABTM I/Osection. I/O boards on OPAL-RTTM can be configured as follows:1. Find the right type of I/O port and Board-ID form the RT-LABTM from Get I/O info.2. Choose appropriate I/O block and connect the multiplexed signal to analog I/O block. Selectthe voltage range to 5 V, 10V, or 16V as required. If the signal input to the I/O block isgreater than specified range the signal will be clipped in the output.3. Place the OpCtrl block in the model and specify the appropriate Board ID, which can beobtained form the I/O info from RT-LABTM . 25 4. Place the bitstream file under the current RT-LABTM project files.PROTECTIONThe output voltage from the 13-Node distribution feeder is used to drive the grid simulator. A threewinding transformer is connected to the secondary side of distribution transformer 4.16 kV/480 Vat node-633. The three winding transformer converts the 480 V line-line to 2-phase 3 wire systemwith 240 V line-line and 120 line-neutral. Figure 3.7 explains the connection of this transformerto the 13-Node distribution system with the I/O and the protection blocks.The magnitude and frequency of the output voltage is bounded by certain limits, which dependson the input of the grid simulator such as over voltage and over/under frequency. The main concernis not to drive grid simulator with a DC voltage from the 13-Node distribution model.Phase locked loop block can be used to measure frequency accurately, but it is a more complexblock and requires more computation. Hence, a frequency measurement block was modeled inSimulinkTM . Frequency of a signal can be calculated by counting the number of simulation timesteps, 50 s, between any two zero crossings. The time period can be calculated by multiplyingthe number of time steps (N) with the fixed step size (Ts). The frequency is given by (3.2). f= 1Ts N (3.2) The input voltage is passed through a low pass filter to remove any high high-frequency component in the signal, which might give false zero-crossing trigger. The filtered signal is fed to acustom MATLABTM function, which detects a zero crossing by observing the change in the signof the signal.The MATLABTM code for zero crossing is given in appendix-A.4. This function outputs avalue 1 for every change in sign from negative to positive. This is fed to the counter which countsthe number of time steps between these ones, from the zero crossing detector, the code for the 26 counter is given in appendix-A.4. This values is held and multiplied by the time step to get theperiod of the wave, reciprocal of this value gives the frequency.POWER HARDWARE IN THE LOOP (PHIL)The voltage from the 13-Node distribution model is passed through a gain block to make the voltage level compatible with the grid simulator, which acts as an amplifier. The analog I/O boardgives the measured voltages from OPAL-RTTM simulation in the form of analog voltage waveforms. The output current measured from the grid simulator is used to drive the controlled currentsource in the model. This feedback current and the setup of the microgrid is shown in Figure 3.8.Appendix-A.5 shows the various current level input to the controlled current source with variationin active power and reactive power injected into the distribution system. The pu values of the 480V/240 V transformer are given with change in current input to the controlled current source. 27 28Figure 3.7: Frequency and voltage protection and I/O blocks. 29Figure 3.8: Power Hardware in the Loop setup and feedback from the Hardware. Chapter 4DISTRIBUTED ENERGY RESOURCES4.1 One of the aim of microgrid is combining all the benefits of the renewable generation technologieswith the combined heat and power systems. Many countries are coming up with plans to reducethe carbon emissions to meet the global carbon emission commitment. Non-conventional generation system employed in the dispersed or distributed generation systems are known as DER ormicrosources. Choice of DER commonly depends on the availability of the fuel and topology ofthe region. This chapter describes the modeling process of the two DERs: (i) Diesel generator,and (ii) Solar Photovoltaic array. Though storage devices are not typical DERs, but for maximumbenefit from the microgrid the application of storage device is mandatory with demand side management. For this reason a battery inverter system is also modeled which will be the energy storagedevice in the microgrid. Solar PV generates electricity from inexhaustible solar energy. Some of the major advantages ofPV system, includes (i) minimum environmental impact (ii) long functional life of over 30 yearsand (iii) reliable and sustainable nature of fuel. A PV plant is the combination of a PV inverterand PV arrays. MPPT (Maximum Power Point Tracking) which sets the voltage at such a pointso that maximum power can be extracted form the PV array is used in the model. The standardSimulinkTM average inverter model was used in this simulation with a MPPT algorithm. A single diode PV cell model was developed for real-time simulation, with the capability to vary outputdepending on ambient temperature, irradiance, and number of cell in series and parallel.The PV array model available in SimulinkTM has algebraic loops that the SimulinkTM easilysolves. However, these models were incompatible for real-time simulation in OPAL-RTTM . This isbecause the C code necessary for executing models in OPAL-RTTM cannot be created for any modelhaving an algebraic loop. Using a zero-order hold or a memory block can eliminate algebraicloops. A PV cell model was developed with the parameters from SunPowerTM SPR-305-WHTavailable under the SimulinkTM model library. Table 4.1 presents the parameters which are codedinto MATLABTM script for the PV cell, these parameters can be changed easily by changing theparameters in the script. The Matlab script is used by the RT-LABTM model for the real-timesimulation of the PV cell model.Table 4.1: SunPower SPRTM -305-WHT ParametersSunPowerTM SPR-305-WHTParameterValueNo. of series modules in string5No. of parallel modules in string66Voc64.2Isc5.96Vmp54.7Imp5.58Rs0.037998Rp993.51Isat1.1753e-8Iph5.9602Qd1.3 A single diode model is developed based on (4.1) and (4.2). The circuit diagram for a singlediode PV cell is shown in Figure 4.1, where I0 is the saturation current, Ipv is the current generateddirectly proportional to solar irradiance, Vt is the thermal voltage of the array with Ns and Np cellsconnected in series and parallel, q is the electron charge, k is the Boltzmann constant, T (in Kelvin) 31 I = Ipv I0 exp V + I.RsaVt Vt = V + I.RsRp Ns KTq (4.1) (4.2) 32 33 in the VSC controller of the inverter and the transformer coupling the PV array to the grid werechanged according to the voltage level (4.16 kV) of the IEEE 13-Node distribution system. 34 Figure 4.4: Variation of PV inverter output in grid connected mode with varying irradiance. The synchronous generator model, available in the SimPowerSystems toolbox library of SimulinkTM ,implements a 3-phase synchronous machine modeled in the dq reference frame [48]. This modeltakes the pu reference speed and the pu voltage reference as inputs. The governor and the exciteror the automatic voltage regulator (AVR) is responsible for controlling the current to the rotor fieldwindings, which directly affects the terminal voltage of the generator [49].The IEEE type 1 synchronous voltage regulator combined with an exciter is the excitationsystem implemented in this model. This model is developed by MathWorks which is based on [49].One of the important changes to the model is the addition of a parasitic resistive load connectedto the machine terminals to avoid numerical oscillations. This load is proportional to the timestep of the simulation and the rating of the generator; for larger time steps, the parasitic load isbigger. The parasitic load is 2.5% of the nominal power rating for a 25 s simulation time stepof a 60 Hz system. In our case, for a 50 s time step of a 60 Hz system with 3 MVA nominal 35 power, the parasitic load is 150 kW. Islanding a part of 13-Node distribution model with 3.577MW load required a 3 MVA diesel generator. Synchronization of the diesel generator to the grid isdone by matching the voltage, phase, and frequency. The phase sequence is assumed known. TheSimulinkTM logical block and PLL blocks are used to measure the frequency and the phase of theoncoming generator and the grid. The synchronization parameters suggested by the IEEE 1547 std.are used [50], [1]. The synchronization switch/breaker is closed when the difference in frequency,f Hz, voltage, V %, and phase , are less than 0.1 Hz, 3 %, and 10 , respectively. The controllogic for synchronizing diesel generator to the grid is shown in Figure 4.5. The synchronizationwas performed with 13-Node distribution model with recording the frequency and the power of thesystem. It is observed that the phases lock into synchronism once the breaker is closed. The voltageoutput from generator is set at 1 pu and the frequency is set at 1.0001 for synchronization.Thesynchronization is smooth as seen from Figure 4.6, the voltage and current waveforms are smoothas expected. 36 37Figure 4.6: Voltage and current during diesel generator synchronization with 13-Node distribution model. The energy storage system model comprises of a battery model and the inverter model. The standard battery model available under the SimulinkTM library [52] was used. Some of the changes tothe battery model includes (i) Nominal battery voltage (V), (ii) State of charge (SOC) in percentage, (iii) Battery type, and (iv) Rated capacity of the Battery in ampere-hour (Ah).Lead acid batteries are relatively economic but are prone to frequent maintenance and requirerelatively larger storage space. The life of these batteries tend to decrease if discharged below 30%.They are commonly installed in renewable and distributed power systems. The largest lead acidbattery system installed is a 40 MWh system in California. The inverter model is similar to theinverter model in the PV array setup, however with some changes implemented for charging thebattery. The battery inverter operates in the constant power mode, i.e., charging and dischargingoccur at a constant power specified in the VSC control block. The charging mode of the inverteris triggered by reversing the reference direct current (Id ref) in the current regulator block of theVSC control block. This is shown in Figure 4.7.The battery model is available under the SimulinkTM library is configured as lead-acid typebattery at 500 V DC nominal voltage with 1000 Ah rated capacity. The internal resistance of thebattery is calculated automatically by the SimulinkTM battery block based on the number of paralleland series combinations of the individual cell model, which is based on the nominal voltage andcapacity of the battery model.This battery model does not include self-discharge characteristics; hence, the discharge andcharge characteristics of the battery are assumed same. The battery inverter system is connectedto the IEEE 13-Node distribution system and operated in the charging and discharging mode inOPAL-RTTM as shown in Figure 4.8. As expected the battery discharges and charges at a constantpower of 500 kW with the state of charge (SOC) of the battery changing as expected for chargingand discharging periods. 38 39 Chapter 5MICROGRID5.1 U.S Department of Energy defines microgrid as: A group of interconnected loads and distributedenergy resources (DER) with clearly defined electrical boundaries that acts as a single controllableentity with respect to the grid (and can) connect and disconnect from the grid to enable it to operatein both grid-connected or island mode.In transitioning microgrid to an islanded mode, it has to include a form of on-site power generation and/or energy storage. Without either, or both, a microgrid could not function properly.The distributed generation (DG) and energy storage become the foundation for the localized islanded microgrid. One of the other important components is the microgrid controller. Some of theimportant advantages of microgrids include:1. Supporting the existing grid infrastructure by adding resilience to the grid infrastructure,compensating for local variable renewable sources, and supplying ancillary services such asreactive support and voltage regulation to a part of an Electric Power System (EPS).2. Meeting uninterruptible power supply needs for critical loads, maintaining power qualityand reliability at the local level, and promoting demand-side management.3. Enabling grid modernization and interoperability of multiple smart-grid interconnectionsand technologies.4. Integration of distributed and renewable energy resources to reduce carbon emissions, peak 41 microgrid switch present between the nodes 632-671. Figure 5.2 shows power, frequency, and puvoltage at the instant of diesel generator synchronization to the 13-Node distribution system.After synchronizing diesel generator to the 13-Node distribution system the microgrid is islanded. The active power from diesel generator raises to supply the microgrid power when microgrid is islanded. Figure 5.3 shows the pu voltage, frequency and power at the instant of islandingthe microgrid and Figure 5.4 shows the values when the microgrid reaches a steady state values.The islanded microgrid has voltage unbalance, requiring a voltage control strategy. One of themethods is to use capacitor banks to boost the voltage in the islanded microgrid because capacitorbank is the most economical solution for this type of problem. A microgrid controller model cancontrol the switching of the Diesel generator, the PV and the battery. The microgrid controllershould also control the switching of the capacitor bank when the microgrid is islanded for voltagecontrol.Microgrid requires a control to maintain system security and seamless transfer from one modeof operation to the other mode while maintaining the system and regulatory constraints. This isdone via the use of microgrid controller. There can be a central controller and a microsourcescontroller. The central controller controls the whole microgrid operation and the microsourcecontroller controls the local DER operation. The power dispatch set points are set by the centralcontroller for import and export of power based on some agreed contracts and economies.A simple microgrid controller is developed which has has two basic functions (i) Protectionand Co-ordination and (ii) Energy management. The energy management module provides the setpoint for the active power to the battery module according to the operational need of the microgrid.It is assumed that all power generated form the PV plant is utilized in the microgrid and the batteryis managed to compensate for the change in load and PV plant output. The Protection and coordination is designed for an under voltage and an over voltage protection mechanism. 42 43Figure 5.1: Microgrid with DERs and microgrid switch connected to 13-Node distribution model. 44Figure 5.2: Diesel generator synchronization to 13-Node distribution. 45Figure 5.3: Power, pu voltage, and frequency variation when microgrid is islanded. 46Figure 5.4: Power, pu voltage, and frequency when microgrid reaches a steady state. The IEEE STD 1547.4-2011 has discussion on microgrid synchronization. The recommendedvoltage limits for resynchronization should be with in the Range B of the ANSI/NEMA C84.12006, Table-1 and frequency range should be between 59.3 Hz and 60.5 Hz. The voltage, frequency and should be with in acceptable limits specified in the IEEE STD 1547-2003. Thereshould be a 5 minute delay between the reconnection after the steady state voltages and frequencyare achieved.According to [53], which used the IEEE STD 1547-2003 as the basis of its interconnectionguidelines, the steady operating range for the distribution system above 1 kV is 6% and nominalsystem frequency range between 0.2 Hz. The voltage unbalance should be below 3% undernormal operating conditions. Equation (5.1) gives the voltage unbalance equation. VmaxVavg (5.1) where Vavg is the average of the three phases and Vmax is the maximum deviation fromaverage phase voltage. The pu voltages of all the nodes in the microgrid are shown in Table 5.1.The maximum % voltage unbalance calculated at all the three phase nodes is 6.3% which is outsidethe range of the 3% deviation, which is not acceptable. It can be observed form Table 5.1 thatvoltage needs to raised to satisfy the unbalance requirement of less than 3% and also to raise thevoltages to facilitate resynchronization of microgrid to the grid. The voltages before capacitor bankswitching are not in the range of 3%. Table 5.1 gives the values of the voltages after capacitorbank switching. After the switching of capacitor bank the maximum voltage unbalance is 1.6% onthe phase A of node-680. The unbalance in the microgrid is well with the unbalance range of 4%and steady state pu voltages are with in the the range of 6% after the capacitor bank is switched inthe microgrid, a time delay of 1s is introduced before the capacitor bank switching to minimize anytransients. The voltages are found to be in acceptable range according to the standards discussesabove. 47 Node Phase Node-680Node-680Node-680Node-684Node-684Node-611Node-652Node-692Node-692Node-692Node-675Node-675Node-675 ABCACCAABCABC puvoltagewith Capacitorbank1.0458270.9704570.9382140.9607880.9587240.9329190.9399201.0403540.9627690.9327881.0426250.9608610.982394 puvoltagewith Capacitorbank1.0093470.9805050.9909860.9830910.9823740.9803690.977530.9849430.9816510.9843380.9784290.983830.982394 Passive synchronization requires sensing of voltage, frequency, and phase angle on both the microgrid and the grid side. Synchrocheck relay is used for this purpose which measures the differencein voltage, frequency, and phase angle on both sides. Passive synchronization methods may takelonger than the active synchronization, where a control mechanism matches the voltage, frequency,and the phase angle of the microgrid to the grid. In order to guarantee the synchronization and possible reconnection a small frequency error 0.001 pu is intentionally introduced through the dieselgenerator speed set points. Figure 5.5 and Figure 5.6 shows resynchronization of the microgrid tothe grid. The maximum frequency and the voltage deviation is well within the limit. Also, there isno overshoot in the power as there is less than 20% of the power imported from the grid. In casethere is an overshoot in the power, the DER might reach its power limit and cause tripping of theequipment. 48 PROTECTION Reference [53] utilizes IEEE 1547 as the basis for interconnection and mentions that the undervoltage/over-voltage protection must have the ability to detect phase to ground voltage that isoutside the normal operating limits of 90% < V < 106% of the nominal and must trip within thetrip time mentioned in the Table 5.2. Two scenarios for under-voltage were simulated, in the firstcase there is a 3-phase line to ground fault with 0.001 fault resistance at the node-632. Thiscauses the voltage to drop below 50% of the nominal. This causes the the trip signal to the breakerand the microgrid is islanded. Figure 5.7 shows the voltage and frequency of the microgrid and thegrid before. 49 50Figure 5.6: pu voltages and frequency during synchronization of microgrid to the grid. In the second case there is a 3 phase line to ground fault on the node-632 with 0.4 faultresistance. This cause the voltage to drop at 80% of the nominal voltage. There is no intentionaldelay introduced, but there is a delay between the tripping of the breaker and the fault because ittakes at least one cycle to calculate the rms values of the signals. In figure 5.8, at time 640.075s, the 3-phase line to ground fault occurs on the main grid then at 640.086 s Microgrid switch isopened, islanding the microgrid. At time 640.18 s, the microgrid voltages becomes stable. It canbe observed that the steady voltages are with in the specified range of 6% and the frequency iswith in 59.3 Hz and 60.5 Hz.Table 5.2: under-voltage/over-voltage protection.PU voltageV 50%50% < V <90%90% < V <106%106% < V <120%V 120 % Trip timeInstantaneous120 cyclesNormal Operation30 cyclesInstantaneous ENERGY MANAGEMENT The battery power set points are set according to the PV and load variation in the microgrid. Itis assumed that diesel generator set points are already set by the management agent. The loadwithin microgrid may change at any time, in order to keep the power balance between load andgeneration, the controller should be able to handle any load change within its capability. 51 52Figure 5.7: Pu voltage and frequency during 3-phase fault on the grid (V < 50%). 53Figure 5.8: Pu voltage and frequency during 3-phase fault on the grid (V < 80%). It is expected that the battery should take the responsibility to maintain power balance withinthe microgrid. This includes the the variation in the power of the PV array from the nominal andany load variation. Figure 5.9 shows the variation of the load in the microgrid and the variationof the irradiance causing the change in the power output of the battery. It can be seen that batterycan quickly track the variation in the load and performs as expected. The battery power output setpoints are set according to the equation (5.2). (5.2) where, Pset is the battery setpoint in kW, Pload is the change in load in the microgrid, andPP V is the change in the PV output from the nominal values. 54 Figure 5.9: Power management by the microgrid controller with variation in microgrid load andPV array. 55 Chapter 6Conclusion and Future Work6.1 CONCLUSION A real-time model of the IEEE 13-Node distribution feeder network was developed, simulated, andquantified on a real-time platform OPAL-RTTM using SimulinkTM library. Multi-core simulationfor 13-Node distribution model was performed and verified with relative absolute pu voltage errorless than 2.5%. Modeling and simulation challenges of the real-time model were addressed. Adiesel generator, a PV array model, and a battery inverter system models were developed andsimulated on a real-time platform. The diesel generator synchronization control was developedand tested in OPAL-RTTM . DERs were connected to the 13-Node distribution model forming amicrogrid. A simple microgrid controller with two basic function of protection-coordination andenergy management was developed. The microgrid was islanded from the 13-Node distributionsystem and the steady state state values of the microgrid were observed as expected from thecalculations of load in the microgrid. Two common scenarios of intentional and unintentionalislanding were successfully performed. Appendix-A.7 lists the SimulinkTM and RT-LABTM modeldeveloped and included with this report. FUTURE WORK The state space nodal (SSN) type OLTC developed by OPAL-RTTM can eliminate any overruns inthe IEEE 13-Node distribution model and the distribution model will not require splitting of themodel into two cores. A simple microgrid controller to control the DERs, islanding mode, and grid-tied mode of microgrid will be the next step in modeling, which will complete the microgridmodel. The DERs developed in the OPAL-RTTM can be replaced with the actual hardware andthe results of the power hardware-in-the-loop (PHIL) tests can be compared with the simulationresults. Further, a variety of what-if scenarios may be studied in this test setup to aid in fullcharacterization of such emerging microgrid systems. This model can then be used to implementdispatch algorithms for the microgrid to operate in grid-tied and islanded modes, and eventuallyperforming HIL simulations. 57 References[1] H. E. Brown, S. Suryanarayanan, G. T. Heydt, Some characteristics of emerging distributionsystems considering the Smart Grid Initiative, The Electricity Journal, v. 23, no. 5, June2010, pp. 64-75.[2] S. Suryanarayanan, F. Mancilla-David, J. Mitra, Y. Li, Achieving the Smart Grid throughcustomer-driven microgrids supported by energy storage, in Proc. 2010 IEEE Intl Conference on Industrial Technologies (ICIT), Viadel Mar, Chile, pp. 884-890.[3] S. Suryanarayanan, M. Steurer, S. Woodruff, R. Meeker, Research perspectives on highfidelity modeling, simulation, and hardware-in-the-loop for electric grid infrastructure hardening, in Proc. 2007 IEEE Power Engineering Society (PES) General Meeting, 4 pp., Jul2007.[4] H. W. Dommel, Digital Computer Solution of Electromagnetic Transients in Single-andMultiphase Networks, IEEE Transactions on Power Apparatus and Systems, vol. PAS-88,pp. 388-399, Apr. 1969.[5] C. Dufour, S. Abourida, J. Blanger, InfiniBand-Based Real- Time Simulation of HVDC,STATCOM and SVC Devices with Custom-Off-The-Shelf PCs and FPGAs, in Proc. 2006IEEE International Symposium on Industrial Electronics, pp. 2025- 2029.[6] Y. Liu, et al., Controller hardware-in-the-loop validation for a 10 MVA ETO-based STATCOM for wind farm application, in IEEE Energy Conversion Congress and Exposition(ECCE09), San-Jos, CA, USA, pp. 1398-1403.[7] J.N. Paquin, J. Moyen, G. Dumur, V. Lapointe, Real-Time and Off-Line Simulation of aDetailed Wind Farm Model Connected to a Multi-Bus Network, IEEE Canada ElectricalPower Conference 2007, Montreal, Oct. 25-26, 2007. [8] J.N. Paquin, C. Dufour, and J. Blanger, A Hardware-In-the- Loop Simulation Platform forPrototyping and Testing of Wind Generator Controllers in 2008 CIGRE Conference onPower Systems, Winnipeg, Canada. October 19-21, 2008.[9] V.K Sood, D, Fischer, J.M Eklund, T. Brown, Developing a communication infrastructurefor the Smart Grid, in Electrical Power & Energy Conference (EPEC), 2009 IEEE, pp.1-7,22-23 Oct. 2009.[10] Nikos Hatziargyriou, Hiroshii Asano, Reza Iravani, Chris Marnay, Microgrids IEEE Powerand Energy Magazine, 5 (4) (2007), pp. 78-94.[11] H. Nikkhajoei, R. H. Lasseter Distributed generation interface to the CERTS microgrid IEEETransaction on Power Delivery, 24 (3) (July 2009), pp. 1598-1608.[12] S. Chakraborty, M. D. Weiss, and M. G. Simoes, Distributed intelligent energy managementsystem for a single-phase high-frequency AC microgrid, IEEE Trans. Ind. Electron., vol. 54,no. 1, pp. 97-109, Feb. 2007.[13] A. G. Tsikalakis and N. D. Hatziargyriou, Centralized control for optimizing microgridsoperation, IEEE Trans. Energy Convers., vol. 23, no. 1, pp. 241-248, Mar. 2008.[14] E. Barklund, N. Pogaku, M. Prodanovic, C. H. Aramburo, and T. C. Green, Energy management in autonomous microgrid using stabilityconstrained droop control of inverters, IEEETrans. Power Electron., vol. 23, no. 5, pp. 2346-2352, Sep. 2008.[15] J. Stevens, H. Vollkommer, D. Klapp, CERTS microgrid system tests, in Proc. IEEE PowerEng. Soc. Gen. Meet. 2007, Jun., pp. 1-4.[16] D. K. Nichols, J. Stevens, and R. H. Lasseter, Validation of the CERTS microgrid conceptthe CEC/CERTS microgrid testbed, in Proc. IEEE Power Eng. Soc. Gen. Meet. 2006, Jun.,pp. 18-22. 59 60 61 [33] T. H. Chen, M. S. Chen, K. J. Hwang, P. Kotas, E. A. Chebli, Distribution system power flowanalysis-a rigid approach, IEEE Transactions on Power Delivery, vol. 6, no. 3, Jul 1991.[34] IEEE Distribution Planning Working Group Report, Radial distribution test feeders, IEEETransactioins on Power Systems, August 1991, Volume 6, Number 3, pp 975-985.[35] IEEE Power & Energy Society (PES). Distribution test feeders. [Online] {Available}(Accessed: Sep 18, 2015).[36] MathWorks.SimPowerSystem.[Online] {Available}[37] MathWorks. Three-Phase Dynamic Load. [online]{Available}: Single Load [online] {Available}: /physmod/sps/examples/single-phase-dynamic-load- block.html[39] MathWorks. OLTC Regulating Transformer (Phasor Model). [on-- system-modeling-and-analysis-in-matlab-and-Simulink-81978.html[42] M.Panwar, S. Suryanarayanan, S. Chakraborty, Steady-state modeling and simulation of adistribution feeder with distributed energy resources in a real-time digital simulation envi- 62 ronment, in North American Power Symposium (NAPS), 2014, vol., no., pp.1-6, 7-9 Sept.2014[43] J. Blanger, J. N. Paquin and P. Venne The What, Where and Why of Real-Time Simulation,in IEEE PES General Meeting, Minneapolis, USA (July 25-29, 2010)[44] Teninge, A.; Besanger, Y.; Colas, F.; Fakham, H.; Guillaud, X.,Real-time simulation of amedium scale distribution network: Decoupling method for multi-CPU computation, Complexity in Engineering (COMPENG), 2012 , vol., no., pp.1,6, 11-13 June 2012[45] MathWorks. Average Model of 100-kW Grid-Connected PV Array [Online].{Available}:[46]).[47] B. Bahrani.; A. Karimi; B. Rey; A. Rufer, Decoupled dq-Current Control of Grid-TiedVoltage Source Converters Using Nonparametric Models, inIndustrial Electronics, IEEETransactions on, vol.60, no.4, pp.1356-1366, April 2013[48] K.E.,Yeager; J.R.,Willis, Modeling of emergency diesel generators in an 800 megawatt nuclear power plant, in Energy Conversion, IEEE Transactions on , vol.8, no.3, pp.433-441,Sep 1993[49] MathWorks. Emergency Diesel-Generator and Asynchronous Motor [online]{Available} 63 [50] IEEE: EEE 1547-2003: IEEE Standard for Interconnecting DistributedResources with Electric Power Systems. 2003.[51] S. Chowdhury, S.P. Chowdhury, P. Crossley Microgrids and active distribution networks, IETrenewable energy series 6 IET, London (2009)[52] MathWorks. Battery. Implement generic battery mode. line]{Available}: /help/physmod/sps/powersys/ref/battery.html[53] Technical Requirement for connecting distributed resources to Manitabo Hydro System,DRG 2003, Rev2, MBHydro Electric Board; 2010 64 Appendix ASIMULATION RESULTS, MATLABTMCODES, AND FILESA.1 Version 7.13 (R2011b) Simulink Version 7.8 ARTEMIS Blockset Version 7.0.1.736 Version 5.1 Version 4.1 Version 9.2 Version 3.2 Version 8.1 Version 3.0 Embedded Coder Version 6.1 Fixed-Point Toolbox Version 3.4 Version 2.2.14 Version 4.2 Version 7.3 MATLAB Builder EX Version 2.1 MATLAB Coder MATLAB Compiler Version 4.16 Version 3.11 Mapping Toolbox Version 4.0 Version 7.0.2 Optimization Toolbox Version 5.2 Version 1.0.19 RT-EVENTS Blockset Version 4.0.0.433 (R2011B.x) RT-LAB RT-XSG Version v2.3.1.135sgUnsupported(Rx.x) Version 6.16 SimBiology SimDriveline SimElectronics Version 2.0 SimEvents SimMechanics Version 3.2.3 SimPowerSystems Version 5.5 Simscape Version 3.6 Simulink Coder Spreadsheet Link EX Version 3.1.4 Stateflow Statistics Toolbox Version 7.6 Version 5.7 Version 7.4.3 Wavelet Toolbox Version 4.8 66 Ts=50e-6;Rm=1e3;Rm1=1e6;%IEEE 13-Node test feeder impedances % miles/kmmi2km = 1.609344; % feet to kmft2km = 0.0003048; % microsiemens to Faradsms2F = 1/2/pi/60*1e-6; %configuration 601R_config_601= [0.3465 0.1560 0.3375 0.1580 0.1535 X_config_601 =1.01790.5017 0.15800.15350.3414 ]; 0.50171.0478 0.42360.3849 67 0.4236 0.3849 1.0348 ]; B_config_601=[6.2998 -1.9958 -1.2595 5.9597 -0.7417 5.6386]; R_601 = R_config_601/mi2km; L_601 = X_config_601/mi2km/2/pi/60; C_601 = B_config_601/mi2km*ms2F; %configuration 602 R_config_602=[0.7526 0.7475 0.7436 X_config_602= [1.1814 0.42360.42360.5017 0.50171.19830.3849 0.38491.2112]; B_config_602=[5.6990 -1.0817 -1.6905 68 5.1795 -1.605 -0.6588 5.4246]; R_602 = R_config_602/mi2km; L_602 = X_config_602/mi2km/2/pi/60; C_602 = B_config_602/mi2km*ms2F; %configuration 603 R_config_603=[1.3294 0.20660.2066 1.3238 X_config_603= [ 1.3471 0.4591 1.3569]; B_config_603=[4.7097 -0.8999 4.6658]; R_603 = R_config_603/mi2km; L_603 = X_config_603/mi2km/2/pi/60; C_603 = B_config_603/mi2km*ms2F; 69 %configuration 604R_config_604=[1.3238 0.2066 1.3294 X_config_604=[1.3569 1.3471]; B_config_604=[4.6658 4.7097]; R_604 = R_config_604/mi2km; L_604 = X_config_604/mi2km/2/pi/60; C_604 = B_config_604/mi2km*ms2F; %configuration 605R_config_605=[ 1.3292 ]; X_config_605 = [ 70 1.3475]; B_config_605=[4.5193]; R_605 = R_config_605/mi2km; L_605 = X_config_605/mi2km/2/pi/60; C_605 = B_config_605/mi2km*ms2F; %configuration 606 R_config_606=[0.7982 0.3192 0.28490.7891 0.2849 0.7982 ]; X_config_606=[0.4463 0.03280.0328-0.0143 -0.01430.40410.0328 0.03280.4463]; B_config_606=[96.88970 0.000096.8897 0.00000.0000 71 96.8897]; R_606 = R_config_606/mi2km; L_606 = X_config_606/mi2km/2/pi/60; C_606 = B_config_606/mi2km*ms2F; %configuration 607 R_config_607=[1.3425 X_config_607=[0.5124 B_config_607=[88.9912 R_607 = R_config_607/mi2km; L_607 = X_config_607/mi2km/2/pi/60; C_607 = B_config_607/mi2km*ms2F; 72 The following procedure explains how to increase the number of taps from 8(17 OLTC positions)to 10 (21 OLTC positions)If you want to decrease the number of taps, follow the reverse procedure(Change add to cut and skip step C3). B) Open the TransformerA block dialog box inside the Three-Phase OLTC Transformer(use Edit/Look under mask)and change the Number of Taps parameter from 7 to 9(adding two intermediate taps on upper left winding (winding 1)). 73 Portlabel Port number location --------- ----------- Tap10 right Tap9 ..... ...... ....... Tap 1 Tap 0 W2+Out 1314 rightleft E) Cut the OLTCB and OLTCC blocks and replace them with a duplicate of OLTCA block.Rename these two blocks OLTCB and OLTCC. 74 F) Cut the TransformerB and TransformerC blocks and replace them with aduplicate of TransformerA block. Rename these two blocks TransformerB and TransformerC. G) Make connections between OLTC to Transformer for phases B and C as for phase A. H)Ckeckthat the secondary windings of the three transformers are connected in Delta (D1) COUNTER function T = fcn(u,v)j = 0;if j==0T =0;j=1;endif v && u 75 A.5 Table A.1: Controlled current source in 13-Node distribution model (1000MVA transformer ) pu 240 (V) pu 120 (V) 1.0191.0181.0181.0181.021.025 1.0171.0171.0161.0161.0161.015pu 240 (V) 1.0171.0171.0161.0161.0161.015pu 120 (V) 1.0171.0211.0221.0231.0251.032 1.0171.0211.0221.0231.0261.034 Transformer : Pn =1000910220-6700122201001001.01820470-13370244492002001.01851310-33160610935005001.018103100-65450122120100010001.02315700 -187400367131299930001.0251121000 -50850012309401000810000Transformer :710190-6705121981001001.01720400-13370243912002001.01651170-33150609705005001.01682110-52590975088008001.016102800 -6537012182499910001.015207700 -12700024345119992000pu 120 (V)P (W)Q (Vars)S=currentcurrent (A)sqrt(P 2 + Q2 ) =S/V (A) 120V side1.017-101706742122021001001.021-3061020340367523003001.022-8123054980980878008001.023-151200 105100184140150015001.026-297800 218500369360300030001.034-577200 47060074473160026000 76 Table A.2: Controlled current source in 13-Node distribution model (500MVA transformer) 77 1.0211.0231.0281.0291.0351.0491.0621.0771.0831.086 1.0211.0231.0281.0311.0381.0541.071.0881.0951.095 Transformer :P (W)Q (Vars)S=currentsqrt(P 2 + Q2 ) =S/V (A)1226011612261100368403503684230098680937986848001485001411148507120024920010002492022001506300481050632340037705007322770535600110450009933104504780041183000112501183053900312520001191012520579529 current (A)120V side1003008001200200040006000800090009500 Singlewinding50150400600100020013000400145014751 currentin secondary (A)1003008001200200040026000800290029502 A.6 78 A.7 File nameCSU LDRD 20151006 IEEE 13node.mdlCSU LDRD CSU LDRD init PV.m IEEE config DescriptionThis model contains the OPAL-RTTM compatible IEEE 13-Node distribution system20151006 IEEE 13node sim.mdlThis model contains the SimulinkTM onlycompatible IEEE 13-Node distribution system20151006 analog IO protection.mdl Contains the 13-Node distribution systemwith split phase 240/120V transformer driving a analog I/O with protection block for frequency and voltage20151006 microgrid.mdlContains the 13-Node distribution system, PVarray model, diesel generator with synchronization switch-control and lead acid batterymodel.Used to load the PV data in the PV cell;change this data to change the characteristicsof the PV panel.line test.mContains the line parameters for the PI linemodel Note: See the Matlab/SimulinkTM and RT-LABTM versions for compatibility, given inappendix-A.1. 79
https://www.scribd.com/document/330780611/Distribution-Feeder
CC-MAIN-2019-43
en
refinedweb
I believe this error means I can't include a variable in a loop however I am struggling to see a way around.... the error is TypeError: range() integer end argument expected, got unicode. Try wring a program the will prompt for an number and print the correct times table (up to 12). def main(): pass choice = raw_input("Which times table would you like") print ("This is the", choice , "'s times table to 12") var1 = choice*12 + 1 for loopCounter in range (0,var1,choice): print(loopCounter) if __name__ == '__main__': main() The raw_input function gives you a string, not an integer. If you want it as an integer (such as if you want to multiply it by twelve or use it in that range call), you need something such as: choice = int(raw_input("Which times table would you like")) There are potential issues with this simplistic solution (e.g., what happens when what you enter is not a number), but this should be enough to get past your current problem.
https://codedump.io/share/1Y395FqH3cRP/1/why-do-i-get-this-typeerror
CC-MAIN-2017-34
en
refinedweb
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9a3pre) Gecko/20070305 Minefield/3.0a3pre Build Identifier: This ValidateNode() is invoked when submission is called and MDG computes node states, ie valid. If node has complex content, then it never validates as simple type handling treats as unknown schema type. This is fairly major, as it blocks submission even when content is valid. Reproducible: Always Created attachment 259797 [details] test case Created attachment 259798 [details] [diff] [review] patch I also fixed a couple misuses of namespace constants in validator. It was using XSD namespace and not XSI, so it was wrongly looking for xsd:type. checked into trunk for sspeiche checked into 1.8 branch on 2007-04-12 checked into 1.8.0 branch on 2007-04-16
https://bugzilla.mozilla.org/show_bug.cgi?id=375546
CC-MAIN-2017-34
en
refinedweb
I am trying to get a multidimensional array working, where the user string is filled in the cell. I have been searching for ways to update user values in the multidimensional array,a little nudge in the right direction would be great help def createMultiArray(self,usrstrng,Itmval): #creates a multidimensional array, where usrstrng=user input, Itmval=width ArrayMulti=[[" " for x in range(Itmval)]for x in range(Itmval)] # need to update user values, therefore accessing index to update values. for row in ArrayMulti: for index in range(len(row)): for Usrchr in usrstrng: row[index]= Usrchr print "This is updated array>>>",ArrayMulti funs This is updated array>>> [['s', 's', 's'], ['s', 's', 's'], ['s', 's', 's']] This is updated array>>> [['f', 'u', 'n'], ['s', ' ', ' '], [' ', ' ', ' ']] string.replace won't work, since it does not affect the original value. >>>>> test.replace("a", " ") 'h llo' >>> test 'hallo' Instead you need to access the list via the indexes: for row in ArrayMulti: for index in range(len(row)): row[index] = "a" If you provide a more precise question and add the output you want to achieve to the question, I can give you a more precise answer. I scrapped the previous solution since it wasn't what you wanted def UserMultiArray(usrstrng, Itmval): ArrayMulti=[[" " for x in range(Itmval)] for x in range(Itmval)] for index, char in enumerate(usrstrng): ArrayMulti[index//Itmval][index%Itmval] = char return ArrayMulti >>> stack.UserMultiArray("funs", 3) [['f', 'u', 'n'], ['s', ' ', ' '], [' ', ' ', ' ']] This little trick uses the whole-number-division: [0, 1 ,2 ,3 ,4] // 3 -> 0, 0, 0, 1, 1 and the modulo operator(): [0, 1 ,2 ,3 ,4] % 3 -> 0, 1, 2, 0, 1
https://codedump.io/share/quXvA0eP21p/1/how-to-replace-values-in-multidimensional-array
CC-MAIN-2017-34
en
refinedweb
Api.AI This component is designed to be used with the β€œwebhook” integration in api.ai. When a conversation ends with an user, api.ai sends an action and parameters to the webhook. api.ai requires a public endpoint (HTTPS recommended), so your Home Assistant should be exposed to Internet. api.ai will return fallback answers if your server do not answer, or takes too long (more than 5 seconds). api.ai could be integrated with many popular messaging, virtual assistant and IoT platforms, eg.: Google Assistant (Google Actions), Skype, Messenger. See here the complete list. Using Api.ai will be easy to create conversations like: User: Which is the temperature at home? Bot: The temperature is 34 degrees User: Turn on the light Bot: In which room? User: In the kitchen Bot: Turning on kitchen light To use this integration you should define a conversation (intent) in Api.ai, configure Home Assistant with the speech to return and, optionally, the action to execute. Configuring your api.ai account - Login with your Google account. - Click on β€œCreate Agent” - Select name, language (if you are planning to use it with Google Actions check here supported languages) and time zone - Click β€œSave” - Go to β€œFullfiment” (in the left menu) - Enable Webhook and set your Home Assistant URL with the Api.ai endpoint. Eg.: - Click β€œSave” - Create a new intent - Below β€œUser says” write one phrase that you, the user, will tell Api.ai. Eg.: Which is the temperature at home? - In β€œAction” set some key (this will be the bind with Home Assistant configuration), eg.: GetTemperature - In β€œResponse” set β€œCannot connect to Home Assistant or it is taking to long” (fall back response) - At the end of the page, click on β€œFulfillment” and check β€œUse webhook” - Click β€œSave” - On the top right, where is written β€œTry it now…”, write, or say, the phrase you have previously defined and hit enter - Api.ai has send a request to your Home Assistant server Take a look to β€œIntegrations”, in the left menu, to configure third parties. Configuring Home Assistant When activated, the Alexa component will have Home Assistant’s native intent support handle the incoming intents. If you want to run actions based on intents, use the intent_script component. Examples Download this zip and load it in your Api.ai agent (Settings -> Export and Import) for examples intents to use with this configuration: # Example configuration.yaml entry apiai: intent_script: Temperature: speech: The temperature at home is {{ states('sensor.home_temp') }} degrees LocateIntent: speech: > {%- for state in states.device_tracker -%} {%- if state.name.lower() == User.lower() -%} {{ state.name }} is at {{ state.state }} {%- elif loop.last -%} I am sorry, I do not know where {{ User }} is. {%- endif -%} {%- else -%} Sorry, I don't have any trackers registered. {%- endfor -%} WhereAreWeIntent: speech: > {%- if is_state('device_tracker.adri', 'home') and is_state('device_tracker.bea', 'home') -%} You are both home, you silly {%- else -%} Bea is at {{ states("device_tracker.bea") }} and Adri is at {{ states("device_tracker.adri") }} {% endif %} TurnLights: speech: Turning {{ Room }} lights {{ OnOff }} action: - service: notify.pushbullet data_template: message: Someone asked via apiai to turn {{ Room }} lights {{ OnOff }} - service_template: > {%- if OnOff == "on" -%} switch.turn_on {%- else -%} switch.turn_off {%- endif -%} data_template: entity_id: "switch.light_{{ Room | replace(' ', '_') }}"
https://home-assistant.io/components/apiai/
CC-MAIN-2017-34
en
refinedweb
Hi, Can anyone help me in recording and asserting the data from a kafka producer application in SOAPUI Pro? I tried with groovy script and example code from the apache website but I was not successful yet. Thanky in advance. Regards Markus Solved! Go to Solution. Hi Markus, Thank you for your email. Do you use a sample code from here: ?What error do you get? Did my reply answer your question? Give Kudos or Accept it as a Solution to help others.↓↓↓↓↓ I get too many errors, therefore this seems for me be not straight forward. The example code is: Properties props = new("foo", "bar")); while (true) { ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100)); for (ConsumerRecord<String, String> record : records) System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value()); } Do you know what to add into the lace braces instead of "String, String"? KafkaConsumer<String, String> ConsumerRecords<String, String> Did you place the kafka jar file(s) to the <ReadyAPI_Install>\bin\ext folder? 1. I generated the kafka-producer-consumer-1.0-SNAPSHOT.jar file as it's described here: 2. Then, I place it to the bin\ext folder. 3. Also, I added the below line to the script: import org.apache.kafka.clients.consumer.*; After this, I don't get errors about the KafkaConsume class. But, I get the error about the Duration. You can get more info about this here:
https://community.smartbear.com/t5/SoapUI-Pro/Apache-Kafka-consumer-in-SOAPUI-pro/m-p/183098/highlight/true
CC-MAIN-2019-35
en
refinedweb
It seems like a good idea to us -- no muss, no fuss, the nodes just appear when the device gets loaded by insmod, and disappear when the driver gets unloaded. Something like this: if ((major = register_chrdev(0, "lala", &LalaFops)) <= 0) { printk("Unable to get major for lala\n") ; return(1) ; } do_unlink("/dev/lala0"); rc = do_mknod("/dev/lala0", S_IFCHR | 0666, MKDEV(major, 0) ); if (rc < 0) { printk("Unable to create device node for lala\n"); return (1); } To get this to work, we had to arrange to export do_mknod() and also do_unlink() with the enclosed kernel patch. So, I thought maybe I'd just pose the question here: Calling do_mknod() from init_module() - a good idea or not??? -Rick ================= patch ============================ Note: This device driver will make the device nodes automatically on module loading. However, for the time being you must hack your kernel to export the proper symbols to enable this magic. I hope to get the changes incorporated into the 2.0.30 kernel as well as the 2.1.X development kernels The following patch will make the needed changes. Tested on kernel 2.0.27 *** include/linux/fs.h.orig Wed Apr 2 12:31:11 1997 --- include/linux/fs.h Wed Apr 2 11:45:58 1997 *************** *** 617,622 **** --- 617,623 ---- extern int open_namei(const char * pathname, int flag, int mode, struct inode ** res_inode, struct inode * base); extern int do_mknod(const char * filename, int mode, dev_t dev); + extern int do_unlink(const char * filename); extern int do_pipe(int *); extern void iput(struct inode * inode); extern struct inode * __iget(struct super_block * sb,int nr,int crsmnt); *** kernel/ksyms.c.orig Wed Apr 2 12:17:56 1997 --- kernel/ksyms.c Wed Apr 2 11:44:36 1997 *************** *** 170,175 **** --- 170,177 ---- X(generic_file_read), X(generic_file_mmap), X(generic_readpage), + X(do_mknod), + X(do_unlink), /* device registration */ X(register_chrdev), *** fs/namei.c.orig Wed Apr 2 12:19:08 1997 --- fs/namei.c Wed Apr 2 11:45:13 1997 *************** *** 656,662 **** return error; } ! static int do_unlink(const char * name) { const char * basename; int namelen, error; --- 656,662 ---- return error; } ! int do_unlink(const char * name) { const char * basename; int namelen, error; -- Rick Richardson Sr. Principal Engr. Can you be sure I'm really me Digi Intl. Email: [email protected] and not my clone??? Has anybody 11001 Bren Rd. East Fax: (612) 912-4955 seen The Leader's nose??? Minnetonka, MN 55343 Tel: (612) 912-3212
http://lkml.iu.edu/hypermail/linux/kernel/9704.0/0148.html
CC-MAIN-2019-35
en
refinedweb
expo_flutter_adapter A Flutter adapter for Expo Universal Modules. It requires expo-core to be installed and linked. Getting Started Installation Add the plugin as a dependency in your Flutter project's pubspec.yaml file. dependencies: expo_flutter_adapter: ^0.1.0 To install it directly from our git repo, specify the dependency as shown below: dependencies: expo_flutter_adapter: git: url: git://github.com/expo/expo.git path: packages/expo-flutter-adapter Configuration In your Android app's MainActivity.java file: Import the adapter's java package by adding import io.expo.expoflutteradapter.ExpoFlutterAdapterPlugin;to your imports section. Add a call to ExpoFlutterAdapterPlugin's initializemethod after the GeneratedPluginRegistrant.registerWith(this);call by adding ExpoFlutterAdapterPlugin.initialize();after it. Usage If you're simply adding this to consume other previously developed Flutter Universal Module plugins, you won't have to read past this point. If you're developing a Universal Module Flutter plugin, you are probably interested in the ExpoModulesProxy for interfacing with native Universal Modules from Dart. You can import the module proxy by adding this line to the beginning of your dart file: import 'package:expo_flutter_adapter/expo_modules_proxy.dart'; This file contains two classes: ExpoModulesProxy and ExpoEvent. ExpoModulesProxy The Dart API of the ExpoModuleProxy is as follows: static Future<dynamic> callMethod(String moduleName, String methodName, [List<dynamic> arguments = const []]) ExpoModuleProxy.callMethod is a static method that your plugin can use to call a method exposed by the specified Universal Module. The parameter names should be pretty self-explanatory. static Future<dynamic> getConstant(String moduleName, String constantName) ExpoModuleProxy.getConstant is a static method that your plugin can use to retrieve a constant exposed by the specified Universal Module. static Stream<ExpoEvent> get events ExpoModuleProxy.events is a stream of all events being emitted by the Universal Module core. As a plugin developer, you can filter by event names to expose module-specific events to your consumers. See accelerometer.dart from the expo_sensors package for an example. ExpoEvent ExpoEvent is a data class streamed from ExpoModuleProxy.events that has the following properties: expoEvent.name (String): the name of the incoming event. expoEvent.body (Map<String, dynamic>): the payload of the incoming event. Pro Tip: See other Universal Module Flutter plugins in the packages directory of this repository for more examples of how this adapter is used.
https://pub.dev/documentation/expo_flutter_adapter/latest/
CC-MAIN-2019-35
en
refinedweb
A sample logger tool for flutter. support logLevel config and log format. Hope making log colorful and logLevel based callback in the future add dependence in pubspec.yaml dependencies: flutter: sdk: flutter ... colour_log: ^0.2.0 import "package:colour_log/colour_log.dart" ... var log = Logger(); // default log level debug log.d("debug"); log.i("info"); log.e("warn"); log.e("error"); log.logLevel = LogLevel.INFO log.d("debug"); // will not show log.i("info"); log.e("warn"); log.e("error"); Add this to your package's pubspec.yaml file: dependencies: colour_log: ^0.2.1 You can install packages from the command line: with Flutter: $ flutter pub get Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:colour_log/colour_log.dart'; We analyzed this package on Aug 16, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: Detected platforms: Flutter References Flutter, and has no conflicting libraries. Format lib/colour_log.dart. Run flutter format to format lib/colour_log colour_log.dart. Packages with multiple examples should provide example/README.md. For more information see the pub package layout conventions.
https://pub.dev/packages/colour_log
CC-MAIN-2019-35
en
refinedweb
Add placeholder to the Text widget import 'package:flutter/material.dart' hide Text; import 'package:text_placeholder/text_placeholder.dart'; //... final text1 = Text("magic", placeholder: "", style: TextStyle(), placeholderStyle: TextStyle());/main.dart import 'package:flutter/material.dart' hide Text; import 'package:text_placeholder/text_placeholder.dart'; //... final text1 = Text("magic", placeholder: "", style: TextStyle(), placeholderStyle: TextStyle()); Add this to your package's pubspec.yaml file: dependencies: text_placeholder: ^0.0.2 You can install packages from the command line: with Flutter: $ flutter pub get Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:text_placeholder/text_placeholder.dart'; We analyzed this package on Aug 16, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: Detected platforms: Flutter References Flutter, and has no conflicting libraries. Document public APIs. (-1 points) 3 out of 3 API elements have no dartdoc comment.Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API. Format lib/text_placeholder.dart. Run flutter format to format lib/text_placeholder.dart. Package is pre-v0.1 release. (-10 points) While nothing is inherently wrong with versions of 0.0.*, it might mean that the author is still experimenting with the general direction of the API.
https://pub.dev/packages/text_placeholder
CC-MAIN-2019-35
en
refinedweb
Feedback Getting Started Discussions Site operation discussions Recent Posts (new topic) Departments Courses Research Papers Design Docs Quotations Genealogical Diagrams Archives.) This deserves more attention than I can give it right now, but I am sure others here will want to comment. It is an interesting thought that one can have a "guru level" programmer forced to program in <insert out of date language here>. He thinks to himself: "God, I could do so much more and so much quicker if only I could use <insert advanced language/tool>", while his manager may be thinking: "I am glad you don't have what you want because if you did, your code would be hard to maintain by some of your coworkers". But I think that sort of thing is a shortcoming of the science of project management, and it's wrong to embed restrictions in a language to throttle gurus from reaching nirvana or to prevent non-gurus from shooting themselves in the foot. That's what code reviews/peer programming, etc. are for. And a manager/architect has the right to suggest (when necessary) that the guru cool his jets and keep it simple for the sake of maintainability by others. Outside a corporate environment, for things such as Python where the fact that I can read anybody else's python code improves (allegedly) Python's popularity, one might think it's the right idea to throttle people since you can't tell them not to use a certain feature. I am not a python guru, but I'd be surprised if the majority of current python libraries out there have sources that utilize the most bleeding-edge features of python as is. Most people don't absorb the new stuff that fast, and they don't use it too often. Some of the ones who use them misuse them, but some make very proper uses and the sources have educational value. Why throw the baby with the bathwater ? Part of me - most of me, really - bridles at being told you can't have that: you might use it, and then other people will have to learn to use it too, as if that wouldn't be of benefit to all concerned - a rising tide that lifts all boats. Did you ever meet a programmer who disliked learning new things who wasn't also a terrible programmer (and destined to remain so)? you can't have that: you might use it, and then other people will have to learn to use it too It seems to me that what those Python experts who resist the further elaboration of the language are afraid of is asking beginners to learn and reflect. They don't see new beginners as being the way they were as beginners, fascinated and intimidated in equal measure. They're so concerned about the intimidation that they forget about the fascination - the motive force that drove them up the learning curve, and will drive others up after them. Language design is about making design choices about what to leave in and what to leave out. If you don't make some tough decisions about what to leave out, you'll end up with a big language. What's arguably the biggest modern language? Perl. Is that a good thing? I know what my opinion on that subject is. That's an idealized version of the world. Charming, tempting even, but not the reality for the majority I suspect.. This is not to say that I think metaprogramming or tail call optimization are necessarily bad. In fact I think the former is so important that I am doing a lot of work on it ATM. The latter I am currently um'ing and ah'ing about as you can see from the linked article. But I still have a large degree of sympathy with those Python people who are wary of additions to the language. I wrote down some ideas along these lines as an article on downwardly scalable languages. I'm still mulling it over, but what you say here is very apt:. The really clever trick would be to make a language system that scales from our beginner just trying to get something done, all the way up to an expert. That's exactly what the PLT Scheme philosophy is all about. The module system allows one to program as if each module is written in its own language. For the book How to Design Programs () a series of languages all subsets of Scheme were made, that progressively made more and more of the language available. Restricting a language for a beginner makes a lot of sense, since error messages can be made more precise - and written in terms that the beginner can understand. But isn't this scalability also satisfied by C++ ( without ever being simple IMO )? Some users use just the C part, most of them use the object system and C's basic types and again much fewer use generic programming facilities. I learned to program as a hobbyist among hobbyists, part of the first generation of home computer users in the UK, so it still surprises me whenever I come across someone for whom a programming language is simply a means to an end. simply a means to an end Even in the business world, the people I know who turned to programming as a way of achieving their business goals did so partly because they were attracted to programming in some way. Perhaps this is more unusual than my personal experience would lead me to believe. I think it's in the nature of programming that people who like to reflect on their own practices are better at it (and also enjoy it more). Without that sort of metacognition, it's easy to get stuck in blind alleys - a discouraging experience in itself. Get a compsci degree, but since the bubble burst here in the US, it might not be true anymore. [tail call optimization] I am currently um'ing and ah'ing about Your fib example seems to be more about tupling than about tail recursion. Perhaps a better example would be to sum over a list by recursion, where you'd have to introduce an accumulator to make the function tail recursive. I can't remember too many people on LtU bitching about the lack of tail call optimisation in Python or Java compilers. Personally I don't think it's that important, because you can't have everything in one language. Another issue is adding it to the virtual machines. If it's cheap for you to add tail call capabilities to your VM, please consider doing so. It will allow implementators of FPLs to target your VM. I can't remember too many people on LtU bitching about the lack of tail call optimisation in Python or Java compilers. I don't bother to bitch about it usually because there's not much point, here. I think quite a few people on LtU understand how important it is. After all, one only has to read and understand the papers after which this site is named, like "Lambda: The Ultimate Imperative", to understand the importance of TCO. However, it's not just about functional languages. Recursion is more fundamental than that. Personally I don't think it's that important, because you can't have everything in one language. It is important. It's not about having "everything", only the minimum necessary. The Python world has made a big mistake here, led by their benevolent dictator. The mistake is in seeing iteration and recursion as an either/or choice: that if the language has good support for iteration, then it doesn't need to support recursion well. However, even if you take the position β€” as Guido apparently does β€” that iteration is a more natural way to think, that can only credibly be argued for some subset of all problems. Guido would presumably say it's a large subset. However, there are many problems for which recursion is demonstrably the most natural solution, in terms of the simplicity of the resulting programs. By refusing to properly support recursion, Python limits itself unnecessarily. It's analogous to driving a car that can't turn left β€” you're OK as long as you don't need to make too many left turns on your trip. Otherwise, the right-turning loops you have to make when you need to turn left get tedious after a while. Unfortunately, such a limitation blinds people to the existence of left turns. To them, a left turn is a right-turning loop, and their mental model of their problem domains and algorithms are structured in this way, avoiding left turns even when they're the simplest solution. Ian Bicking would presumably claim that this is a good thing, that it allows people to more easily become experts, because they only have to become experts on right turns. The only sense in which that's true is if you've been brought up, from birth as it were, in an environment which emphasizes right turns and demonizes lefts. This is self-fulfilling. Switching back to the real situation, the historical development of most programming languages emphasized iteration over recursion, which resulted in a situation in which many people are far more familiar with iteration than recursion. Worse, they can't even experiment with recursion to learn about it, because none of the languages they know support it well. As a result, a lot of FUD tends to surround recursion and its supporting infrastructure. One of the biggest pieces of FUD is how difficult or confusing it is. This is more about people protecting the areas of their own ignorance. You'll see people saying things like "Trust me, it would melt your brain", but that's a euphemism for "even I don't fully understand it, therefore it must be really hard". The truth is that most programmers find it really enlightening if they only bother to learn about it. Java and Python are essentially in the same category as the original FORTRAN in this respect. They haven't learned much from the past 50 years, ever since someone came up with the original call/return model of control flow. Saying "you can't safely do recursion in this language" is equivalent to saying "stay away from this language for all kinds of advanced applications". Extrapolating from what Ian is saying, such restrictions may be Python's goal. But that closes many doors for Python. I think there is a danger in conflating the concepts of recursion (an obviously good thing, in my book at least) and tail call optimization (a useful thing in some circumstances, but not something I would considera absolutely fundamental). I think you can have practical support for fairly advanced useages of recursion even without tail call optimization. I wonder whether the conflation of these concepts goes back to the time when most systems had a fixed stack size, and recursing past that point caused all sorts of problems? IMHO decent modern implementations should allow the stack to grow to available memory at which point tail call optimization seems, relatively speaking, less of a burning issue to me. Of course, I agree that recursion and TCO aren't the same thing. But TCO is nevertheless "absolutely fundamental" in that it's required in any case where recursion isn't bounded by some reasonably finite value. Simple example: a server program which uses a tail call in its main loop β€” do you really want your server to consume ever more unreclaimable memory on every request it serves? Of course, there are many less trivial examples. The common Python-style response to this is that you can program such an infinite loop using an iterative loop construct, so there's no problem. But this comes back to my basic point, which is that either you provide good support for recursion, or you don't. If you provide good support for recursion, then you should be able to use a tail call to implement an infinite loop. Python is taking the approach of not providing good support for recursion. The result is that the "experts" which Ian Bicking wants to help create will run around thinking that recursion is difficult, problematic, and to be avoided, just because Python doesn't allow left turns (mixing my metaphors). Saying that you can use recursion up to the limits of available memory β€” which means that applications which rely on certain types of recursion will be very memory-hungry, for no good reason β€” is not providing full support for recursion. I think you can have practical support for fairly advanced useages of recursion even without tail call optimization. Any examples of languages that qualify? Besides, amongst the "fairly advanced" uses of recursion that are excluded by your suggestion are some incredibly simple uses, like the server loop I mentioned earlier. You certainly can't even think about CPS parsers in languages which don't do TCO, unless you're willing to work around the problems by writing trampolines, which tends to negate the simplicity advantages. I think there is a danger in conflating the concepts of recursion (an obviously good thing, in my book at least) and tail call optimization (a useful thing in some circumstances, but not something I would consider absolutely fundamental). Recursion is fundamental, yes. But loops are also fundamental. Why? Because algorithms that accumulate bounded control context during computation are fundamental. However, for and while are only two special cases of such algorithms. They are easy to implement in a language with proper tail recursion and either higher-order functions, call-by-name semantics, or macros. But other kinds of iteration cannot be easily expressed with for or while. for while You're right that the issues of general recursion and tail-recursion are not identical, but they are very connected. Proper tail recursion is not just important for algorithms that are already tail-recursive. Any function that is not tail-recursive can be refactored into one that is. This is a transformation that functional programmers are quite comfortable with, a standard optimization technique in the programmer's toolkit. But this is impossible to do without painful contortions in a language without proper tail recursion. IMHO, decent modern implementations should allow the stack to grow to available memory at which point tall call optimization seems, relatively speaking, less of a burning issue to me. Neither Python (cf. the misnamed "maximum recursion depth exceeded" error) nor Java (StackOverflowError) nor C# (StackOverflowException) is currently decent according to your prescription. But I agree, that would be great - and is great in some very nice languages! However, without proper tail recursion there would still be many algorithms that would chew up all available memory for no good reason. It is important. It's not about having "everything", only the minimum necessary. The Python world has made a big mistake here, led by their benevolent dictator. We have (a) a community of developers, many of which are skilled, (b) are using the language intensively for a full spectrum of tasks, (c) many of whom have extensive experience with Lisp, (d) people do frequently write recursive functions in Python, and (e) there are several paths to suggest changes to the language, both casual and formal. But really people don't care. When I say don't care, I mean: there are no champions who speak from a perspective that is meaningful to Python. Googling, it came up last July, and pretty much no one cared, even the author, and there were several practical reasons why it wouldn't happen. I think language theorists and mathematicians care about TCO, and that's about it. Really, I just brought it up as an example, I don't really care that much about it either. Well, this was Guido's response:. Another reason is that I don't think it's a good idea to try to encourage a Scheme-ish "solve everything with recursion" programming style in Python. This is an example of the either/or attitude Guido has to recursion (this is not the only example, but I'll have to dig for others). He's arguing against supporting a useful language feature because of the risk that it will encourage attempting to "solve everything" with that feature. IOW, he's restricting recursion because he's afraid it might be too useful?! Seriously, what's wrong with supporting a feature because it might be appropriate in certain circumstances? I agree that this reasoning is silly. Python has done something similar before: it did not use lexical scoping. There were only three scopes: current function, current module's globals, and builtins. I recall one of the reasons, other than interpreter simplicity, was that nested functions should be discouraged; "flat is better than nested". When people used nested functions nevertheless, with a hack for making a closure (adding default parameters initialized to outer variables of the same names), Python finally gave up and changed its scoping rules. It's worth reiterating that Python still does not support lexical scoping. What Python now does is that it determines lexically a list of dictionary objects that it dynamically consults to figure out variable bindings. Since the dictionaries are mutable objects, it is not in general possible to determine variable scope statically. There should be a name for this scoping discipline. I used to call it whacky scoping, but perhaps a more sober name would catch on. FWIW, newlisp uses this notion of scope as well. You are completely wrong. Python does support lexical scoping. First of all it is wrong that variables are generally stored in dictionarys. This is true for member variables ( of general objects/modules ) but is wrong for functions local variables that are stored in arrays. If a LOAD_FAST opcode is executed the current stack frame contains the index of the local variable for look-up in the locals array. The best way to see how scoping works is to disassemble Python code. from dis import dis x = 0 def f(): print x >>> dis(f) 2 0 LOAD_GLOBAL 0 (x) 3 PRINT_ITEM 4 PRINT_NEWLINE 5 LOAD_CONST 0 (None) 8 RETURN_VALUE Once a local variable x is defined in f the global x will not be accessed unless it is declared in f's code block using the "global" keyword. def f(): print x # raises an error x = 0 >>> dis(f) 2 0 LOAD_FAST 0 (x) 3 PRINT_ITEM 4 PRINT_NEWLINE 3 5 LOAD_CONST 1 (0) 8 STORE_FAST 0 (x) 11 LOAD_CONST 0 (None) 14 RETURN_VALUE We see immediately that the LOAD_FAST opcode must fail because STORE_FAST is executed later. This pattern does not change if we nest f using a closure g: def f(): def g(): print x return g g = f() >>> dis(g) 3 0 LOAD_GLOBAL 0 (x) 3 PRINT_ITEM 4 PRINT_NEWLINE 5 LOAD_CONST 0 (None) 8 RETURN_VALUE The compiler is by no means confused about the scope of x and does not guess that it persists in the dict of f ( which is empty! ). def f(): x = 0 def g(): print x return g g = f() >>> dis(g) 4 0 LOAD_DEREF 0 (x) 3 PRINT_ITEM 4 PRINT_NEWLINE 5 LOAD_CONST 0 (None) 8 RETURN_VALUE Here g uses a LOAD_DEREF opcode that accesses x from a cell object that stores x for reference by multiple scopes. In this case f does not execute LOAD_FAST/STORE_FAST but LOAD_CLOSURE/STORE_DEREF opcodes. If we define x also in g it will be accessed locally using LOAD_FAST: def f(): x = 0 def g(): x = 0 print x return g >>> g = f() >>> dis.g() 4 0 LOAD_CONST 1 (0) 3 STORE_FAST 0 (x) 5 6 LOAD_FAST 0 (x) 9 PRINT_ITEM 10 PRINT_NEWLINE 11 LOAD_CONST 0 (None) 14 RETURN_VALUE In this case no cell is created for x and f uses LOAD_FAST/STORE_FAST again. One final remark. It is possible to reflect about the locals f has created. They will be represented as a dictionary. But this does not mean that changing the dictionary also changes the locals. From the code analysis given above it should be clear that this is impossible. The bytecode is immutable. def f(): x = 0 v = vars() print vars v["x"] = 7 print vars() >>> f() {'x': 0} {'x': 0, 'v': {...}} Python functions have static scope You are completely wrong. Python does support lexical scoping. "Completely wrong" is a bit harsh. Lexical scoping were introduced in Python in April 2001 and it wasn't until version 2.2 that it were made the default scoping rule. An related (at least in my book) issue is that Python to my knowledge doesn't support closing over writeable values. Yes, but Charles Stewart insisted in "reiterating" a wrong assertion and it is wrong for quite a couple of years. An related (at least in my book) issue is that Python to my knowledge doesn't support closing over writeable values. Let's say it in Python slang: you can't rebind a name of an outer scope. The only exception are globals that can be accessed using the global keyword. There are known workarounds ( usually a value is put into a list and one has to read/write list elements ). This might not be satisfactory at least not for a Schemer with class-Angst ( creating classes in Python is as cheap as creating functions ). I'm not sure if a future version of Python will introduce a var keyword or something similar that enables a name of an outer scope being rebound. In this case I would expect that the whacky scoping semantics Charles Steward mentioned would be valid for var-tagged names. Another possibility would be redefining the "global" keyword semantics without introducing a new opcode. I missed the above comments first time around. If I am not mistakened, the then current python language standard permits, but does not mandate lexical scoping, but explicitly allows for scopes to be implemented by mutable dictionaries, though deprecating this as something that might not be allowed in future specifications. That some implementations have lexical scoping is not really the point (though my post did mistakenly talk about what python implementations do, rather than may do): you cannot assume lexical scoping when writing code, hence it is hard to write code that can't be defeated by implementations that conform to the standard. What I wrote was based on the language standard, and not on tests of particular implementations. I might still be wrong, and I'd be grateful if the places in the language standard that show that I am wrong could be pointed to. I did check my post carefully against the standard when writing the post that offended Kay, but I will admit to finding the python standard harder going that certain other language standards.... Charles, I'm not sure what you are referring to? I refer to PEP 227 which defines scoping rules for Python. This document stems from Nov. 2000 and the CPython implementation is quite around for a while. If you have contradicting sources I would be interested in them. The juridical problem of what has to count as the true "standard Python" is solved pragmatically by referring to CPython ( PEPs, docs and reference implementation ) as the de-facto standard - at least until know. I have no indication that this is going to change. Dialects like Stackless-Python or other implementations like JPython and IronPython that exploit platform specific strengths did not yet generate conflicting claims. I agree that it is sometimes hard to distinguish between Python as a language and the specification of its reference implementation. The threading model is the most visible entity of deviation. I guess Python is not different from most open source projects in this respect. I'm not sure this is a deficiancy. I would say it depends always on the phase of the projects life-cycle, the public requirements and the social conflicts among competing teams. BTW if my comment sounded overly harsh than less because of a wrong technical detail but the "reiterated insistence" on it. I hope I contributed to more clarity nevertheless. I'm not sure what text I was looking at, but the loophole I thought I saw is excluded in both the current language reference (for 2.4.2) and the then current version (for 2.4.1). Scoping rules are given in sections 4.1 and 4.1.1, which describes the scope as a tower of frames, enumerates the language primitives that are allowed to add variables to scopes (all of which behave lexically) and forbids any feature to remove variables to any scope but the current frame. It's a fairly roundabout way of specifying lexical scope, but I think it's watertight. I'm guessing I must have been looking at an older standard, and thought it was current. When I have a bit more time, I'll try to track it down. I don't agree with your comments about pragmatic language definition: it's fair enough to use an implementation to test-drive proposed language features, but it's important to be able to write code in a portable way, and, pragmatically-speaking, a language standard is an importanty guarantor of predictability when doing this. BTW, didn't the stackless fork become CPython? Guido has made many pro-iterative and anti-recursive remarks, at many different times. I'm sure that's a strong reason that it's not a "topic of much debate" in the Python community. As for whether people care, I come back to my point. If all you've learned about is left turns, you're not going to care about right turns until you've gone through the process of learning what they are, how they work, and why you should care. When your benevolent dictator is telling you to drop it, that's a strong disincentive against learning more about it. So much for helping to create "experts". I think language theorists and mathematicians care about TCO, and that's about it. All users of functional languages care about it, or at least, depend on it. That's a group that goes well beyond language theorists and mathematicians. Erlang users are one such group, and most Erlang users are neither language theorists, nor mathematicians (afaik!) However, as I've said, there's nothing about recursion that makes it unique to functional languages. All that's achieved by refusing to implement TCO in an imperative language is that the power of recursion is severely limited, unnecessarily. This is true, but I think he really meant that nobody much in the Python community cares about TCO, despite many of them knowing functional languages. However, as I've said, there's nothing about recursion that makes it unique to functional languages. All that's achieved by refusing to implement TCO in an imperative language is that the power of recursion is severely limited, unnecessarily. Never so simple! For one thing, if you added TCO then you've added another way of doing the incredibly basic task of iteration. That's right against the Zen of Python: There should be one-- and preferably only one --obvious way to do it. Properties like that are extremely important in my opinion (e.g. they're what make Erlang better than Lisp) and only people who really use the language can appreciate them. Not being a Python-hacker myself I'd give them the benefit of the doubt. I'd have a similar objection to adding loops to Erlang. That's actually something I've never quite understood. What is the importance of having only one way to do things? I'm aware that people feel Common Lisp is too much of a kitchen sink, but the problems with this are 1) difficulty in producing and maintaining an implementation of such a large language, and 2) the steep learning curve. I don't think (1) is an issue here, but I don't see (2) as a valid argument against having multiple ways of doing things. It's just an argument for building a language or its development environment in such a way that it can be learned in successive iterations. This is why I like DrScheme's language levels so much. But if I believed a language should be as simple as possible, then e.g. I wouldn't want a module system since I can implement a module as a higher-order procedure that is parameterized over all its imports. Are there other reasons for wanting there to be only one way to do things? It's a misguided language aesthetic. Another way to say "there is only one way to do it" is "there are no correctness-preserving program transformations." In Scheme, one can write a program that uses higher-order functions. Then, one can defunctionalize it to use first-order data structures (like records) instead. That's another way to do it. Solution? Remove higher-order functions from the language. Then, one can CPS (with defunctionalized continuations, of course) to explicitly expose control. That's another way to do it. Solution? Remove proper tail-call handling to prevent CPS. Etc, etc, etc. Without more than one way to do it, it is not possible to write a simple program that works and then gradually transform it into something more efficient. Instead, every program is doomed to be not simple, or not efficient, or both, right from the start. Or else you think it is a language designer's job rather than the programmer's to come up with a simple and efficient solution to all the problems that are solvable in the language. HQ9+ is a language like that, but otherwise I don't think it will ever work. It is certainly at odds with having a small language. It's a social/community/managerial thing - it's supposed to result in everyone doing things the same way, so that in theory, when dealing with other people's code, you know what idioms to expect in a given situation, and there are fewer surprises and less variation between code written by different people. But soft criteria like this are open to interpretation, and the people who want to interpret them the most tend to be those with the weakest technical argument. (That's just a general observation, not directed specifically at Luke's mention of TOOWTDI.) Perhaps I should just go with the flow and point out that the principle of least surprise dictates that recursion should be unrestricted. Check, and mate. ;) Who expects this particular optimzation? The basic operational model for function calls allocates stack space (or other space). Understanding stack overflow comes before understanding tail call optimization. The principle of least surprise would better argue that tail-recursive functions may overflow the stack. TCO is important to me and I agree with the arguments in its favor. However, "unrestricted recursion" amounts to an appealing, but inaccurate re-labeling of TCO. ...would be when a minor change to a recursive function causes a stack overflow (i.e. a change that suddenly makes the call non-tail recursive). Which is why I think that the tail-call should be explicit, like Luke indicated. TCO is important to me and I agree with the arguments in its favor. However, "unrestricted recursion" amounts to an appealing, but inaccurate re-labeling of TCO. With Clinger-style "proper tail recursion", recursion is as unrestricted as it can possibly be while still having a correct program. I think it's reasonable to call that "unrestricted recursion", used in context, although I'm open to alternative descriptions that are a bit more snappy than e.g. "recursion which isn't unnecessarily restricted". Who expects this particular optimization? The basic operational model for function calls allocates stack space (or other space). Understanding stack overflow comes before understanding tail call optimization. If you teach people that recursion must always use stack space, then TCO will be a surprise, certainly, but that's only because you started out by misleading them. If that was intended as a convenient pedagogical "lie" which will later be corrected, fine, but then it's your responsibility to correct it. Most people learn programming by doing it, especially in informal contexts. Anyone who's ever been surprised by hitting a recursion limit is demonstrating an expectation that recursion should work better than it actually does. It's not a question of expecting a particular optimization, it's expecting a syntactically & semantically legal program to work. Proper TCO satisfies this expectation to the greatest degree possible. Many of those instance of hitting recursion limits would not have happened in the presence of TCO. The remaining instances of hitting the limits are learning experiences, in which the programmer needs to learn that there are limits on the ability to make recursive calls that aren't tail calls. It's something that's quite easy to explain simply based on local inspection of the affected code. It only gets elevated into a bogeyman by people who have ingrained experience with a more constrained approach. Who expects this particular optimzation? The basic operational model for function calls allocates stack space (or other space). Understanding stack overflow comes before understanding tail call optimization. Here's a reduction sequence for a tail-recursive factorial (using eager evaluation despite Haskell syntax), fac n = fac' n 1 where fac' 0 acc = acc fac' n acc = fac (n-1) (n*acc) fac 4 -> fac' 4 1 -> fac' 3 4 -> fac' 2 12 -> fac' 1 24 -> fac' 0 24 -> 24 Where is this stack that is overflowing? I guess I'll just have to add one in just so it can overflow. Understanding functions comes before understanding stack overflow. Out of curiosity, I wrote the above in Python, just to see what it would do: #!/usr/local/bin/python def factorial_tco(n, acc): if (n == 0): return(acc) else: return(factorial_tco((n-1), (n*acc))) def factorial(n): return(factorial_tco(n, 1)) print factorial(998) No error on 997, blows up on 998 with: File "D:\Python\hello.py", line 9, in ? print factorial(998) File "D:\Python\hello.py", line 8, in factorial return(factorial_tco(n, 1)) File "D:\Python\hello.py", line 6, in factorial_tco return(factorial_tco((n-1), (n*acc))) **** previous two lines repeated 996 times **** File "D:\Python\hello.py", line 6, in factorial_tco return(factorial_tco((n-1), ) RuntimeError: maximum recursion depth exceeded I guess the first thing I don't get, even before we talk about the usefulness of tail calls from the end user perspective, is why a self tail call is not optimized. Saves stack space and cuts significantly down on the number of instructions by effectively making it into a loop. End users wouldn't care, since they if they understand an error printed once, they probably won't be any more elightened seeing it an additrional 998 times. They just want to get at the one meaningful message (which is where the error occured). IMHO, self tail call optimization is a pure optimization and doesn't effect the ability to decode a stack trace. Now the question back is whether mutual recursive routines should be TCO'd. Optimizing locally seems to be a no-brainer for this untrained mind. No different than all those fancy loop unrolls, etc... Unfortunately this appears to be the case if, and only if, said function contains one self tail call. AFAICT even functions with more than one self tail call can not be guaranteed to recover the necessary information for a stack trace in constant space. Optimizing locally seems to be a no-brainer for this untrained mind. It seems like you could even do it naively at the bytecode level as a peephole optimization by checking for CALL-RET instruction sequences. (You probably wouldn't want to do this in the bytecode interpreter itself as it would penalize non-tail calls.) Unfortunately I can't give a reason crisp enough to fit in this now rather narrow margin. I'm picturing my favourite feature-poor language and some of the very reasonable changes that one could ruin it with. Thought experiment: if you could take control of Guido and make him change Python in any ways you wanted, would the language become more popular or less popular? This is true, but I think he really meant that nobody much in the Python community cares about TCO, despite many of them knowing functional languages. I was responding to Ian's comment that only "language theorists and mathematicians care about TCO". That's not true even amongst Python users. The subject of limits on Python's recursion comes up regularly. There's even a stock answer, which is that recursive algorithms which hit the recursion limit should be rewritten iteratively. Many of the people who run into this restriction may not realize that in most cases, TCO would eliminate the restriction. This means that many Python users do "care about TCO", whether they realize it or not, in that they care about the improved recursion capability it can provide them with. Never so simple! For one thing, if you added TCO then you've added another way of doing the incredibly basic task of iteration. That's right against the Zen of Python: There should be one-- and preferably only one --obvious way to do it. I disagree that TCO violates that precept. If a language has 'for' loops, for example, then that's the one obvious way to do that kind of iteration. The same goes for all other specific iteration constructs. The presence of TCO wouldn't make Python's iteration constructs any less obvious as the designated way to achieve iteration, and documentation and community standards would reinforce that. The point of adding TCO is not to provide an alternative way to perform loops, but to provide a way to do some things that Python currently has zero obvious ways to do β€” or more accurately, where the obvious way is unnecessarily complex, such as requiring manual creation and management of a stack, rather than being able to make use of the language's stack. Not being a Python-hacker myself I'd give them the benefit of the doubt. The problem is that restricting function or method calls like this in a general-purpose language would require a pretty big doubt to justify it. No-one has raised anything that big. The only real technical issue is that of stack traces, which is a relatively minor issue, and one which many other languages have been dealing with successfully for decades. I think the real reasons here are historical and psychological. Many language implementors didn't used to know about TCO, so languages didn't support unlimited recursion. As a result, recursion is ingrained in many people's brains as something that's of restricted value, to be avoided in most cases. This even leads people to believe that iteration is somehow a "natural" way for humans to think, and that recursion isn't. Guido is one of those who has claimed that. However, it doesn't take much examination to find that there are at least some problems for which recursive solutions are more natural, in that the recursive solutions are easier to write and easier to understand. In fact, recursion is no more an unnatural way for humans to think than functions and function calls are. Once you introduce functions, you have to train people not to use recursion, as the Pythonistas who run into the recursion limits demonstrate. What's the rationale for restricting recursion? It used to simply be that language designers didn't realize they didn't have to. That's no longer an excuse. Can anyone come up with a real reason?... Python was my first love, and it's still how I make 99% of my income, but I blow the max recursion depth regularly. I do agree that there should be only one way to do iteration, but nowadays I think it should be recursion and no loops. Once I learned Haskell, I never wanted to go back to Python, Python is too restrictive. I also feel that Python has become cluttered with too much syntax. What do you think of the decorator syntax in Python? Would you recognize it if you saw it? I learned Python 1.4 in eight hours, and I could immediately read all the Python sources I found. These days I can't teach experienced programmers all of Python in eight hours. I think that will decrease the flow of new users into the Python community, and new users are the lifeblood of any community. -- Shae Erisson - ScannedInAvian.com Yep, a statically typed, strictly purely functional language with an even smaller community and libraries and Python is more restrictive... yeah, right! give me a break! I showed some of the best features of Python to a friend of mine ( dynamic typing, list comprehensions, OO or procedural style, easy documentation and introspection ) and he was able to grasp the concepts with great enthusiasm. It was about our lunch time. Granted, he's a programmer already and well aware of the OO paradigm, so it was not so much of a big deal. Python _is_ a small language and ( except for some max recursion level constant ) not restrictive at all. Haskell is effectively compile-time dynamically typed because of type inferencing. I declare the same amount of type information in both my Python and my Haskell programs. This is not static typing like in Java or C. Purely functional means that I can separate side-effects, they're only inside an IO type. Debugging J2EE or Zope would be a lot easier if it were obvious which code has side-effects, and if state were explicitly passed. When I use Python, some of the things I miss are multi-line lambdas, partial application, tail call optimization, pattern matching, and algebraic data types. I am still convinced that Python is more restrictive. Have you used Haskell and Python for similar tasks? How would you compare and contrast the languages? --Shae Erisson - ScannedInAvian.com I doubt that this particular line of argument will go anywhere very illuminating (it may end up sinking ignominiously into the Turing tar-pit). I personally find Haskell more expressive than Python, meaning that I can capture the meaning of a program very much more succinctly in Haskell. Haskell seems to me to fulfil Paul Graham's desideratum about being able to define a program one layer of abstraction at a time. Python's emphasis on OOP as the one-way-to-do-it introduces a level of latency (or friction) into this process that you don't even notice until you switch to something that allows you to do-it-differently. Be that as it may, restriction and expression are not opposed. Syntax and grammar restrict and enable linguistic expression. In Haskell, you are able to take a small set of rules and construct a larger set, qualifying one restriction with another, so that the meaning of a program becomes clearer in some ways as you go along. That clarity entails ruling out some possibilities and mandating others. The purported advantage of dynamic languages is that less is ruled out in advance: saying that something can happen does not entail saying that only that thing can happen. By being less explicit about the criteria for failure and success, you expose yourself to unforeseen varieties of possible failure but also facilitate subsequent redefinitions of success. Sometimes it doesn't pay to be too clear about what you want - the more you express now, the more you may have to take back later. dynamic There's not one such a thing. Dynamic, or latent, typing has less to do with type declarations and more to do with the fact that variables -- or expressions -- are not bound to a statically specified type. That way, a camel is always a camel and won't suddenly become a dog. But sometimes it's desirable such flexibility, which Haskell won't provide unless you change the whole expression -- the program -- and go. Granted, my experience with Haskell ain't all that great. But what i've seen was enough. I won't ever exchange Scheme or Python flexibility and syntax expressiveness for static program correctness checking or some crazy way to manage IO side-effects. Off-topic: i just had this weird bug while posting which alerted me that there was some "suspicious data" in the text. Well, i tried a lot of things to manage to get the text posted. In the end, it turned out that it wasn't accepting my parentheses, so i took them off and replaced them by -- weird... Maybe the software likes static typing and considers any opinion for dynamic typing to be suspicious. Actually, Anton mentioned before that the Drupal software has a check for malicious javascript code. And it appears to be a bit on the false positive side. A stroke in the right hemisphere of the cerebral cortex can cause a person to lose the very concept of leftness: they cannot turn to the left nor turn other objects to the left, because the whole area to the left of their body's center line simply does not exist. And some of them do learn to compensate by turning further to the right, or to eat a whole plate of food by eating the right half, rotating the plate 90 degrees to the right, eating the right half of what remains, and so on. You mention two problems of tail call optimization (TCO), There are two chief costs associated with tail calls and their optimization. The first is fairly obvious, and is evident in the forced rewriting of the Fibonacci function: many functions have their most natural form without tail calls. This is true, but how you consider it a problem of TCO is beyond me. If you don't have TCO you still have to rewrite the function using iteration and, given higher order functions, it can't be anymore natural iteratively than with TCO (modulo a "constant factor") because I can write a HOF implementing any iteration control structure given TCO. However, you add What's more, as can be seen from the Fibonacci example, the resulting function is frequently hard to understand because it often requires state to be passed around in parameters. Obviously FP people would claim that that makes the program easier to understand, but regardless, TCO doesn't force this upon you, you can certainly use lexical nesting (assuming the language supports it) and mutate the local variables (or global/member), e.g. in an impure Haskell, fib n = fib' n where (current,next) = (0,1) fib' 0 = current fib' i = (current,next) := (next,current+next); fib' (i-1) I could have had i be mutated too, but that seems less clear as it's the control variable, at any rate it would be a minor change. In fact the similarity of this to the pure tail-recursive version or to a for-loop is obvious, though it would be a bit less obvious if I hadn't posited a parallel assignment operator (which many languages lack). Of course, for the pure version it's a non-issue; you get "parallel assignment" for free. The other problem (missing stack trace information) makes a lot more sense, but isn't insurmountable. Many people have mentioned that you can keep a finite amount of otherwise unnecessary stack frames around which should be all that's necessary in most debugging scenarios. When that is not enough the implementation could simply keep track of all call frames utilizing filtering, abbrieviating, and/or logging techniques for example. Finally I want to mention that this isn't just a problem when using a functional style. If you endeavor to replace case-analysis with OO dynamic dispatch, you will again run into problems due to lack of TCO (beyond the ones that come up anyways). I did try to be very careful with my use of language, although I think I probably wasn't clear enough. I was trying to distinguish between tail calls and tail call optimization. The Fibonacci example is, to me, an example of why trying to mash everything into a tail-calling form is not a good thing - this is entirely independent of whether tail-call optimization is in place (although admittedly people probably wouldn't bother doing the mashing if it weren't). The stack trace issue is an example of why tail call optimization itself can cause problems. My problem is that I want to be able to produce accurate stack traces at all times but, as I explained in my blog, I don't see how to do that in finite space in the presence of functions with multiple tail calls. And if I can't make a guarantee about finite space and tail calls in all situations, I would rather not add tail call optimization in since it'll confuse me even if it confuses noone else. I genuinely would be interested to see if there's a way of allowing tail call optimization whilst preserving this requirement. It's via this article that I discovered that other people - the Squeak fellows, at least - have bought up the stack trace issue at some point so maybe someone has an answer. My problem is that I want to be able to produce accurate stack traces at all times One problem here is that when the function call mechanism is used to achieve iteration, there are cases where requiring a stack trace is "unfair". For example, you don't expect a stack trace which includes each iteration of a for loop β€” why do you require it ("at all times") when such a loop is implemented using a tail call? I'll do you a deal: you tell me how I can teach a programming language the difference between fair and unfair and I promise to add tail call optimization into my VM ;) How about allowing a compiler to specify when stack traces are required, perhaps by providing two different tail call operators? Then when an iterative loop is being generated via a tail call, it can just use the "no stack trace required" operator. This would also allow flags applied at the source level, by the user, to control the strack trace behavior of the generated bytecode. Settable flags might include "no stack trace on iterative constructs", "no stack traces on tail calls", and "no stack traces on self-tail calls", etc. A language might even choose to allow stack traces to be requested or prevented in the source code. At the VM level, all you have to do to support all these options is provide some alternatives. (define-syntax non-tail (syntax-rules () [(non-tail e) (let ([result e]) result)])) (define (breakpoint? n) (= n 7)) (define (fact-not-iter n a) (cond [(breakpoint? n) (error 'breakpoint)] [(zero? n) a] [else (non-tail (fact-not-iter (- n 1) (* n a)))])) (let ([result e]) result)) <=> ((lambda (result) result) e) <=> e E (non-tail E) Burn TCO! Burn eta-conversion! Down with Lambda, up with von Neumann! etc, etc! Interesting question. Just musing... <hacker> Yeah, it is true that a compiler may perform let-conversion. Thanks to the halting problem there are always ways to get around this, but that's not a particularly pleasant answer. </hacker> <semanticist> Now, equivalence relations are always defined "up to" some differences you choose to ignore, and beta-equivalence (as in your example) does not observe stack efficiency. So if you use beta equivalence to perform a compiler optimization you are implicitly saying, "I don't care about tail position, my equivalence ignores that property." Put another way, the equivalences demonstrate the correctness of the code, not its efficiency. But this isn't the best response, either; to quote Will Clinger, "asymptotic decreases in runtime efficiency are not the sort of property that endears you to a programming community." The reason is that the equivalences mod out issues of efficiency for the sake of discussing correctness, but if the code were truly equivalent, as in, not observably different, there would be no reason for the compiler to perform the transformation in the first place. So a transformation involves two relations: one is the equivalence relation that demonstrates the correctness, and the other is the "improvement" relation (I'm not sure what the real term for this is) that demonstrates the resulting code is more desirable in some way. We're pushing the limits of my knowledge of compilers, but I imagine this just comes down to which kind of property you treat "tail-positionness" to be. If it's a correctness criterion, then no transformation you perform may change an expression's tail-positionness. If it's an improvement criterion, you might imagine that turning a non-tail-position expression into one in tail-position would be an improvement, but not vice versa. In this case, my code would definitely be incorrect, because let-conversion is valid. </semanticist> I should note that none of this really argues anything about whether proper tail recursion is a desirable language feature; it's more about whether my little hack was a correct way to defeat TCO. In the end it's not a big deal; you could either off-load the responsibility to the compiler and make non-tail a primitive, as Anton suggested, or you could be a hacker and just figure out what implementation defeats the compiler's optimizations. non-tail Burn TCO! Burn eta-conversion! Down with Lambda, up with von Neumann! etc, etc! :P (tailcall E) As for TCO, I don't actually find the abbreviated backtraces a serious problem in practice. Interesting though that most discussions don't mention the problem at all? Sometimes it's hard to keep track of which practicalities that have been abstracted away when talking theory. Actually what I think would be a nice primitive is a (tailcall E) which is rejected unless E is a function call in tail-position. This could be a helpful utility for learning to write tail-recursive programs. Ironically, many languages that don't support TCO have a very similar construct that marks tail calls. Unlike your proposed primitive, it doesn't reject non-tail calls because it -makes- E a tail call. This primitive is usually called 'return'. Operationally, the former will allocate new invocation record, then calculate f(), which will probably deallocate this record, then deallocate the previous record. The latter will calculate f(), which will probably deallocate this record. So the first behaves as +frame f -frame, while the second as f, which has the same final result but a very different memory consumption history. I seem to remember tail-calls being discussed in the Tcl community, and a proposal of the form return -call cmd args.. or similar. This seems pretty simple to understand (especially if you've already got your head round uplevel) and means you don't have to do any analysis on the code to figure out if something is a tail call. return -call cmd args.. uplevel Edit: I should point out that the above construct destroys the current stack frame and then calls the command given. So the first behaves as +frame f -frame, while the second as f, which has the same final result but a very different memory consumption history. I said, "many languages that don't support TCO", so though I wasn't as clear as I could have been, it should be obvious that I wasn't saying that 'return' optimizes the tail call. Almost all calls that are tail calls are clearly marked in such languages. The reason I say it's ironic, is that one "problem" with TCO sometimes raised by people using the above mentioned languages and implicit in some arguments in this thread (albeit relatively minorly), is that it is "difficult" (for beginners at least) to distinguish tail calls from non-tail calls. In practice, it's almost always trivial and almost impossible to err on the dangerous side. Also, thinking about it now, 'return' almost seems to do a better job than Luke Gorrie's tailcall operator in ensuring that an expression is a tail call and facilitating learning. In many cases it would simply be syntactically illegal to return in a non-tail position, presumably most compilers will give an unreachable code warning in the remaining cases, and I find it hard to believe that even beginners with a basic understanding of 'return' would find such code sensible or even write it in the first place. The comparison of 'return' to an explicit 'tailcall' operator is illuminating. Note however that 'return' fails to serve the role of 'tailcall' in avoiding surprise. Modify tailcall f(n+1) to tailcall f(n)+1 and the compiler reports an error. No error is reported if return f(n+1) is modified to return f(n)+1 tailcall f(n+1) tailcall f(n)+1 return f(n+1) return f(n)+1 Modify tailcall f(n+1) to tailcall f(n)+1 and the compiler reports an error. tailcall f(n+1) tailcall f(n)+1 No, it doesn't. That's a tail call to operator+. operator+ Now modify tailcall f(n+1) to tailcall g(f(n+1)). Should an error be reported? tailcall f(n+1) tailcall g(f(n+1)) Another way to do it, rather than requiring a programmer to annotate every call that they intend to be in tail position and erroring out on those that aren't really in tail position, is to just perform tail call optimization on the calls that are actually in tail position and (obviously) not on the others. Scott Turner said: tailcall f(n)+1 ... the compiler reports an error. Kevin Millikin said: No, it doesn't. That's a tail call to operator+ But as any PL geek should know, it all depends on the rules of precedence in the language we are using (which hasn't been specified here that I can see). So both answers are wrong (or right if you prefer). Shame on you both! ;-) Now modify tailcall f(n+1) to tailcall g(f(n+1)). Should an error be reported? Donning my Captain Obvious cap, the answer is no, because the syntactic form seems to indicate that a tail call to g is being requested/asserted, and that's valid in this case. However, an error should be reported in the following case: g(tailcall f(n+1)) ...because of course the call to f is not a tail call. But the tailcall construct is never going to be valid when it's embedded within an expression, as in the latter example. The only time it will be valid is when it's used at the beginning of a "statement", assuming we're talking about a language that has statements. (More generally, the tailcall construct is only valid when used in tail position, but that's perhaps less illuminating β€” then again, I'm still wearing that cap.) tailcall This all illustrates that return is indeed a reasonable implementation of a tailcall construct. If we write return g(f(n+1)), that's cool; but if we write g(return f(n+1)), most languages with a return statement will generate an error. This error might read something like "parse error before 'return'" or "illegal start of expression", but what it's really saying is that return is only valid in a potential tail position. I say "potential", because as Derek originally pointed out, return will (or at least could) force a tail call when it's used legally. return return g(f(n+1)) g(return f(n+1)) Sorry for the diversion yesterday titled "not on target". I had not read Luke Gorrie's suggestion for (tailcall E) carefully. Also my background is in conventional compilers for C++ etc. so I think first of tail call optimization as a compile-time transformation, rather than think of "proper tail call" as in Scheme. I incorrectly assumed that the proposal was intended to prevent a programmer from inadvertently transforming an optimized self tail call into a non-optimized call. For example, f(x) = ... return f(x+1) f(x) = ... return 1+f(x) f(x) = ... return tco(f(1+x)) tco(E) f(x) = ... return tco(1+f(x)) I don't know how or whether this could be implemented in a language with proper tail call, with a similarly helpful effect. In Pre-Scheme: The programmer can declare that individual calls are to be compiled as properly tailΓ‚-recursive. (GOTO proc arg1 ...) is syntax indicating that proc should here be called tailΓ‚-recursively (assuming the goto form occurs in tail position). Unfortunately I've already had thoughts along the same lines... There's absolutely no technical problem with the compiler deciding that sometimes stack traces should be present, and sometimes not - but I'm not sure I want to burden the user by requiring them to understand possibly bizarre compiler implementation issues in order to work out when this might happen. As regards adding syntax for a new type of call operator, this is more of a starter for me, but I'm less than convinced that I want to rival SL5 for overly complex function application rules. What I really want (and I suspect it's not possible, but I'd love to be proved wrong) is for the compiler / VM to automatically optimize tail calls whilst still preserving stack call information. I'm not sure I want to burden the user by requiring them to understand possibly bizarre compiler implementation issues in order to work out when this might happen. That's fine, but it sounds to me as though you haven't tried very hard to figure out a reasonable balance (I could be wrong). I would argue that the reason you haven't tried hard is because you're not ascribing enough importance to having full support for recursion. However, I think it's perfectly reasonable to decide that you see some problems that you're not interested in trying to resolve, and therefore you will exclude some feature. I think it's less reasonable to then turn around and try to claim that the feature in question is no good anyway, or too complicated to understand, or whatever β€” that's essentially a sour grapes approach, and that's what I'm getting from the Python vs. TCO question. There is an element of truth in that - but as I would have imagined my other posts on this matter would have suggested, it's not really due to sloth (but thanks for the vote of confidence ;)). It's because I don't want to start adding in extra syntax, or teaching the compiler (and therefore the user) some tricky new cases, if there's actually a way of doing this automatically and transparently to the user. Once that possibility is exhausted, then I will turn to considering other more manual solutions - but not before. That seems a perfectly sensible prioritization to me. As to the sour grapes comment, as someone who's reasonably neutral on the subject, I can't help but detect some of that on both sides of the debate! There is an element of truth in that - but as I would have imagined my other posts on this matter would have suggested, it's not really due to sloth (but thanks for the vote of confidence ;)) Don't take it personally, the subject line was too good to pass up. ;) Besides, programming language designers need to be kept on their toes, otherwise next thing you know they're saying that things would be easier if we didn't really have syntax (Lisp, Smalltalk), or that lexical closures aren't important enough to be supported well (old Lisp, recent Python, Java, many others), or that unrestricted recursion isn't particularly important. Hard to believe, I know! Once that possibility is exhausted, then I will turn to considering other more manual solutions - but not before. That seems a perfectly sensible prioritization to me. Maybe. If you're going to talk about prioritization, if you look at some of the functional languages, you'll find that this stack trace business is really not a big deal. The cases where tail calls occur are iterative by nature, and not having a stack trace entry in those circumstances doesn't often matter. BTW, quite closely related to all this, I notice that on your "Why don't we use functional programming languages more?" page, you focus on lazy & pure languages, like Haskell & Clean. If those were the only kind of functional languages, I would agree with the general conclusion that the main benefit of FP was as a kind of test bed for advanced concepts. However, when you take into account eager, impure languages like SML, OCaml, Erlang, and Scheme, your premises no longer hold, and nor does the conclusion that FP is "fundamentally ill-equipped to deal with many practical problems." The upside for you, though, is that for something like this stack trace issue, you can look at what implementations of these other, practical FP languages do. There are plenty of things that could reasonably be pointed to in those languages as being unsuitable, or at least non-ideal, for a more mainstream language; but IME, the TCO stack trace issue isn't one of them. Most of these languages have at least some non-academic users, too, especially Erlang. So a reasonable question in terms of prioritization is whether or why you need to break new ground in this area. If you actively want to, more power to you, but I think you can get away without it (back to the sloth thing). As to the sour grapes comment, as someone who's reasonably neutral on the subject, I can't help but detect some of that on both sides of the debate! Neither of the things you mention are costs. The first is fairly obvious, and is evident in the forced rewriting of the Fibonacci function: many functions have their most natural form without tail calls. So, proper handling of tail calls is essential to solve problems A, B, and C, but it doesn't impact the solution of problem D. Is that a cost? I can use the same argument: many functions have their most natural form as recursive functions, rather than using iteration and mutable state variables. Is that a good argument against the while loop? Is it a cost of including while loops in a language? The second cost associated with tail calls is...only really evident in languages which give decent stack trace error reports Yes, but a stack trace for an exception thrown in a while loop does not tell you what iteration of the loop the exception occurred in. Instead, you do something like inspect state variables in a debugger to figure out what's going on. Tail recursive functions are exactly the same. One merely inspects the values of those annoying extra arguments. Qute respectfully (honest), I don't think this is really a cost of proper tail call handling. Unless it's also a "cost" and a good argument for removing loops for languages, that is. If you don't make some tough decisions about what to leave out, you'll end up with a big language. But guaranteed tail-call optimization allows you to leave so much out that it should be no brainer, if small languages are your aesthetic. It seems to me that what those Python experts who resist the further elaboration of the language are afraid of is asking beginners to learn and reflect. It's not that we want to limit beginners, but we do want to give beginners satisfying experiences, and provide a smooth learning curve as they become experts, and make "expert" a truly accessible goal. This is something of a common thread in the Zen of Python. BTW, I left a comment on why I think the seemingly-innocuous feature of tail-call elimination is problematic. Saying that tail-calls optimisation is dangerous because it is hard to predict when it applies strikes me as an argument one could make against GC too: "[X] won't have GC because it is hard to know (indeed, in suitably dynamic languages, it can be impossible to predict at compile-time) when it will be applied, so that small changes can make a constant-space program suddenly explode in memory usage. It also leads to a greater use of constructs such as linked lists, which can be hard to understand, e.g., the sharing of 'next' nodes. In addition, the absence of GC, a small subset of buggy programs will crash instead of being stuck in an infinite loop." What I find particularly interesting with this parallel is that tail-call optimisation is basically the (usually static) garbage collection of call frames. Both are ease-of-use/space -optimisations that allow some functions to automatically use constant space. Yet, I don't see many people complaining that they must understand references to predict their programs' usage of heap memory. Like GC makes the use of certain useful data structures easier, tail-call optimisation allows certain programs to work without reinventing the wheel (e.g., manual trampoline). Moreover, just as programmers who use memory dynamically have to manually manage memory when GC is unavailable, those who need deep recursion may very well end up using a trampoline in the absence of tail-call elimination. In my opinion, trampolines are much harder to work with and understand than tail-call optimisation. Basically, this is the kind of reasoning that I often hear from Pythonistas and yet never fails to amaze me: Feature X is useful, but can be confusing. X is bad even though the confusion only arises when X is used. (Or "Allowing the use of knives makes it possible to cut one's self when eating steak." Well, don't eat steak if you don't understand how to use knives. On second thoughts, this one might not be very well-chosen since I'm nearly vegetarian ;) As for the problem of backtraces, I think there is at least one solution: tail-call optimisation doesn't have to be static, nor does it have to completely erase history. One could envision an interpreter that only GCs the stack frames once it would otherwise reach the maximal recursion depth, and, even when it did GC the stack frames, would still keep the nth topmost ones. Instead of popping up a "Program stack exhausted, proceeding to prompt death." message, the interpreter could offer the option of continuing the execution after releasing all but the nth last tail-calls from the stack. The one good point I can see against tail-call optimisation is that it can be hard to explain. I could argue that people only have to use what subset of the language they understand, but I know that argument doesn't fly very well in the Python community ;) However, I don't see how this is any harder to understand than GC: A stack frame is released when no other operation remains to be done in it -- An element is released when no other element refers to it. (Note the artful avoidance of the word 'continuation') Comparing GC to tail-call elimination isn't really fair. GC is really really useful; any confusions it adds are more than worth it. Tail-call elimination just isn't that valuable. Especially if you collect frames lazily to maintain tracebacks, it's unlikely to even matter, except for code that is deliberately written in a tail recursive fashion, which I don't think is a good style because it's fragile and confusing in an environment that heavily favors iteration (which is what Python will always be). Couldn't we take the arguments against tail-call elimination and make pretty much the same argument against having recursion in general? Novice programmers aren't going to understand recursion better because we've made it worse. They'll start out being impressed that recursion makes their program shorter and more understandable. They'll test it with their small data set and be surprised when their previously working program suddenly dies on a larger data set (most likely at the most inconvient time). In fact, tail-call elimination becomes a feature again, because it might save their bacon in a few instances. And as long as we're throwing away recursion, we should probable get rid of while(), because once you add that feature, we won't be able to know in general if our program ever halts ;-) Couldn't we take the arguments against tail-call elimination and make pretty much the same argument against having recursion in general? I once skimmed a book where recursion got a place next to goto in the paragraph "Unusual control structures", so some people already make this point. I'm an optimist, so it's never worried me to believe that my functions will work, even while I'm writing them. There are already advanced Python idioms that one would have a hard time explaining to a beginner (almost anything to do with metaclasses, for instance). Does the existence of these idioms (and the fact that some people, Philip Eby for instance, are using them) harm the Python community? advanced I think that's the wrong question to ask, really, since it depends on so many unquantifiables (how a majority of people feel, what they find easy or hard to understand or explain). Much better is Dijkstra's: a majority of people Are you quite sure that all those bells and whistles, all those wonderful facilities of your so called powerful programming languages, belong to the solution set rather than the problem set? IMHO it's a pity that many implementations of various languages don't support large stacks, and they die horribly when the stack overflows or put artificially low limits on the number of active function calls. This applies especially to functional languages. A natural definition of append or map in OCaml is non-tail-recursive, and thus doesn't work for very long lists. Fixing this by rewriting the function to not use non-tail recursion usually leads to both uglier code and slower execution for the common cases of short lists, so it's rarely done. The effect is a fragile program which blows up on large data. Well, I can understand two reasons for this: Nevertheless in the compiler of my language Kogut I've implemented checking for stack overflow and resizing the stack when needed, even though stack overflow checking has some overhead (I've chosen portability over efficiency in this case). I feel that forcing programmers to abandon a recursive solution in order to obtain scalability, just because the number of calls would be linear with data size, is unfair. An alternative, ugly but scalable solution of the given problem would just use the same amount of heap instead of stack, so in overall it would not use less memory at all. An additional bemefit is that I can create lots of threads, and they can use as much stack as they need, as long as their total demand for memory fits in memory provided by the systemβ€”no need to partition the address space into threads' stacks in advance. Of course I also perform TCO. Alright, I come out, I'm an ignoramus. Every time I hear about metaprogramming, Lisp macros are automatically mentioned. Which is all good and well. But surely, someone must have tried to add macro to another language, one that would presumably use infix notation? (and not in a godawful way like C) Dylan, Nemerle. Is check the LtU "meta prograaming" category and search the archives. Does the C preprocessor count? Not really, althought it provides some of the functionality. However, C++ templates are quite close to proper metaprograming. Erlang programmers sometimes write metaprogrammey code. Here are a few techniques I've learned: compile:forms compile COMPILE fold These aren't common programming techniques in Erlang and they're not as convenient as what Lisp offers. But they're sometimes very handy. and make a single form of syntactic extension applicable to the entire language. But it's not impossible with non-postfix/prefix languages. With an irregularly or heterogeneously - syntaxed language macros can be made to "work " but I suppose there would need to be many different forms or special cases of macros. You'd need one form for each of the language's syntactic constructs which you wanted to be "macro-able". Thinking out-loud here. I've been playing with Haskell recently, and actually finding myself quite enjoying it. ;-) One of the first things to strike me when I first looked at Haskell was the similarity between type declarations and EBNF grammar notations. I presume this is more than a coincidence? Data constructors convert concrete syntax (values?) into abstract syntax (types) - would that be a correct description of the connection? (Excuse me if my terminology is off here, I need to brush up on this stuff). So, how far could this be taken? Could you have a language where it was legal to write something like: type Set a = "{" a? ("," a)* "}" myset :: Set Int myset = { 1, 2, 3, 4, 8, 12 } That would be very cool, although you'd have to check the grammar formed was consistent - another type system (does Haskell call this a kind-system?), which might be quite complex. Anyway, in this context you could completely specify a language (indeed, many languages?) via the type system, and all of the AST would be available as types. Then, a macro would simply be a function which performed a transformation on the types at compile time. A compiler would then be a special case of a macro which takes a complete program and transforms it to another language (e.g. byte-code, machine-code, etc). You could type the target language in the same system. That would be cool. Is what I have just described coherent? Does a system like this exist anywhere? I keep seeing the term "rewrite system" (or something like that), which seems vaguely connected? Apologies if this is much-covered ground. One of the first things to strike me when I first looked at Haskell was the similarity between type declarations and EBNF grammar notations. I presume this is more than a coincidence? Absolutely! The connection in its most general sense arises from structural recursion, which is one of the most fundamental features of computer science, closely related to its more mathematical cousin structural induction. You're correct that rewrite systems, or more generally, reduction systems, are closely connected to all this. Here's a page which neatly ties some of this together, with ML examples (which are very similar to Haskell at this level). A slightly more in-depth intro to these concepts can be found starting in Chapter 1 of Schmidt's book Denotational Semantics: A Methodology for Language Development (dowloadable PS). There might be better sources, more specific to languages with type systems like Haskell's, but Schmidt's is quite clear & concise. As far your syntax-defining idea goes, have you looked at a Haskell prelude, where all the basic datatypes, operators and their precedence are defined? In some ways, it comes pretty close to what you're talking about. However, it doesn't actually allow you to define the most basic syntax of the language, since as you say, you'd run into the problem of checking the resulting grammar, and at that point it probably starts to make sense to separate the two phases more clearly. A more separated approach is Camlp4, which is a preprocessor layer, rather than something built into the language in the way you describe. Thanks for the links. Looks like I've got some more reading to do. Just after I posted it occurred to me that it's probably better to separate out the concrete syntax definition and the rest of the type system/functionality. When I was completing my undergraduate degree, a friend was working on an editor for Haskell that built up and manipulated an AST as you typed. If you could separate out the concrete syntax then you could provide multiple views onto the same AST fairly easily (even non-textual ones like UML-style diagrams). Or, alternatively: every programmer in a team using a customised syntax to write the same language (where "the same" = structural equivalence). Endless possibilities. Maybe static types aren't so bad, after all! Perl6 is supposed to have macros. See this paper: "Growing Languages with Metamorphic Syntax Macros (2000)" or . Abstract: "From now on, a main goal in designing a language should be to plan for growth." Guy Steele: Growing a Language, OOPSLA'98 invited talk. We present our experiences with a syntax macro language which we claim forms a general abstraction mechanism for growing (domain-specific) extensions of programming languages. Our syntax macro language is designed to guarantee type safety and termination. A concept of metamorphisms allows the arguments of a macro to be inductively defined in a meta level... -- Jens Axel Søgaard And I think continuations could lead to bad things. There are wrong paths on the road to higher-level programming. With regard to error handling, programming took the exceptions path. IMO we would have been better off on the continuations path. the exceptions path...the continuations path There are other ways of handling failure besides these two - the Maybe monad, for instance, or search with backtracking... Maybe It's common in Python to use exceptions for failures of all kinds, e.g. try: someDict[someKey] += 1 except KeyError: someDict[someKey]= 0 instead of if someDict.has_key(someKey): someDict[someKey] += 1 else: someDict[someKey] = 0 There have been times that I've wanted some other exception mechanism in Java (say), or something like convenient multiple return values. Ideally, in those cases I'd just write programs in continuation passing style. Except then I would need proper handling of tail calls, and apparently some would deny me even that. Exceptions as some sort of call/cc+mutation design pattern, or as the direct-style transform of programs with explicit normal and exceptional continuations are fine. Choosing to provide them in a language is probably for the good. Denying me the ability to usefully write CPSed code without requiring some annoying trampoline is bad. From the article: I think Guido has been right to resist macros. Not because they are necessarily wrong, but because we haven't yet figured out how to do them right. I think the issue is we've never explained it right. Looking back at all the good and bad things the Lisp community has formed, one bad thing is they've probably been too aristocratic and never explained things well to a normal, interested audience. People weren't told how useful it is for code to be in datastructures more directly usable than text, just as I am thinking right now in terms of words/sentences/paragraphs, not letters. And that lists may be improved upon, so it is no religion. I finished a particular study today. With nothing better to do at my work I decided to write a basic monadic parsing combinators library in Java (yes, I knew how much pain it would cause me). In Haskell MPC libraries are very easy to write, a couple of hours for a basic set of parsers. Its advanced type system carefully restricts your code to something that is coherent. A simple date parser ends looking like this: do d <- digits 2 char '/' m <- digits 2 char '/' y <- digits 4 return (d, m, y) As the parsers are parametric polymorphic we can use the same infra-structure for parsers returning numbers or tuples. The type system ensured that I couldn't treat a number as an string or vice-versa and the type inference algorithm freed me from having to write redundant information. Doing the same in Java took me two full days of work, instead of a page or two of code I ended with 25 classes and almost 1000 LOC. Most of this time I spent chasing bugs due to type-casting. Where in Haskell I was informed by the compiler of incoherent usage of functions, in Java I spent time chasing the places where I thought an Integer was coming but in reality it was a state transition. The Java compiler almost never complained about my code, of course it was due to respect of my intelligence. After the first day I decided to ignore the monadic parametrization of the parser because things were getting too complicated (i.e. I went from type Parser s m a = P (s -> m a) to type Parser s a = P (s -> Maybe a)) and decided to use just unambiguous grammars. In the end the code compiled and executed without problems. My test bar was green as the morning grass. type Parser s m a = P (s -> m a) type Parser s a = P (s -> Maybe a) But when I tried to use the parser combinators in a real world test (i.e. processing a bunch of bank statement text files and generatin some statistics about them) the JVM gave me a silent message java.lang.StackOverflowError. Suddenly my program failed to work on some files (while working on others), just because the runtime failed to see that some stack frames weren't necessary. java.lang.StackOverflowError In the end of the second day I decided to share the results of this simple experiment about how the constraints imposed by the Haskell compiler could allow me to have higher-order thoughts, where the liberality of the Java compiler gave me nothing but generalities. Also the simple TCO provided by Haskell allowed me to use a solution impossible in Java. You're deliberately fanning the flames, aren't you? :-) Java is tricky because a naive tail-call optimization would break the security model. The JVM checks security by looking at where code on the stack was loaded from (local disk vs. web, etc). The idea is that if something on the stack was loaded from then maybe you shouldn't be deleting any files for it. If you tail-call optimized an untrusted method like: void nasty() { Runtime.getRuntime().exec("halt"); } SecurityManager exec nasty This basic model makes higher-order Java programming seem scary in combination with untrusted code: it only has to arrange for something nasty to happen without its own code being on the stack at the time. If it were Emacs Lisp it would be as easy as: (push 'kill-emacs post-command-hook) at least that's what I've been led to believe. So the restriction should be based on how the class was loaded. Only gets to be a problem if you have multiple class loaders operating simultaneously within the VM and objects talk across that boundary. and without the stack, one can't check for permissions in your caller (or grandcaller, or great-grandcaller, or ...) The answer is so simple and obvious I'm amazed it took so long to find. A tail-recursive machine with stack inspection The answer is so simple and obvious I'm amazed it took so long to find. Specifically, if some action happens as a result of join of two asynchronous calls from different principals, who of them is "guilty" (accountable for the action)? Or is this question meaningless and we should abandon trying to assign permissions to single principals and start supporting permissions to sets of principals (or even to propositions?). I believe a simpler idea to just intersect permissions of these principals is not alwys adequate - consider a door with two locks and two people entrusted with different keys. Oh, should we abandon ACLs completely?.. Capabilities to the rescue? To claim relevance to topic, core join calculus uses something like CPS, so is probably experiencing security issues similar to PLs with TCO. I don't get why you can't just form the union of the permissions when replacing the stack frame? As an excuse for missing the obvious, it's 2 AM ;) The state is nothing but an instrument of oppression of one class by another. Ò€”Friedrich Engels Alternatively, check Java Security Architecture: Overview of Basic Concepts Our model addresses security concerns while simplifying the tasks of programmers and thereby avoiding security pitfalls. Although widely applicable, it is particularly motivated by the characteristics and needs of JVMs and of the CLR: it is largely compatible with the existing stack-based model, but it protects sensitive operations more systematically[...] For some reason, it seems to me that stack-based security is equivalent to history-based one in programs written in CPS... [on edit: ah, the author also thinks so: Conversely, to implement history-based rights on top of stack inspection mechanisms, for any given call, we can (in theory) apply a Ò€œcontinuation-passing-styleÒ€ transform that passes an extra function parameter to be called with the result, upon completion of the callee. Hence, the callee still appears on the stack while its result is used by the callerÒ€ℒs continuation. However, this encoding is not practical, except maybe for a few sensitive interfaces. Java is tricky because a naive tail-call optimization would break the security model. In a slashdot interview of I think 3 years ago, Kent Pitman cited private communication to the effect that some Java engineers had figured out how to adapt the trust model to a/the spashetti stack, and were considering adding tail-call optimisation to Java. I've heard some other Sun connected types saying this couldn't work, but I've seen nothing concrete in this time. Does anyone have better information about this rumour? If Java had a better static type system and referential transparency it only would need to check for calls to IO calling code. The place I blew the stack was referentially transparent, my Parser operated on strings so I worked in the same way a Haskell program works: do contents return x Nothing -> fail "Something bad happened" Another thing a "stricter" compiler could do.
http://lambda-the-ultimate.org/node/472
CC-MAIN-2019-35
en
refinedweb
It’s a nice and positive presentation of the language, although superficial compered to the others they covered: Julia included in Oreilly's "Emerging Programming Languages" report (June 2019) It’s a nice and positive presentation of the language, although superficial compered to the others they covered: Julia is an enormous language You reckon this is true? It seems kind medium-sized to me–though I have to admit, the apparent eagerness of the core devs to add syntax is the only significant source of trepidation I have about the language. I don’t feel like it’s enormous yet, but it does seem like it could get that way if a more restrained approach to adding features isn’t taken in the future. I am not quite sure what you mean here, can you give a few examples? I feel kind of the opposite: extra syntax is usually added after careful consideration. Eg #24990. I think it’s just perspective. Someone who doesn’t do technical programming for a living looks at Julia and goes "man, they have a lot of weird extra syntax for math people like \, mul! vs *, @., Tridiagonal, etc., while omitting the things people are used to seeing in a basic language implementation like a web server (yup, quite a few languages have on in there) or graphics engine (usually much bigger than the entirety of Julia). But when you see something you don’t expect, you go β€œthey add a lot of peculiar things”. It is possible that I understand syntax in the narrow technical sense (syntax is what the parser eats). For me Tridiagonal, mul!, and * are not syntax; @. is. Probably one example at I wrote a guide about Object Orientation and Polymorphism in Julia. opinions wanted! I understand * and \ as syntax, but not the rest, they are just names. Actually, even those are just operators, not sure if they count as syntax. The @ itself is syntax. The goal of this report seems ambitious but IMO it does not have enough details for any language. I think it needs to go just a little bit deeper to be useful. I’m pleased to see Julia gets covered though. Well, it’s encouraging to see that they are being careful about it. It would be more encouraging to see some examples of extra syntax being rejected after careful consideration–though not necessarily this specific case. I’m biased towards anything that helps with pipelines. For me, the canonical example is the for loops. for x in xs vs. for x ∈ xs vs. for x=xs. I realize that the the first two are alternate ways to write the in operator (because every languages needs two of those, right?), but the for x=xs definitely qualifies as additional syntax. Then you’ve got comprehensions syntax vs. the usual functors (map, filter and friends) vs. dot broadcasting. I like all of those things, but they are all sight variations on the same concept. import vs. using Splatting with ... vs. array concatination with ;. I realize these are quite different… but then they are kind of the same when defining a new array, only you’re supposed to use ; if it’s an array you’re β€œsplatting” into your new array, but not necessarily a lazy iterable. I do admit that I find the syntax for array literals in general to be a bit overwhelming, but I suspect that is because I never use multi-dimensional arrays, and I assume the extra syntax is necessary for those who do, so that’s not one of the ones that bugs me. Yes, and that’s the latest one I’ve discovered. Another related one which no one has been able to explain to me is the difference between Array{T,1} where T and Array{T,1} where {T}. If they are the same, why are they both supported? (I don’t know if they are the same, but nobody knew a difference when I asked) I will say that generics in gnereal are one area where Julia definitely hits the jackpot in terms of giving a lot of expressiveness with very little syntax. I give all explicitely marked macros a pass. @ is syntax, but every little micro DSL I just file away under that single feature. Places where the languages transforms something with macros implicitly, that’s syntax. I’m actually very pleased that most of Julia’s concurrency features are wrapped up in explicit macros rather than new keywords, since you can see how all of it works with @macroexpand. Of these features, \ is the only one I find a little unnecessary, but I’m generally willing to ignore β€œmath stuff” I don’t use because I assume it’s there for a reasons and I never really have to deal with it. The thing I might object to a wee tiny bit is all the functions that have some alias to a unicode operator, but I try to suspend my misgivings about this because I can imagine its helpful to be able to express oneself in code the same way one would in a paper–on the other hand, I know a physicist who does technical computing for a living and is much harder on Julia’s novel math syntax than I am, but I’ll leave that discussion to the domain experts. In the end, It’s not any one bit of syntactic sugar that’s too much, it’s just the number of cases where there are multiple ways to the same or nearly the same thing reminds me a bit of Perl. I don’t think Julia is an enormous language… I just… don’t want it too be one, either! It was more just when I read this sentence in the report that I was like, β€œoh, maybe I’m not just paranoid.” They are probably including macros or something, or maybe all the functions in the default namespace (which is admittedly way more than most languages, but that doesn’t bother me. A function is a function.) They are the same. However the second one is more compact when writing of more complex types, for example SArray{S, T, L} where S where T where L can be shortened to SArray{S, T, L} where {S, T, L} I do feel the same. I think the difference is clear once you know it: using Mif you want all symbols exported from M import Mif you only want Mto be in your namespace using M: f, gif you only want to call functions import M: f, gif you want to extend functions (without fully qualifying them like M.f) FYI you can also expand native syntaxes using Meta.@lower. For example, julia> Meta.@lower x[y] .= f.(z) :($(Expr(:thunk, CodeInfo( 1 ─ %1 = (Base.dotview)(x, y) β”‚ %2 = (Base.broadcasted)(f, z) β”‚ %3 = (Base.materialize!)(%1, %2) └── return %3 )))) Once I realized that many syntax magics happen at lowering phase, it felt like Julia is more minimalistic (compared to the impression before realizing it). It shortens to SArray{S, T, L} where {L, T, S}. It is still equivalent which is my point. julia> (SArray{S,T,L} where {S, T ,L}) == (SArray{S,T,L} where {L, T, S}) true Equivalent in some context (where the order of S,T,L does not matter), different in other (where their order does matter). SArray{S, T, L} where {S, T, L} is the same as SArray{S, T, L} where L where T where S. While for printing, === or directly observing the fields they are different but from a type standpoint in Julia they are the same. julia> (SArray{S,T,L} where {S <: Int, T <: Real ,L <: Complex}) <: (SArray{S,T,L} where {L <: Complex, T <: Real, S <: Int}) true julia> (SArray{S,T,L} where {L <: Complex, T <: Real, S <: Int}) <: (SArray{S,T,L} where {S <: Int, T <: Real ,L <: Complex}) true julia> (SArray{S,T,L} where {L <: Complex, T <: Real, S <: Int}) == (SArray{S,T,L} where {S <: Int, T <: Real ,L <: Complex}) true If Julia took order into account while determining specificity in type dispatch, like Common Lisp, it may have mattered that the expression after the where was in a different order but Julia does not. Are you following discussions here, and issues and PRs on Github? At this stage of the language, many (I would say most) proposals for syntax changes are actually rejected. Not really. They have an intersection, but they are different. Both have their uses (and similarly list comprehensions). I hope you realize that ... has uses outside creating arrays β€” in fact, it has nothing to do with arrays per se. Your point about = and in in loops is valid. @tkf has explained the difference between import and using. While Julia is of course not perfect, I am wondering if your impression about the β€œeagerness to add syntax” stems from a superficial understanding of some elements of the language. As far as I can tell (despite some syntax examples) you seem to be more concerned about the number of names defined in Base than actual syntax. The good news on that front is that there has been a concerted effort to slim down Base, and move things out. So much so, that I frequently see people clamoring to bring things back or put new stuff in. (I totally agree on for i = ..., btw. I really dislike that particular Matlabism.) I looked it up recently, so these differences are still in my mind, I just don’t get why there needs to be two keywords for this. I personally would rather have fewer semantic options. i.e. one option for all exported symbols (not extensible) and one for importing just the module name. Module.function aways required for extension. This is very cool, thanks! Good to know, thanks! I’m glad to hear it. It seems like most of the proposals I end up hearing about get through, but, as you say, I don’t follow the all the proposal requests closely. It’s probably mostly the ones that have some support from the core devs that trickle down to me. yes. I guess it’s possible. From my perspective–which may be skewed, as you point out–I just see a lot of syntax for similar kinds of things. I realize that dot broadcasting, map’n’filter and comprehensions are different (well, comprehensions are basically same as Base.Filter + Base.Generator, but broadcasting is a little different), but with subtly different semantics. I tend to gravitate towards languages that are intentional about restricting themselves to a smaller number of syntactic forms, and I’m not sure Julia is that kind of language. I do realize that most syntax features are just sugar for normal functions and macros, but that doesn’t make them… not syntax. What gives you that impression? I find it convenient to have a lot of stuff in Base. I hope they at least leave the functions for dealing with files and the filesystem in! That’s (maybe) my favorite thing about Julia! I also like having easy access the the functions for running and communicating with external processes, though I could understand if that were moved out of Base. (but I guess it won’t be, since there is literal syntax for commands in the language, and it would be silly to have to import a bunch of functions to be able to use this language feature.) I think the main problem with the current situation is not that map, broadcasting, and list comprehensions are too similar (and thus redundant syntax for the same thing), but that they are quite different, and a new user may have a difficult time understanding the difference and, more importantly, picking when to use the right one. Pitfalls like julia> map(+, 1, 1:3) 1-element Array{Int64,1}: 2 julia> 1 .+ (1:3) 2:4 can be surprising since eg map in particular can’t be accused of being overdocumented. I would of course make a PR but I am not sure I know all the corner cases. The differences are scattered in comments like these. Your position is a curious mixture of being a purist when it comes to language design (avoiding constructs you consider redundant) and a kitchen sink approach for β€œbuilt-in” functions. People may find it convenient to have stuff in Base, but at the same inconvenient to develop stuff in Base, or even the standard librares. As long as their release cycle and versioning is coupled to Base, changes are going to be very slow (2-3 times a year, compared to packages which can get new features with a complete deprecation cycle in a matter of weeks), and consequently people tend to be more conservative about what goes into Base and the standard libraries. I wonder if people arguing for β€œbatteries included” realize that this means that they get stuck with one battery type for longer time, when with a more modular approach they would already have the shiny new batteries with 2x the capacity, a mascot playing a percussion instrument, and a raygun (* while stocks last).
https://discourse.julialang.org/t/julia-included-in-oreillys-emerging-programming-languages-report-june-2019/25562
CC-MAIN-2019-35
en
refinedweb
Return data to an etcd server or cluster python-etcd In order to return to an etcd server, a profile should be created in the master configuration file: my_etcd_config: etcd.host: 127.0.0.1 etcd.port: 2379 It is technically possible to configure etcd without using a profile, but this is not considered to be a best practice, especially when multiple etcd servers or clusters are available. etcd.host: 127.0.0.1 etcd.port: 2379 Additionally, two more options must be specified in the top-level configuration in order to use the etcd returner: etcd.returner: my_etcd_config etcd.returner_root: /salt/return The etcd.returner option specifies which configuration profile to use. The etcd.returner_root option specifies the path inside etcd to use as the root of the returner system. Once the etcd options are configured, the returner may be used: CLI Example: salt '*' test.ping --return etcd A username and password can be set: etcd.username: larry # Optional; requires etcd.password to be set etcd.password: 123pass # Optional; requires etcd.username to be set Authentication with username and password, currently requires the master branch of python-etcd. You may also specify different roles for read and write operations. First, create the profiles as specified above. Then add: etcd.returner_read_profile: my_etcd_read etcd.returner_write_profile: my_etcd_write The etcd returner has the following schema underneath the path set in the profile: The job key contains the jid of each job that has been returned. Underneath this job are two special keys. One of them is ".load.p" which contains information about the job when it was created. The other key is ".lock.p" which is responsible for whether the job is still valid or it is scheduled to be cleaned up. The contents if ".lock.p" contains the modifiedIndex of the of the ".load.p" key and when configured via the "etcd.ttl" or "keep_jobs" will have the ttl applied to it. When this file is expired via the ttl or explicitly removed by the administrator, the job will then be scheduled for removal. This key is essentially a namespace for all of the events (packages) that are submitted to the returner. When an event is received, the package for the event is written under this key using the "tag" parameter for its path. The modifiedIndex for this key is then cached as the event id although the event id can actually be arbitrary since the "index" key contains the real modifiedIndex of the package key. Underneath the minion.job key is a list of minions ids. Each minion id contains the jid of the last job that was returned by the minion. This key is used to support the external job cache feature of Salt. Underneath this key is a list of all of the events that were received by the returner. As mentioned before, each event is identified by the modifiedIndex of the key containing the event package. Underneath each event, there are three sub-keys. These are the "index" key, the "tag" key, and the "lock" key. The "index" key contains the modifiedIndex of the package that was stored under the event key. This is used to determine the original creator for the event's package and is used to keep track of whether the package for the event has been modified by another event (since event tags can be overwritten preserving the semantics of the original etcd returner). The "lock" key is responsible for informing the maintenance service that the event is still in use. If the returner is configured via the "etcd.ttl" or the "keep_jobs" option, this key will have the ttl applied to it. When the "lock" key has expired or is explicitly removed by the administrator, the event and its tag will be scheduled for removal. The createdIndex for the package path is written to this key in case an application wishes to identify the package path by an index. The other key under an event, is the "tag" key. The "tag" key simply contains the path to the package that was registered as the tag attribute for the event. The value of the "index" key corresponds to the modifiedIndex of this particular path. salt.returners.etcd_return. clean_old_jobs()ΒΆ Called in the master's event loop every loop_interval. Removes any jobs, and returns that are older than the etcd.ttl option (seconds), or the keep_jobs option (hours). salt.returners.etcd_return. event_return(events)ΒΆ Return event to etcd server Requires that configuration enabled via 'event_return' option in master config. salt.returners.etcd_return. get_fun(fun)ΒΆ Return a dict containing the last function called for all the minions that have called a function. salt.returners.etcd_return. get_jid(jid)ΒΆ Return the information returned when the specified job id was executed. salt.returners.etcd_return. get_jids()ΒΆ Return a list of all job ids that have returned something. salt.returners.etcd_return. get_jids_filter(count, filter_find_job=True)ΒΆ Return a list of all job ids :param int count: show not more than the count of most recent jobs :param bool filter_find_jobs: filter out 'saltutil.find_job' jobs salt.returners.etcd_return. get_load(jid)ΒΆ Return the load data that marks a specified jid. salt.returners.etcd_return. get_minions()ΒΆ Return a list of all minions that have returned something. salt.returners.etcd_return. prep_jid(nocache=False, passed_jid=None)ΒΆ Do any work necessary to prepare a JID, including sending a custom id. salt.returners.etcd_return. returner(ret)ΒΆ Return data to an etcd profile. salt.returners.etcd_return. save_load(jid, load, minions=None)ΒΆ Save the load to the specified jid.
https://docs.saltstack.com/en/develop/ref/returners/all/salt.returners.etcd_return.html
CC-MAIN-2019-35
en
refinedweb
ALM Rangers - Version Control in the TFS Client Object Model By Jeff Bramwell | January 2013 This article is a follow-up to β€œUsing the Team Foundation Server Client Object Model,” written by members of the Visual Studio ALM Rangers in the August 2012 issue (msdn.microsoft.com/magazine/jj553516). So far, we’ve introduced the Team Foundation Server (TFS) client object model, and now I’ll introduce the version control APIs. To recap, the ALM Rangers are a group of experts who promote collaboration among the Visual Studio product group, Microsoft Services and the Microsoft Most Valuable Professional (MVP) community by addressing missing functionality, removing adoption blockers and publishing best practices and guidance based on real-world experiences. If someone were to ask you to explain what TFS is, chances are you’d mention version control within the first few sentences. Although version control does play an important role within TFS, you can see in Figure 1 that there’s much more to TFS. As with many features within TFS, the version control subsystem is accessible via the TFS object model. This accessibility provides you with an extensibility model that you can leverage within your own custom tools and processes. Figure 1 Team Foundation Server Features Assemblies and Namespaces Before you can access the functionality provided within the TFS object model, you must first understand the required assemblies and namespaces. You’ll recall the first article used the namespace Microsoft.TeamFoundation.Client. This namespace contains the classes and methods necessary for connecting to a TFS configuration server, and it’s located within the identically named assembly. This namespace is central to all TFS object model-related development. When working with version control, we must also utilize the namespace Microsoft.TeamFoundation.VersionControl.Client. This namespace contains the classes necessary for interacting with the TFS version control system. Utilizing the APIs within this namespace allows you to access files and folders, pending changes, merges, branches, and so on. The VersionControlServer class within this namespace is the main class that provides access to the TFS version control repository. A Simple Example to Start The VersionControlServer class exposes many properties, methods and events for interacting with version control within TFS. I’ll start with a simple example: retrieving the latest changeset ID. The three basic steps required to interact with most of the APIs exposed by the TFS object model are: - Connect to a TFS configuration server. - Obtain a reference to the TFS service you plan to utilize. - Make use of the various properties, methods and events provided by the service. Taking a slightly different approach to connecting to TFS, as opposed to the examples presented in the August article, I’m going to connect to TFS using the TeamProjectPicker class. The TeamProjectPicker class displays a standard dialog for connecting to TFS servers. This class is not only useful for full-featured applications but is also very handy for simple utilities where you might need to switch among multiple instances of TFS. Create a new instance of TeamProjectPicker and display it using the ShowDialog method: private TfsTeamProjectCollection _tpc; using (var picker = new TeamProjectPicker(TeamProjectPickerMode.NoProject, false)) { if (picker.ShowDialog()== DialogResult.OK) { _tpc = picker.SelectedTeamProjectCollection; } } This code will display a dialog similar to that shown in Figure 2. Figure 2 The TeamProjectPicker Dialog Clicking Connect will return an instance of TfsTeamProjectCollection representing the selected Team Project Collection (TPC). If you prefer to use a more programmatic approach (that is, no user interaction) to connect to TFS, refer back to the August article for further examples. Once you’ve obtained a reference to a TfsTeamProjectCollection, it can be used to obtain an instance of the VersionControlServer service: var vcs = _tpc.GetService<VersionControlServer>(); Once you have a reference to the service you can make use of the methods exposed by the service: var latestId = vcs.GetLatestChangesetId(); This is a simple example, but it does demonstrate the basic steps for interacting with the version control system in TFS. However, few applications are this simple. β€œGetting Latest” A common scenario related to version control is obtaining the latest source code from the repositoryβ€”that is, β€œgetting latest.” While working within Visual Studio, you typically get the latest source code by right-clicking a file or folder within the Source Control Explorer (SCE) and selecting Get Latest Version. For this to work properly, you must also have a mapped workspace selected. When downloading the latest source code from the server, the selected workspace determines where it will be stored. Follow these steps to programmatically obtain the latest source code: - Connect to a TFS configuration server. - Obtain a reference to the version control service. - Utilize an existing workspace or create a new, temporary workspace. - Map the workspace to a local folder. - Download the desired files from the workspace. Building on the previous example, add the code shown in Figure 3. // Create a temporary workspace var workspace = vcs.CreateWorkspace(Guid.NewGuid().ToString(), _tpc.AuthorizedIdentity.UniqueName, "Temporary workspace for file retrieval"); // For this workspace, map a server folder to a local folder workspace.Map("$/Demo/TFS_VC_API", @"C:\Dev\Test"); // Create an ItemSpec to determine which files and folders are retrieved // Retrieve everything under the server folder var fileRequest = new GetRequest( new ItemSpec("$/Demo/TFS_VC_API", RecursionType.Full), VersionSpec.Latest); // Get latest var results = workspace.Get(fileRequest, GetOptions.GetAll | GetOptions.Overwrite); If a workspace already exists and you’d like to use it, replace lines 1-4 in Figure 3 with the following: // Get a reference to an existing workspace, // in this case, "DEMO_Workspace" var workspace = vcs.GetWorkspace("DEMO_Workspace", _tpc.AuthorizedIdentity.UniqueName); Note that if you don’t know the name of the workspace or don’t want to specify it, you can call GetWorkspace (see preceding code sample), passing in only a local path. This will return the workspace mapped to the local path. You don’t need to map the workspace programmatically, so you can also remove lines 5 and 6. Identifying Files and Folders for Download As you might expect, several of the APIs provided by TFS allow you to query the version control server for specific items as well as specific versions of items. In the previous example, when I created a new instance of GetRequest, I had to provide an instance of ItemSpec. An ItemSpec, short for item specification, describes a set of files or folders. These items can exist on your local machine or in the version control server. In this specific example, I’m building an ItemSpec to return all files within the server folder β€œ$/Demo/TFS_VC_API.” The second parameter in the ItemSpec constructor used here specifies RecursionType, which can be None, Full or OneLevel. This value determines how many levels deep an API should consider when querying items. Specifying a RecursionType of OneLevel will query or return items from only the topmost level (relative to the ItemSpec). A value of Full will query or return items from the top-most level as well as all levels below (again, relative to the ItemSpec). Whereas an ItemSpec determines which items to consider based on name and location when querying the version control system, a VersionSpec, short for version specification, provides the ability to limit item sets based on version. VersionSpec is an abstract class so it can’t be instantiated directly. TFS provides several implementations of VersionSpec that you can make use of when querying the version control system. Figure 4 lists the various implementations of VersionSpec provided out of the box with TFS 2012. Figure 4 VersionSpec Types Going back to the previous example of creating a GetRequest, I specified VersionSpec.Latest as my version specification. VersionSpec.Latest is simply a reference to a singleton instance of LatestVersionSpec provided just for convenience. To retrieve code based on a specific label, for example, create an instance of LabelVersionSpec: var fileRequest = new GetRequest( new ItemSpec("$/Demo/TFS_VC_API", RecursionType.Full), new LabelVersionSpec("MyLabel")); Checking out Code Now that you know how to identify and retrieve specific items from the version control server, let’s look at how you can check out source code. In TFS terms, to check out an item is to pend an edit on that item. To pend an edit on items within a particular workspace, you call the Workspace.PendEdit method. The PendEdit method has nine overloads, all of which require a path or an array of paths as well as a few other optional parameters. One of the optional parameters is RecursionType, which works exactly as previously described for ItemSpec. For example, to check out all C# (.cs) files, make this call: // This example assumes we have obtained a reference // to an existing workspace by following the previous examples var results = workspace.PendEdit("$/Demo/TFS_VC_API/*.cs", RecursionType.Full); In this example, I’m requesting that TFS pend edits on all C# files (via the *.cs wildcard) beneath the server folder β€œ$/Demo/TFS_VC_API.” Because I’m specifying a RecursionType of Full, I’ll check out C# files in all folders beneath the specified path. The specific method signature used in this example will also download the checked-out files to the local path as mapped by the specified workspace. You can use one of the overloaded versions of this method that accepts an argument of type PendChangesOptions and specify PendChangesOption.Silent to suppress the downloading of files when pending edits. The value returned in results contains a count of items downloaded because of the call to PendEdit. Edits aren’t the only action that you can pend within the version control system. There are also methods for pending: - Adds via PendAdd - Branches via PendBranch - - Properties via PendPropertyName - Renames via PendRename - Undeletes via PendUndelete For example, the following code pends a new branch, named Dev, from the folder Main: // This example assumes we have obtained a reference // to an existing workspace by following the previous examples var results = workspace.PendBranch("$/Demo/TFS_VC_API/Main", "$/Demo/TFS_VC_API/Dev", VersionSpec.Latest); We’ll cover branching and merging using the APIs in more detail in a future article. Checking in Changes Once you’ve made changes to one or more of the checked-out files, you can check them back in via the Workspace.CheckIn method. Before you call the CheckIn method, however, you must first obtain a list of pending changes for the workspace by calling Workspace.GetPendingChanges. If you don’t specify any parameters for the GetPendingChanges method, you’ll get back all pending changes for the workspace. Otherwise, you can make use of one of the other 11 overloads and filter the list of pending changes returned by the call to TFS. The following example will check in all pending changes for the workspace: // This example assumes we have obtained a reference // to an existing workspace by following the previous examples var pendingChanges = workspace.GetPendingChanges(); var results = workspace.CheckIn(pendingChanges, "My check in."); In the first line of code, I’m getting a list of all pending changes for the workspace. In the second line, I check everything back in to the version control server specifying a comment to be associated with the changeset. The value returned in results contains a count of the items checked in. If there’s one or more pending changes and results comes back as zero, then no differences were found in the pending items between the server and the client. Pending edits aren’t the only changes checked in to the version control system. You also check in: - Additions - Branches - Deletions/Undeletes - Properties - Renames You can also undo pending changes by calling the Workspace.Undo method. As with the CheckIn method, you must also specify which pending changes you want to undo. The following example will undo all pending changes for the workspace: var pendingChanges = workspace.GetPendingChanges(); var results = workspace.Undo(pendingChanges); Retrieving History A common task within Team Explorer is viewing the history of one or more files or folders. You might find the need to do this programmatically as well. As you might expect, there’s an API for querying history. In fact, the method I’m going to discuss is available from the VersionControlServer instance (as represented by the variable vcs in the previous examples). The method is VersionControlServer.QueryHistory, and it has eight overloads. This method provides the capability of querying the version control server in many different ways, depending on the types and values of the parameters passed into the method call. Figure 5 shows what the history view for the file Form1.cs might look like within the SCE. Figure 5 History for Form1.cs You can replicate this functionality programmatically using the code shown in Figure 6. var vcs = _tpc.GetService<VersionControlServer>(); var results = vcs.QueryHistory( "$/Demo/TFS_VC_API/Form1.cs", // The item (file) to query history for VersionSpec.Latest, // We want to query the latest version 0, // We're not interested in the Deletion ID RecursionType.Full, // Recurse all folders null, // Specify null to query for all users new ChangesetVersionSpec(1), // Starting version is the 1st changeset // in TFS VersionSpec.Latest, // Ending version is the latest version // in TFS int.MaxValue, // Maximum number of entries to return true, // Include changes false); // Slot mode if (results != null) { foreach (var changeset in (IEnumerable<Changeset>)results) { if (changeset.Changes.Length > 0) { foreach (var change in changeset.Changes) { ResultsTextBox.Text += string.Format(" {0}\t{1}\t{2}\t{3}\t{4}\t{5}\r\n", change.Item.ChangesetId, change.ChangeType, changeset.CommitterDisplayName, change.Item.CheckinDate, change.Item.ServerItem, changeset.Comment); } } } } Pay special attention to the argument on line 13 in Figure 6. I’m specifying a value of true for the parameter includeChanges. If you specify false, then specific changes for the version history won’t be included in the returned results and the Figure 6 example won’t display the details. You can still display basic changeset history without returning the changes, but some detail will be unavailable. Running the Figure 6example produces the results shown in Figure 7. Figure 7 History from API There are many other variations of calling the QueryHistory API. For ideas on how you might make use of this method, simply play around with the history features in the SCE. You can also query much more than history. For example, there are other query-related methods provided by the VersionControlServer, such as: - QueryBranchObjectOwnership - QueryBranchObjects - QueryLabels - QueryMergeRelationships - QueryMerges - QueryMergesExtended - QueryMergesWithDetails - QueryPendingSets - QueryRootBranchObjects - QueryShelvedChanges - QueryShelvesets - QueryWorkspaces There’s a lot of information available from the version control server, and the various query methods provide you with a window into that information. Next Steps The Rangers have only begun to touch on the version control features exposed by the TFS object model. The features covered in this article can be useful and powerful, but myriad features are exposed by the TFS version control server that we haven’t covered. Some examples of these features include branching and merging, shelvesets, labels and version control events. With this series of articles, we hope to expose many of these APIs and provide you with the knowledge you need to make better use of the TFS object model. Stay tuned for more.. Thanks to the following technical experts for reviewing this article: Brian Blackman, Mike Fourie and Willy-Peter Schaub
https://msdn.microsoft.com/ru-ru/library/jj883959
CC-MAIN-2019-35
en
refinedweb
Distributed_0<< Once you’ve added this to your project, you’ll need to make a couple minor changes to your Startup class in Startup.cs. First, in ConfigureServices, add a couple namespaces:: Caching strings You can use the GetString and SetString methods to retrieve/set a string value in the cache. This would appear in the β€œcachebucket” bucket as an encoded binary value (not JSON). In the sample code, I simply print out the ViewData["Message"] in the Razor view. It should look something like this: Caching objects You can also use Set<> and Get<> methods to save and retrieve objects in the cache. I created a very simple POCO (Plain Old CLR Object) to demonstrate: Next, in the sample, I generate a random string to use as a cache key, and a random generated instance of MyPoco. First, I store them in the cache using the Set<> method: Then, I print out the key to the Razor view: Next, I can use this key to look up the value in Couchbase: ). In the sample project, I’ve also set this to print out to Razor. If you view that document (before the 10 seconds runs out) in Couchbase Console, you’ll see that it has an expiration value in its metadata. Here’s an example:.
https://blog.couchbase.com/distributed-caching-aspnet-couchbase/
CC-MAIN-2019-35
en
refinedweb
pipeline Easily transform any stream into a queue of middleware to treat each object in the stream. Usage First, let's look at the middleware. class PrependLineNumber implements Middleware<String> { int _number = 0; Future<String> pipe(String line) async { _number++; return '$_number: $line'; } Future close() async { // Tear down method // Silly example _number = -1; } } The Middleware interface contains Future<T> pipe(T item) and Future close(). The return value of the pipe method are sent to the next middleware in the pipeline. The return value can be either a value or a Future. The pipeline will wait before it sends it through to the next middleware. If the return value is null or Future<null> the item will be dropped from the pipeline and will never reach the next middleware. This is useful for buffering data up to a specific point and then releasing through to the next middleware. This is a middleware that accepts data from a file stream, but only passes forward every line as it is processed. class ReadLine implements Middleware<int> { String buffer = ''; Future<String> pipe(int unit) async { String character = new String.fromCharCode(unit); // If the character isn't a newline, remove this item from the pipeline if (character != '\n') { buffer += character; return null; } String line = buffer; buffer = ''; return line; } Future close() async {} } Pipeline To actually use these middleware, we need a stream of char codes. In this case, we fake it a bit to prove a point. Anyway, we can either create a Pipeline object with the char stream, or we can pipe the stream to a pipeline object. The pipeline itself is a stream, so we can return the pipeline and allow other parts of the program to listen to it. Future<Pipeline<String>> everyLineNumbered(File file) async { Stream<List<int>> stream = new Stream.fromIterable(await file.readAsBytes()); Pipeline<String> pipeline = new Pipeline(middleware: [ new ReadLine(), new PrependLineNumber(), ]); stream.pipe(pipeline); return pipeline; } In this case, it might be nice to refactor into the Pipeline.fromStream constructor, like so: Pipeline<String> everyLineNumbered(File file) async => new Pipeline.fromStream( new Stream.fromIterable(await file.readAsBytes()), middleware: [ new ReadLine(), new PrependLine(), ] ); Use with HttpServer A good use case for the pipeline is when you're setting up an HttpServer. That could look something like this: import 'dart:io'; import 'package:pipeline/pipeline.dart'; main() async { HttpServer server = await HttpServer.bind('localhost', 1337); Pipeline<HttpRequest> pipeline = new Pipeline.fromStream(server, middleware: [ new CsrfVerifier(), // Middleware<HttpRequest> that protects against CSRF by comparing some tokens. new HttpHandler(), // A handler that writes to the response object ]); await for(HttpRequest request in pipeline) { // Every response should be closed in the end request.response.close(); } } TODO - Write tests
https://www.dartdocs.org/documentation/pipeline/1.0.4/index.html
CC-MAIN-2017-26
en
refinedweb
using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication9 { class LFC { public static void Main(string[] args) { int num1 = 10, num2 = 20, max, lcm; max = (num1>num2) ? num1 : num2; for(i=max; ; i+=max) { if(i%num1==0 && i%num2==0) { lcm = i; break; } } Console.Write("\nLCM of {0} and {1} = {2}\n\n", num1, num2, lcm); } } } Output LCM of 10 and 20 is 20
https://letsfindcourse.com/csharp-coding-questions/csharp-program-to-find-lcm-of-two-number
CC-MAIN-2022-27
en
refinedweb
Since I started write my first block of code, I heard from many developers that commenting is useless and it's a type of apology to write a bad code, but after working in a big projects with a big team, the only thing I can say is: Not commenting your code is narcisist and excludes beginners, by the way, who said your code is so good and obvious as you think it is? Yeah, your mind. During your work, probably you faced a function that you asked yourself: "-What the hell is even that?". Many of us experienced this situation, even with our own code after some weeks far from the project. Now, imagine if instead of you have to waste your time, searching through the houndreds of files for the right function you need to put hands on, you could have just comment your code telling the function purpose, its params and what it should return. Life could be a dream, right? Also, we cannot asume the everybody think like us and we are being so obvious. People have different ways to analyze things, we have people that are less experienced or even have mental health conditions like, anxiety and ADDH, and that makes the process of understand some pieces of code even harder. Should we just exclude them because we can't use one single minute to comment our complexity? I think we shouldn't. The question is not about if you should comment your code or not, but is what you need to comment in your code and how it should be done. Write a clean and easily readable code is unegotiable, and you get better on this with experience, but you can also write clean and good comments, so it can be used as a reference for you and the others, and it do not make you a bad programmer, on the contrary!, It makes you a better professional,your code will be easily maintainable, plus, you're ensuring that no matter the level of who enters on your team, they'll get it faster and start working on the project, and if you have to leave your job, the devs that comes after you will be grateful and thanks you everyday before they going to bed. (okay, I'm not so sure about this last part). β€œPrograms must be written for people to read and only incidentally for machines to execute.” - Hal Abelson - MIT Professor. Recommended reads: Best practices for writing code comments What's the best way to document JavaScript? Discussion (106) It's never 'useless', but it can be overkill. If code is written well (good variable & function names, clear logic), then it should be fairly obvious from reading it what it does. In cases where the logic is a little hard to follow then some comments can be very helpful. It can be a tricky balance - you don't want absolutely no comments ever, but at the same time, commenting absolutely everything just to try and cater for every possible skill level is also not a good idea. If code is written well, and is uncommented - then it is probably the assumption of the team (as presumably the code has passed code review) that it is already understandable enough. If a new developer comes to this code and does not understand it, the best solution would be to consult a team member who does, and get them to explain it. This will have the dual benefit of increasing the junior developer's understanding, and making the team aware that there may be an issue with the code being too impenetrable in places. And that's just it: Everybody thinks their code is well written. But most of the time it isn't near as good as the writer thinks it is. When I look at code that I wrote just 6 months ago I can see that it's not as good as the code I write today. I'm always learning improving. If they can't write clean and readable code, why would you think that they can write clear and understandable comments? I wish people wrote clean and readable code. Most of the code I read is crap. I must admit I don't have much experience reading comments. Sadly, most devs aren't very good. Just like most doctors, lawyers, bricklayers, actors, barbers, politicians. Most people are mediocre at what they do. By definition, actually. Mediocre means average. Average isn't very good, usually. But if a person can't even write decent code, I find it unlikely that they are going to write understandable comments in a natural language (much harder -- ask any writer), or, more importantly, that they are going to be diligent enough to keep that comment in sync with the code. And my experience -- closing in on three decades -- bears that out. If you think code is bad, read the comments. They are almost always awful. What's needed, really, is not comments but good code reviews by talented leaders who ensure that code is clean and readable before it goes into production. Which would also help to teach coders to write readable code in the first place. But that might take time, right? We never have time to do it right. We only have time to write comments that we wouldn't have needed if we'd done it right, and then pay a bigger penalty down the road when the comments and code are incompatible and everything is a mess. Ever seen any code like that? You made that up. Nobody is that bad. I did get a chuckle out of it. It was a joke. But it's not as far off as you might think. :-) I consider comments as documentation. If a programmer does not document their code... I don't have energy to finish that sentence. Comments are a shit way to document code. The best way is in the code itself. So if your code is self-documenting, then you have documented your code. The only excuse for comments is that you had to do something in the code that you can't figure out how to make clear without a comment. Maybe it's a workaround. Maybe you're just not that good. That's why we have teams -- so they can show you how to write better code. In short, comments are generally where mediocre coders document their failure to write understandable code. It's either that, or the comments are redundant and probably just get out of sync. So one could say that the more you comment your code, the more you're willing to admit that you don't write very good code. And if that's the case, then I guess comments are better than nothing. But why not learn to be a better coder or take up a different profession? But hey, black and white condemnations like yours are all the rage these days. Maybe see a doctor, though, about your anemia? It is always surprising how many devs confuse their personal preferences and pet peeves as scientific arguments and absolute judgements. And then boast about it. Apologies. Sir, I do not condemn self documenting code. Most of the code I write is self documenting. We agree that bad code is bad code. You make a solid argument that comments don't improve bad code. I generally use comments in a couple of ways. One is when I am writing a function or method, I write the steps out in plain English as comments before I implement them. Then I remove redundant comments. Some comments I leave because some things are nuanced and not obvious. The second way is explaining, usually to my future self, why I did something a particular way, not what the code is doing, because well, I'm not as smart as I think I am. Even then, most of those comments are from my future self to my future future self to save time the next time I have to modify it. Third, is explaining somebody else's code for instance, where they used a variable or parameter named "id" where id could be the column of one of 6 tables, and refactoring is not an option because it's spaghetti code in a huge legacy codebase. I'm interested in your work flow. You're a teacher. I'm a student. How do you do it? Sorry, I misread your last comment as saying that all code had to be commented. I rarely comment my code. Instead I: function returnFalse() { return false } this, typically in callback functions (anywhere you might use const self = this) any typenot interface-- interfaces are mutable unknownin production code Also: index.ts(x)or mod.ts(x) import not from "~utilities/not" notfunction and use that import typefor type imports, it's cleaner appsfolder utilitiesfolder (for generic utilities such as not, pipe, identity) servicesfolder for any bespoke services (i.e., not in an npm or deno module) such as authentication modulesfolder for any reusable modules (e.g., generic components, icons) import doIt from "~apps/TicTacToe/utilities/doIt"rather than import doIt from "../../utilities/doIt" In the appsfolder, I have "micro-apps". These are standalone apps. Everything bespoke to that app is in that folder. Everything outside of that folder is generic and reusable. So, for example, I may have a services/useGraphQLfolder that provides a hook for using GraphQL, but it takes a config and returns queryand mutationfunctions. So the actual URL, query, variables, etc. are provided where they are used. None of this is hard coded in the useGraphQLhook. (And I don't bother with Apollo -- a simple POST request returning JSON works fine.) Inside the appsfolder, I might have a micro-app called, I dunno, TicTacToe. The hierarchy of that folder and its subcomponents would follow the hierarchy of the components, for example: The benefit of this is that: .tsxending tells me it is using JSX (I am forced to use React usually, though I prefer SolidJS or even plain vanilla TS -- deno gives me JSX for free) <TicTacToe />) elsewhere in the app I do not care how short a file is. Why would I? For some reason, many devs fear files. I don't get it. Why would I make a 1000-line file full of named exports when I could make a well organized folder tree with maybe 20 files, each of which contains a single function? If I need to view multiple functions at once, I can open them in side-by-side tabs. Here is an example of the notfunction I mentioned, from production code: That is the entire file! Three lines. Here is a somewhat longer one: That replaces an entire dependency (on "classnames")! You can see pretty easily, I think, that it takes an object with CSS class names as the keys and booleans as the values, and then includes only those that are true, concatenating them into a space-separated string. And if that's not enough to be clear, then in the very same folder is the test: Other than a few polyfills for Intland Temporal, fetch, and uuid, my app uses only React, XState, and, sadly, Auth0.js (not my choice). I write my own code, including utility functions, and reuse it from app to app, improving it as I get better (and the language improves). So yes, I write my own pipefunction, and often map, filter, find, reduceand more (wrapping the JS methods where appropriate). That means that I know practically my whole code base (an argument for vanilla TS). It means that to the greatest extent possible, no one else has code in my code base, which means better security, better reliability, etc. It means that when things break, I know where they broke and why they broke, and I can fix them rather than waiting for the owners to get around to fixing them and releasing a patch. It means that I am highly motivated to keep my code base simple, not to bulk it up with unnecessary and bloated dependencies. It means that most of my files can be seen in full without scrolling. And if my code needs further documentation, I try to put it in those README.md files right in the folder (README means GitHub will automatically display them). That's just a start, but I hope it answers your question at least a little. Lots of devs violently disagree with one or another of the above, and I have done all of these things differently over the years, but this is the methodology that has stood the test of time. I'm sure it can be improved still further, and significant changes in language, framework, or library might make adaptations necessary, but I can say that of the many people I've taught this too, none have gone back to their old ways. It's simple, and it works. YMMV. Good answer. That's more than I expected. It will take me some time to digest it all. I struggle keeping my functions short. 50 - 100 lines is not unusual for me. Three line functions always make me second guess myself, "Should I inline this"? Earlier, I was thinking to myself "I bet he uses readme files". Your use of utilities is fascinating. I have some functions that I seem to redefine over and over in different projects. This a good way to organize then and not redefine them. I've used the folder/index.* naming convention in one project. It confuses me a bit and make my tabs too wide when I get several files open at once. The temporal polyfill is interesting. I like writing vanilla js for much the same reasons you use vanilla TS. Good Night @chasm Just wanted to say that I agree with almost every statement. I have made very similar experiences during my time as a software developer and have come to very similar conclusions. Most of the things are in line with Uncle Bob's Clean Code. Thank you for the detailed elaboration! +1 I couldn't have said it better myself thanks I've never in my years of education been told (except from that student who doesn't want to comment their code) that commenting/documenting code is "useless". Oh my! Documenting your code teaches beginners what you're doing. Not documenting at all excludes them because....how are they supposed to understand if you don't explain it in plain english? I bet the developer himself/herself won't even remember what the program he or she wrote does five or six months down the line without documentation. Who is it harming? True productivity, team work, and efficiency? Or just their ego and arrogance? If you keep naming your classes, methods, parameters and variables consistently and their names express their purpose, your code without comments is easy to understand even for beginners. Sometimes comments are necessary to describe and point to code that does not work in an expected way, and you have implemented strange looking workaround. For example some libraries, used in your project may have side-effects and wrong behavior and to overcome these problems, you have to write strange-looking code. And in this case comment helps to understand what is going on. I totally agree on this. I write and read mostly deep learning Python code. Therefore, I understand the pain when reading undocumented inputs. Documenting these code not only teaches the beginners SOTA but also save time of running the code again just to determine what is the correct shape of a Tensor. But if you have beginners in a project, you can not rely a decision on this, which affects how the whole project and code is build. Again, just to provide an example, you would need to maintain the comments, if you do not, these will rather harm the process of understanding. It is possibly a better approach to on-board and support beginners well. Allow them to contact and ask you whenever they need help. Be their mentor and teach them to work with the code like an experienced developer, do not create "code for the beginners" with comments all over the place. Yep fully agree. It isn't harming anyone, but more so creating an inclusion to new people and beginners. My golden rules: Add a summary at the top of a function about what it does. This way I do not have to read and mentally parse your functions code to understand what it does. This does not apply to very simple functions where the function name can describe everything like removeLastCharacter(). But calculateSingularityProbability()might win by some description. Add comments to reduce mental work If some line(s) are really complicated, add a short comment above describing what it does. Add comments to explain hidden knowledge Why are you doing array.pop() twice here without any appearant reason? Well, because I do know that the array always contains two empty entries at the end, which we don't want. If you write the code, you have that knowledge at hand. Your team member might not. And you, looking at that code in 2 month wont remember either. I have to say, I disagree. calculateSingularityProbability()is already a pretty good summary, isn't it? It shouldn't be necessary to describe formally what the function does exactly because, well, that's exactly what code is: a formal description of behavior. If you write code that you think is hard to read and needs a comment then why don't you change the code instead of adding a comment? This is like creating a product with a crappy UX and then writing a descriptive manual instead of fixing the UX. Why don't you wrap the double array.pop()in a function named removeEmptyArraysAtTheEnd()? Shorter functions, single responsibility maintained and description inside the function title. No risk of "changing the function but forgetting to change the comment". In my opinion, writing comments is the last resort and should almost never be done. Instead, keep functions very short (10 -30 LOC) and parameter count low. I recommend reading Uncle Bob's "Clean Code". I prefer reading one line of comment in human language than having to read 10 - 30 lines of machine language and parse it in my head to figure out whats going on. I read that book. Also many others. πŸ‘ Don't you prefer to read one function title that describes on a high level in human readable language what's inside the function? Basically the same that would be in the one comment line? We went full circle to my initial comment. Yes, there are simple function where all they do fits into the function name. Since functions are composed from other functions no matter how much you break things up, you will end up with something more complex. And the case will be even worse since I now have to scan all over the place nd go down rabbit holes to understand whats going on inside. I also mentioned that its depending on the cases. You cannot generalize the topic. I understand what you're saying. I would argue, though, that only because you add a layer of abstraction to something, it doesn't mean that you need to understand every detail of the layer below to understand the abstraction. I would even say that's the purpose of abstraction. So when you compose 5 functions in a new function, you don't need to read the code of the 5 child functions if they have a descriptive name. If I wrap five HTTP requests in a repository, I don't need to understand the HTTP request logic to refactor the repository. I can stay in this layer of abstraction because I separated all the logic into smaller pieces. I would argue that if a function does more than fits in the function name, the function does too much. If it's only one responsibility, it can be usually described in a short title. But we may have made different experiences and we will possibly not come together and agree here and this is fine :). Agreed big, and so few developers know that I always hear those words and I can't argue. Best line ever: "Not commenting on your code is narcissist and excludes beginners, btw." "Who said your code is as good and obvious as you think it is? Yeah, your mind." Yes Yes Yes a lot of developers say it and yes it's only good in your mind even the seniors will struggle with undocumented code. Although I do see your point, and I myself tend to "over comment" my own code, there is a valid argument to the code being self documented. Interestingly, with a "meta programming" language, such as our Hyperlambda, the code literally is self documented, due to the abilities of the programming language to extract meta data from the code, being able to intelligently understand what each snippet of code does, to the point where you can write stuff that's logically similar to the following (pseudo code). Of course the above is pseudo code, but still a perfectly example of something easily achieved with a "meta programming language", resulting in that your "comments" literally becomes its code, and information about what the code does can be dynamically extracted using automated processes, allowing you to easily understand everything your code actually does, without having a single comment in your code base. Still, kind of out of fear from possibly being wrong, I tend to document my code (too much) ... :/ I love self-documenting "What does the code do" but I have yet to see self-documenting "Why does it do it?" "Why" is a question that has nothing to do with the implementation. It has something to do with the requirements. These come from the stake holders. They should document the requirements somewhere else, not in the code. If the code is the single source of truth for the requirements of your software, then you're doing something wrong. "Why did I choose to use this mapping method? Because in exploring the options this was the most performant." That's never something you'll see in requirements somewhere and is ideally situated near the code you wrote. If it's really a matter of performance, then I agree. However, in today's web applications, performance on such a low level is almost never a concern. From what I've learnt, it almost always boils down to IO in a loop or other nested loops with cubic complexity. Apart from that: readability > performance. Unless you're working in game industry or doing other low level stuff. I've spent a long-time in open source. The one thing that invariably remains…the source code (and it's git history). All other things rot and decay far faster. So, include whatever comments can help provide contextual support of the local "state" of the code. In addition, one must consider that placing way finding comments in a code-base are of high value. There have been a few cases, where past decisions/specs were lost but we still had code. I don't want to go and "backfill" those specs, so I'll write a note saying "This is my understanding given what I've been able to piece together." Okay, maybe I am viewing it too much from a business perspective. It seems like there are a lot more specs from a non-technical view there. This somehow eliminates the need for specs inside code. If you use comments as way to communicate with other developers working on the same code base and have no real communication channel outside of that, then I can better understand the necessity! We're both looking at the same "elephant" but from different perspectives. The code is the most accurate representation of the product. It says exactly what the product is. There will invariably be cases where the intention of the code will be hidden in a private Slack channel, a lost email, or even an unrecorded Zoom meeting. The code is the product and provides the most reliable place to "pin" a way finding comment/annotation. Specs inside code are…treacherous. A URL in the code to that spec? Gold! Good point, but I was speaking of the ability to runtime extract semantic data about what the code does, not reading the code itself ... But you've got a very good point ... I intend to write a full post about this sometime, but here are my thoughts in summary. Self-documenting code doesn't exist because the purpose of documentation is different from what clean code gives you. Cleanly written code makes it trivial to understand how the code functions β€” nothing surprises you, the code isn't hard to follow or constructed of spaghetti, chunks of it fit neatly in your memory and it doesn't require you to go back-and-forth too often. You understand both the implementation and the abstraction quickly and cleanly. It exposes all of the how and most of the what, but what it doesn't necessarily do is explain all of the why. Sure, well-written clean code with properly named functions and properties etc. can help expose the why, but it still requires you to do several iterative readings in any sizeable codebase to grasp the original business intent β€” namely, why does this code exist at all? What purpose does it serve? That's where documentation steps in. Documentation should expose as much of the why as possible, and some of the what, without focusing at all on the how, since how it is implemented is quite literally implementation detail, and is subject to change even without the original business intent changing. The why changes very rarely, and also requires the least comprehensive documentation (the type of documentation that Agile tries to avoid) which is quick to read and grasp. In my experience, pretty much every programmer I've met who has clamored for "don't write documentation, write self-documenting code" has parroted this statement because they didn't want to spend the time it takes to write documentation in the first place, not because they genuinely believe trawling through the code trying to tenuously grasp the intent of its writing is better than reading a short document about it. I've always found this to be a pretty egotist attitude. "My code is so good you shouldn't need help to understand it". I've learnt there's a balance in commenting. There are really 3 use cases: And what is not egoist about thinking that a comment is so good that every reader should get what the corresponding code does? If you know by yourself that your code might be hard to read, then why don't you refactor it instead of adding an explanation? Reminds me of products with manuals that nobody reads. Always asked myself why they don't make the product of intuitive usage instead of writing a manual. Apple was the first big company to understand this. Yes, but that's not so much because I didn't comment the code, as that I didn't write clean, readable code in the first place. If the bit of my wetware that needed to light up at the time to tell me to comment it had actually been doing its job I'd have written it better anyway! Mostly, I'm on your side in this. I like commenting things. I like having a standard doc comment at the top of every function, even if it's "obvious". However, I've had colleagues who don't, and their arguments are usually something like, "now you have to update two things", i.e. whenever you make a code change you need to make sure the comments and documentation match, and it's way too easy to forget. In fact, how many times have you seen the same comment repeated because someone's copied a component from one file to another to use as a kind of boilerplate, even though it has an entirely new purpose now? I guess that it is necessary some context, but to make it short: better this I'm so happy to see so rich discussion here in my article! Thank you so much devs. I don't think leaving out comments excludes beginners. Quite the opposite. Writing too much waste into your code sets a bad example. And I've seen more redundant comments than useful ones in codebases I worked on (including my own). My favourite to this day is: Does this look beginner friendly to you? If you do want to comment stuff, please write proper JavaDoc / JSDoc / whatever-Doc. That's what it's there for. @desc, @property/s and @returns. And if you want to go bonkers, at least be so kind and do so in your automated test suites. You can even use @seein your production code base. And everybody wins. Great article! Writing comments about what your code does can be helpful. But, I've learned that writing too many comments can be excessive. The line I always take with students I mentor is that I don't need a comment to tell me what the code is doing, because I can read the code just fine, and most of that can be encoded in variable and function names, like frobnicate_the_input_array()and input_array_to_frobnicate. I need comments to tell me why it's doing that, and particularly why you're not doing it a different way. "But requirements and statements of purpose don't belong in the code! They should be in other requirements documents." As a developer, I have ready access to the code I'm working on, not the requirements docs, and I especially don't have anything to connect frobnicate_the_input_arrayto a requirements doc saying that the input array needs to be frobnicated, or to a later decision saying that it needs to be frobbed in reverse order. That's what I need a comment for. This is a straw man argument. I don't know a single developer who never writes a comment. The argument is over whether you should comment everything, or just when a comment is needed, which is inversely proportional to the amount of clean, readable, and easily understandable code you write. The argument is not that you should never comment your code. If you have to make up a straw man argument, then your credibility as an authority is undermined. "There are two hard problems in computing: cache invalidation, and naming things". Naming things is hard, which sometimes leads to comments. For the longest time I was a proponent of comments, and it is what was drilled into me growing up. I never heard about them being "useless" or that they should be avoided until the mid 2010s. Now I try to be pragmatic about them: and always consider "does this comment bring value?" "If so, could the code be changed to make the comment unnecessary?" My reasoning about this, including some pet peeves and "uselessness" is in one of my old blog posts. The counter argument (use docstrings / doc-comments if your language has them) is in another. Meta:- I should clean these up a bit and post on DEV... Guy who write the code usually don't have enough distance (and often not enough skill where, documenting is an animal in its own right!) to document the same. I encourage devs to ask questions, and document code they come across when they don't get it (hopefully, after they figure it out). This means that, the meandering effort you refer, of figuring what it really does... Somebody, in my experience, have to do this at least once. Most, "horse did, horse mouth documented" code I come across - the docs in there paraphrase the code - dead weight which I promptly delete in a separate commit (sneering is not productive, removing dead weight is) Good comments often signal bad code. Guy suddenly realized some danger with their code, wrote a comment and moved on cause no time to fix. Which is perfectly fine. They save somebody else the trouble of falling into a trap, and boost the confidence of the time-endowed person who are going to fix the code, and delete the signpost. I'd much rather a // bad codecomment than nothing - because I respect my co-workers. If something looks twisted and isn't labeled bad/clumsy I'll assume its crooked for a reason, whereas sometimes it's not. Part of being an API designer (and much code isn't "API" just grease) is having a knack for roleplaying as a beginner, and adopt the TLDR mentality of somebody who need to understand something, but are lacking the time and dedication to do verbose let alone code. Not all code that needs documenting is bad, but writing code that people just get without much effort (and if possible reading signatures NOT the code) is a good smell. Ok so there are like 100 comments that I honestly won't read (I was eager to read the first 10 but got tired at 5). What whas that? It's honesty. We all should practice it when coding, because we know what we are doing at the time of coding whatever but we should be honest and think that "My future me will love to see comments on this code". "Yeah but there are functions that are SOLID and don't need comments because blah blah" I had the exact same conversation with a colleague the other day so let's mimic it: Is it straightforward, isn't it? But how do you know what it returns? "But Joel, it returns a user, A USER!" Nope, it returns a Promise that hopefully will retrieve a user (as long as it exists in the DB). And me and you, and the rest of the team will love having this information without the need of Ctrl+Click over the function name to see what is doing behind the scenes. So when you are about to use this function you get: Here it is! Inference! Beloved inference. Moreover if you use TS pragma at the top of your JS file it will use TS when it's important, in dev time. Without adding TS as project dependency plus without the extra bundle time of compiling/transpiling TS into JS. It will use JSDoc to type-check your functions and variables. VSCode will handle it out of the box. So if you try to do something like passing a string it will correctly complain like that: Look how cool it looks like! It looks even cooler in VSCode as it changes the color for the JSDoc when properly written: Sooner or later I receive either a "you are right" or a "you were right" about that. I had been in enough projects to say that this is a must. I've been ignoring this, because I'm tired of the "conventional wisdom (or a straw-man version) says X, but I oppose that" genre of article. However, there are a couple of points to make. First, as I hinted, nobody really says not to document your code. No (serious) programming language lacks a way to write comments, and several try to shift the focus to the comments, so-called literate programming. That said, comments are almost universally an admission of failure---the "I don't actually know how this works, so don't touch it" style---or vapid descriptions that have nothing to offer beyond repeating what the code does...or, rather, they repeat what the code did when it was first written, but haven't received an update since then. How much time in your career has reading a comment saved you? The problem is that "comment your code" implies the latter, inserting comments that follow the code. "This assigns the current total to the totalvariable." "Loop over the array indices." "Increment iby one." Not only are those comments useless, but they make maintenance more difficult, because someone needs to always double-check to make sure that the comments reflect the code...but you already have the code. Rather, you want to comment the project, from within the code. For some examples. If, instead of covering that kind of ground, your comment explains the syntax, though, then you should rewrite the code instead of writing the comment. If the comment explains what the variable name means, rename the variable, instead. Those comment the code, and no developer should be forced to read them... There are certainly scenarios where writing comments can be useful, though I would also argue that you should not use comments as a cruch. Anywhere your logic is complicated by all means write a comment, but if your logic is not complicated you should focus on giving the write name to your function. For example. But comments don't make much sense in the following example First preference in my opinion should be to name things properly, use the single responsibility principle as much as possible and then if logic is still complicated then add comment. It's a balance. If creating code for demos/illustrative purposes, definitely need more comments. Otherwise, think 2x before putting long comments. Usually, I will add a comment if I found that I screwed up more than a couple of times b/c I forgot the same thing. That's a good spot to put a warning comment or something like that. Also, add comments when there is a better approach, like a refactor, but we are not doing now b/c of time shortage, for instance. Like a: // TODO. You can even do comments that kind of assign for someone else like: // TODO{john.smith}They can search for their name, etc. Probably can connect to some type of workflow that will automatically generate assigned issues using the comment in GitHub. I always argue for documentation comments, and against random comments. *Doc, if it's written well is always welcome. One line comments before a function call of the same name are generally terrible! First, if you start using comments to explain your code, you will have to maintain these as well. Guess what, this likely will not happen everytime, because people need to always keep in mind to do this whenever something in code changes. Second, code can be self-explaining without the need to look for a certain function. Usually the naming and how you build the whole project helps a lot and maybe sometimes it is necessary to see, what other functions are called, too. But if you cannot understand it without looking for many other places in the project, maybe something different is wrong and the issue is not missing comments. These would rather patch the issues, not solve them. I am not asking for to not comment, but usually there are only a few special cases, which actually need comments and this can be very specific to the technologies chosen. Is it just me or feels the "mental health" and "inclusion" part again just following a trend? I would not say it is wrong (I am totally with you in sense of, that not everybody is the same and we need to respect and support each other), but I do not see, how the commenting part has to be necessarily connected to this. You can have high quality comments and that helps a lot navigating codebases. There's usually a lot of context behind the decisions for some routing and for code that has to be read a lot (99% of it) they help. For enterprise apps that have been touched by lots of people (most of them already gone), comments are valuable. I've found important warnings/explanations and it's saved me hours that are spent in better ways. Comments are async communication and expand context beyond the mere computer instructions. Write comments if you need "external" context to understand "what the heck" this piece of code is doing - by "external context" I mean knowledge/info that can't be derived from that piece of code (function, method, class, module) itself. I do not agree in some parts, your code is changes, the comments often not. once you found several times a comment which lied to you, you start to ignore them. then you have often autognerated code for your linter and pipelines and setter etc which often is also teaches you "just ignore it". if your code has many stuffs which is just boilerplate or self explained you start to ignore even more the comments. so if you have code parts which is ignored by others, then people will do not take care of it since noone relies on it. if you have a code which usually has no comments but then you find here and there one line with a comment which describes why it is there, then this is very usefull. you need sometimes comments, describing a regular expression or a method which do nothing or something weird but in general for most cases your code should be readable and understable enough. i changed in my editor the comment color to red or even pink. and if i see too many of this color which is useless, i just delete it because it hurts to look at it. and if i find a comment which is required, the bright pink color reminds me to read it because it is an important part there and if you leave your job, make sure have ENOUGH, unit, implementation and e2e tests (also more unit tests than impl. and more than e2e follow the pyramide) which runs FAST and can tell you RELIBALE that your project or at least the critical parts of it are running without failure. I have seen code with too many comments. Something like five lines of code having 5 lines of comments each, which really breaks the flow of reading code for such a simple function. That being said, I think it's important to describe the purpose of each block of code with some standard format. It also helps some IDEs provide you with infirmatiomnon mouse over, which is incredibly helpful. So that being said, I try (and usually fail) to use my comments like so: I use comments in the code very sparingly and only to tell the story about the reasoning behind the code if that should not be obvious even to a beginner. I'm very liberal with elaborate JSDoc blocks above my API interfaces, though. Too many comments are not just useless, they're detrimental to legibility, and are a typical feature of poorly designed code As a rule of thumb: Feeling completely lost in someone else's code is not at all correlated with amount of comments, it's associated with poorly designed code There are only two disadvantages to writing comments. First, and least important, is the column inches it may take up. Annoying, at best. Decent IDEs will collapse such a comment if asked. Second, however, is that, like code, comments can become stale and therefore misleading or unhelpful. This is very serious. Comments must be updated as part of the code they describe should that code ever change. Because comments are code. Not writing and maintaining comments is narcissistic and brazenly hostile to a) others who read your code trying to figure out what you meant to do and b) yourself down the road a few years should you have to do the same thing as (a). By the way, not writing unit tests is also unforgivable and an "armed assault" on others just as is (a) above. Most of my comments are for me 6 months from now when I will be asked to debug or modify the code again, and I surely will ask myself "what the heck was I thinking when I wrote this"? It happens to me all the time. I'm dealing with 20 years worth of legacy code - some methods have 20 - 30 optional parameters and almost no comments anywhere - it takes hours to make even minor changes because the person who wrote it thought it was so obvious what their code does. And then on top of that, single and 2 letter variables names all over the place... Refactoring that mess is a nightmare of technical debt. If you don't comment your code you are some combination of lazy, narcissist, and noob or someone who just thinks that they are smarter than everybody else, and you may well be. But the people that wrote the mess I'm fixing weren't that smart - but they thought they were. Sometimes people say comments are "distracting". But I've always made the syntax highlighter make them light grey so I can tune them out visually if I want to focus just on the code. That being said.. I've been through enough nightmares to know the value of good comments. I'd definitely say any comment is better than none. But in my experience code very rarely has comments (when not written by my team). I believe there needs to be a balance. It's hard for me to believe that one can achieve the highest standards of readability in a production-level application with no comments. If written well, code will explain everything related to how stuff is happening. But based on my experience, there's always a need to document "why". And code often fails to achieve that without comments. I think it's a quite known fact that real-world apps need to sometimes take paths that aren't intuitive, you would insert if-else in your function possibly destroying the single responsibility principle, because of business needs. A comment explaining the purpose of what your code is doing is always welcomed. Here's a talk by Sarah Drasner on Code comments, can check out if interested. First please don't promote wrong things! I honestly didnt know where to start arguing so i won't and I will give you another perspective. Being in a real world project with multiple teams or devs working on it is complex enough. If for whatever function or line you need to comment might as well change jobs. This is not scalable at all. We have all sorts of tools at are disposal. Typescript, unit tests, automation tests, documentation of features (not blocks of code) function complexity tools. Commenting means one of the two things when code isn't self explanatory that you are doing too many things or you aren't giving correct variable, class, function names so things needs to be explained. Again this isn't ok. This needs to be corrected. Leaving a comment will solve the situation at that moment. What about when someone refactors that piece of code. The chances of the comments being updated accordingly are slim to none When you read code you need to understand it. Having comments negates the purpose of readable code and allows windows of badly written code because "well it was difficult so i have added a comment". but then another dev comes to enhance the functionality and will either read the comment and understand nothing because most likely it's an essay which makes no sense so he starts reading the code and again makes no sense of it so he starts debugging line by line till he understands it. If you can write good comments and bad code then ask someone to help you write in a way that's readable. If your content is short and still have to write it, seriously look at your code and find out why you need to comment it. Something so wrong. Sit and think plan or what you want. Finally and most importantly unit tests are your documentation of your class or module. If you don't understand the code in place read the unit tests which should have good description of functionality not a comment which will be deprecated in the first iteration. Don't use comments as an excuse to write tens or even hundreds of lines code cause the only thing that you are adding to is complexity for the next developer to have to read through and try and change which will most likely lead to a refactoring. If what whatever reason you meant documentation in your topic this is completely different and i couldn't agree more with you but i don't think you do. Please don't promote bad practices cause if others adopt what you suggest repos will double in size :p Good practices and conventions are what allow self-documenting code. Not your mind. I wouldn't advise a newbie to avoid comments. Naturally someone who starts out doesn't know anything about architecture. There is already a structure and pattern for everything we want to do. After thinking and experimenting with this a lot, I'm convinced the rules are simple. Inline comments only answer "why". What, where, who and how should be represented in the code by correct encapsulation and naming. Comment external interfaces so they show in the editor. For example, a function should have the parameters commented, a class needs the constructor commented. This comment should never contain the name of the variable/parameter. Comment after a bug fix, to avoid falling in the same error again. That's it. Excessive commenting is lies waiting to happen. I think that people missed a point in Uncle Bob's book about this topic. He did say that writing comments is an excuse for bad code, but he did mention somewhere that it's OK to add a comment if a certain function or block wouldn't be obvious to others if for example there is no way to make it better. The function name should convey its purpose. Same for the params. When I see a docblock above a function it usually means one of 2 things. 1. The comment is entirely redundant or 2. The function doesnt have single responsibility. Except in config files I've never seen a comment that told me more than the code itself. The inverse however is true. I've seen a lot of outdated comments that no longer convey the purpose of the code, or even comments that flat out lied, because they were copy pasted from somewhere else and not updated properly. IMHO docblocks make devs lazy. Focus on writing good clean code (there are books about that if you're interested) don't waste everyone's time writing comments ;p One problem with the commenting code I see is that when you write "what" code does then you are gonna use commenting as excuse to write bad code. Like "Hey, that code smells", "Ya, but I have comment which explains all". And you don't feel the need to write better code. Well... I don't agree! I respect your opinion and I believe that if is a team standard you should follow that. But in my opinion, if you write clean code you don't need to write comments. In fact, it makes the code unreadable. Of course, comments are allowed when you think the code it's not selfexplanatory and writing a comment is a better solution to explain some code. I usually add comments when I fix a bug and there is no way that my code can explain why that code is added. So I only add a comment with a link reference to the issue. Or sometimes when I add a utility function that needs some small comments and why the code is written in that way. You have to keep in mind, that writing comments, means that you have to maintain them too! And that means, whenever you change the code, you have to update the comment too which is tricky if you write many comments because usually code have dependencies on other parts of the code and then you will end-up maintaining a comment that is written elsewhere which might have no relation to the code you changed. Another reason is that adding many comments makes the code verbose and sometimes you will think that removing all the comments makes it easier to follow the flow of the code and/or read the code. I have worked on a project which was full of comments and was the worst experience ever for me. Personally, I always try to avoid writing comments, and if I see that a component is way too complex, I prefer writing a "component.readme" file explaining the complex topics that need to be explained for that component. Another important part of not having comments, is Git. If I fail to understand something in a file, I simply look at the histofy of that file. Many times it has helped me understand what I was looking for. I think if you have to make comments to say what your code is doing, then it probably wasn’t written to be readable. Comments are good as long as they aren’t unnecessary comments like commented out code. Generally I don’t like having comments at all, because they sort of keep the code looking messy. Good variable names indicate what concepts are involved. Clean code can document the execution model (what the code is and how it functions) but code itself can not explain "why" a certain routine had been chosen in favour of others. From an engineering perspective the "why" is important to reason about the code on a higher level than just the few lines in front of the eyes. From my experience with reading many many open source project code I can only say: the more comments explain what's going on, including well written jsDoc, the better it was to create a PR to that code. For me, the necessity to add comments to code is a smell. If you realize, your code might be hard to understand, then why don't you refactor the code instead of adding a manual? And what makes you think that the manual is understandable if you're incapable of writing understandable code? Do you think Apple would ever give out a bloated manual for their products? Or do they just create products that are intuitive? Also, I don't agree that code is more maintainable if it's commented. What if there are changes to the code and/or the comment and suddenly the code and the comment have contradictory statements. Which do you trust - the code or the comment. And when people realize something like this happens, they stop trusting the comments inside the code and it becomes a mess all over the place. There is a reason why we can name classes, functions and variable. Because they are supposed to be descriptive. Why don't we use this opportunity to write self-documenting code instead of relying on a meta layer like comments? I've never heard in a decade of software that comments are useless. They're the LAST step though. Identifiers and types should provide as much information as possible. Then comments only when there are gaps. Comments should generally say WHY not just what you're doing. IE I had to add a bug fix that wasn't merged into an older version of jQuery that we were self-hosting. I had a big giant comment of What I did and WHY I did it. I've had some jobs where they avoided comments more than others. I agree that often one's code isn't as readable as they think it is. Comments of high level logic do add value because people forget and code review doesn't always force great naming. I think a lot of this whole "code should be self-documenting" critique of comments implicitly assumes that comments are used to document WHAT the code does. But I find generally find the most useful comments talk about WHY the code does what it does. Which the code does not do, no matter how good the variable names or the structure is. Especially in enterprise code bases with hundred of thousands of lines och code. It is completely obvious what deleteLastRowInTableIfSecureFlagIsSet() does. But why? If the system is re-architected to no longer use secureFlag, what should happen here? If there is a bug related to the function, what is actually the correct behaviour? If we switch from SQL to a key-value store that doesn't have a "last row" what should happen? So IMO a rule of thumb is that good code comments explain WHY, not WHAT. And in that light this whole debate misses the point. I think people is just taking things literally. The people who told you comments = bad were just parroting what they read online and didn't fully understand. There is a place and time for comments. IMO, they should be the exception and not the rule though, or at least the coding standards should aim for that. It's much better to have a small, self-documenting method, but it's also very damn important to have a good, descriptive, multiline comment in a chunk of complex, messy code. Just use your brain people, there is no silver bullet. An emphasis on self-explanatory code should still be the key. Writing out params is verbose and possibly trivial, unless they are not clearly defined. For Javascript, Typescript is really helpful in achieving this, but it shouldn't come at a cost of adopting poor naming conventions. More likely, code's context and purpose is more useful as a comment, rather than a technical explanation of individual variables. Ultimately, it's up to the team to agree on maintainable code practices, adhere to them, and ensure code readability. More often I have seen comments that led me to spend time wondering what it means and in the end find out I've just wasted my time because it was outdated. Comments are used when it's too complex to read.. then the real problem is that it's too complex and should be simplified. Comments should be rarely seen and explain the "why" it was done this way. Explain why you had write this exotic solution. You have pretty strong points in your article. I noticed popular programming languages such as PHP are evolving on this point, though. For example, named parameters self-document the code. Union types allow skipping verbose PHPDoc annotations. IDE can map such features and auto-complete parameters. Probably more efficient than multiple lines of description. However, nothing is magic, and IT likes catchy slogans and bold statements like "Do not comment your code, it should be self-documentated." I think it's misleading. The right statement would be "write your comments carefully" or "use comments with caution." I really appreciate comments that explain the dev's point of view or specific constrainsts that leaded to particular choices. There are always pros and cons, and it's easy to judge someone else's code without the context. When you run into a function that is hard to read, after you get the job of understanding it, do a refactor in such a way that the next time you come across that function, it's easy to read. No, need to do comments if you invest in code improvement. And code improvment is investment as comment writing is debth. I think comments are valuable, but only if the comments come first. Outline what you plan to code first will give you focus to ensure a better outcome than the other way around. I found when devs comment after, the comments tend to describe what the code does instead of what is the intent or the "big picture". Well, in my opinion, your code should be as readable as any english paragraph. Which, at times, is not achieved, so you can add some not-so-obvious comments for that. But the focus should be writing clean and readable code.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jssantana/do-not-comment-your-code-it-should-be-self-documentated-well-i-dont-agree-2n59
CC-MAIN-2022-27
en
refinedweb
A library for SoftLayer's API Project description This library provides a simple Python client to interact with SoftLayer’s XML-RPC API. A command-line interface is also included and can be used to manage various SoftLayer products and services. Documentation Documentation for the Python client is available at Read the Docs . Additional API documentation can be found on the SoftLayer Development Network: Installation Install via pip: $ pip install softlayer Or you can install from source. Download source and run: $ python setup.py install Another (safer) method of installation is to use the published snap. Snaps are available for any Linux OS running snapd, the service that runs and manage snaps. Snaps are β€œauto-updating” packages and will not disrupt the current versions of libraries and software packages on your Linux-based system. To learn more, please visit: To install the slcli snap: $ sudo snap install slcli (or to get the latest release) $ sudo snap install slcli --edge The most up-to-date version of this library can be found on the SoftLayer GitHub public repositories at. For questions regarding the use of this library please post to Stack Overflow at and your posts with β€œSoftLayer” so our team can easily find your post. To report a bug with this library please create an Issue on github.:. Basic Usage Advanced Usage You can automatically set some parameters via environment variables with by using the SLCLI prefix. For example $ export SLCLI_VERBOSE=3 $ export SLCLI_FORMAT=json $ slcli vs list is equivalent to $ slcli -vvv --format=json vs list Getting Help Bugs and feature requests about this library should have a GitHub issue opened about them. Issues with the Softlayer API itself should be addressed by opening a ticket. Debugging To get the exact API call that this library makes, you can do the following. For the CLI, just use the -vvv option. If you are using the REST endpoint, this will print out a curl command that you can use, if using XML, this will print the minimal python code to make the request without the softlayer library. $ slcli -vvv vs list If you are using the library directly in python, you can do something like this. import SoftLayer import logging class invoices(): def __init__(self): self.client = SoftLayer.Client() debugger = SoftLayer.DebugTransport(self.client.transport) self.client.transport = debugger def main(self): mask = "mask[id]" account = self.client.call('Account', 'getObject', mask=mask); print("AccountID: %s" % account['id']) def debug(self): for call in self.client.transport.get_last_calls(): print(self.client.transport.print_reproduceable(call)) if __name__ == "__main__": main = example() main.main() main.debug() System Requirements - Python 3.5, 3.6, 3.7, 3.8, or 3.9. - A valid SoftLayer API username and key. - A connection to SoftLayer’s private network is required to use our private network API endpoints. Python 2.7 Support As of version 5.8.0 SoftLayer-Python will no longer support python2.7, which is End Of Life as of 2020 . If you cannot install python 3.6+ for some reason, you will need to use a version of softlayer-python <= 5.7.2 Python Packages - prettytable >= 2.0.0 - click >= 7 - requests >= 2.20.0 - prompt_toolkit >= 2 - pygments >= 2.0.0 - urllib3 >= 1.24 This software is Copyright (c) 2016-2021 SoftLayer Technologies, Inc. See the bundled LICENSE file for more information. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/SoftLayer/6.0.2/
CC-MAIN-2022-27
en
refinedweb
A flutter plugin for scanning 2D barcodes and QR codes. It wraps zxing-android-embedded for Android and LBXScan for iOS bool isBeep = true, bool isContinuous = false, int continuousInterval = 1000, // only works when isContinuous is true scan will return a List<String> instead of String continuousIntervalfor both Android and iOS, which works only when isContinuous = true scanmethod will return List<String>instead of String isBeepand isContinuousfor iOS example/README.md Demonstrates how to use the fzxing plugin. For help getting started with Flutter, view our online documentation. Add this to your package's pubspec.yaml file: dependencies: fzxing: ^0.3.0 You can install packages from the command line: with Flutter: $ flutter pub get Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:fzxing/fzxing.dart'; We analyzed this package on Jun 12,.
https://pub.dev/packages/fzxing
CC-MAIN-2019-26
en
refinedweb
Stores a truncated real spherical harmonics representation of an L2-integrable function. More... #include <mitsuba/core/shvector.h> Stores a truncated real spherical harmonics representation of an L2-integrable function. Also provides some other useful functionality, such as evaluation, projection and rotation. The Mathematica equivalent of the basis functions implemented here is: Construct an invalid SH vector. Construct a new SH vector (initialized to zero) Unserialize a SH vector to a binary data stream. Copy constructor. Add a constant value. Set all coefficients to zero. Compute a normalization coefficient. Convolve the SH representation with the supplied kernel. Based on the Funk-Hecke theorem – the kernel must be rotationally symmetric around the Z-axis. Get the energy per band. Evaluate for a direction given in spherical coordinates. Evaluate for a direction given in Cartesian coordinates. Evaluate for a direction given in spherical coordinates. This function is much faster but only works for azimuthally invariant functions Evaluate for a direction given in cartesian coordinates. This function is much faster but only works for azimuthally invariant functions Brute-force search for the minimum value over the sphere. Return the number of stored SH coefficient bands. Check if this function is azumuthally invariant. Compute the relative L2 error. Add a scalar multiple of another vector. Compute the second spherical moment (analytic) Return a normalization coefficient. Normalize so that the represented function becomes a valid distribution. Equality comparison operator. Access coefficient m (in {-l, ..., l}) on band l. Access coefficient m (in {-l, ..., l}) on band l. Scalar multiplication. Scalar multiplication. Component-wise addition. Component-wise addition. Component-wise subtraction. Negation operator. Component-wise subtraction. Scalar division. Scalar division. Assignment. Equality comparison operator. Project the given function onto a SH basis (using a 2D composite Simpson's rule) Recursively computes rotation matrices for each band of SH coefficients. Based on 'Rotation Matrices for Real Spherical Harmonics. Direct Determination by Recursion' by Ivanic and Ruedenberg. The implemented tables follow the notation in 'Spherical Harmonic Lighting: The Gritty Details' by Robin Green. Helper function for rotation() – computes a diagonal block based on the previous level. Serialize a SH vector to a binary data stream. Precomputes normalization coefficients for the first few bands. Free the memory taken up by staticInitialization() Turn into a string representation. Dot product.
http://mitsuba-renderer.org/api/structmitsuba_1_1_s_h_vector.html
CC-MAIN-2019-26
en
refinedweb
The most simple IoC Container of Dart and Flutter If you looking for a package that is light-weight and provide production-ready of inversion of control, then this package is right for you. This package provides only 2 main api, easy to learn and use, but definitely fits most use case in your flutter project. If you are a server developer developing small or medium scale project, it's very likely you want to use this package. However, large scale project may need more powerful heavy-weight IoC library. You can use it in your angular project, but we highly recommend you use dependency injection system provided by angular. Keep it minimal, light-weight bind to a string: import 'package:ioc/ioc.dart'; main() { Ioc().bind('MyClass', (ioc) => new MyClass()); // later Ioc().use('MyClass'); // you will get an instance of MyClass // with generic if you want Ioc().use<MyClass>('MyClass'); } bind to self: import 'package:ioc/ioc.dart'; main() { Ioc().bind(MyClass, (ioc) => new MyClass()); // later Ioc().use(MyClass); // you will get an instance of MyClass } bind to other: import 'package:ioc/ioc.dart'; main() { Ioc().bind(MyClass, (ioc) => new OtherClass()); // later Ioc().use(MyClass); // you will get an instance of OtherClass } bind with other dependency import 'package:ioc/ioc.dart'; main() { Ioc().bind('MyClass', (Ioc ioc) { OtherClass other = ioc.use<OtherClass>('OtherClass'); return new MyClass(other); }); // later Ioc().use<MyClass>('MyClass'); // you will get an instance of OtherClass } using singleton: import 'package:ioc/ioc.dart'; class A { void someMethod() {} } main() { // use singleton on one Ioc().bind('A', (ioc) => new A(), singleton: true); Ioc().use<A>('A').someMethod(); // use singleton on all Ioc().config['singlton'] = true; Ioc().use<A>('A').someMethod(); } using lazy-loading singleton: import 'package:ioc/ioc.dart'; class A { void someMethod() {} } main() { // use lazy loaded singleton on one Ioc().bind('A', (ioc) => new A(), singleton: true, lazy: true); // class A will only be instantiated after first .use('A') Ioc().use<A>('A').someMethod(); // use lazy loaded singleton on all Ioc().config['lazy'] = true; Ioc().use<A>('A').someMethod(); } example/ioc_example.dart import 'package:ioc/ioc.dart'; class A { void someMethod() { } } class B {} main() { Ioc().bind('A', (ioc) => new A()); Ioc().use<A>('A').someMethod(); } Add this to your package's pubspec.yaml file: dependencies: ioc: ^0:ioc/ioc.dart'; We analyzed this package on Jun 4, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using: Detected platforms: Flutter, web, other No platform restriction found in primary library package:ioc/ioc.dart. Fix lib/ioc.dart. (-0.50 points) Analysis of lib/ioc.dart reported 1 hint: line 3 col 1: Prefer using /// for doc comments. The description is too long. (-10 points) Search engines display only the first part of the description. Try to keep the value of the description field in your package's pubspec.yaml file between 60 and 180 characters.
https://pub.dev/packages/ioc
CC-MAIN-2019-26
en
refinedweb
C# / .NET Core Identity with MySQL Eventually, you will want to restrict access to some/all pages at your .NET Core MVC application, so everyone would have to enter their login and password first (authenticate themselves), and then server will decide whether to let them open the page or not (authorize the access). Official manual guides you through the process pretty nicely, however it only tells about setting it up with MS SQL Server. But we, of course, would like to use MySQL for that. I tried to use MySQL with .NET Core Identity before, but something was really wrong with its support back then, and now it actually works (more or less). Why use Identity at all? Of course, you can create your own username/password authentication yourself, but your own custom bicycle is unlikely to be as good as Identity, which takes care of lots of stuff, including proper passwords storing (you didn’t plan to store them as plain text, did you), ready-made views and models, already implemented roles mechanism, social logins and so on. .NET Core 2.0 Before we start, here’s my dotnet version just in case: $ dotnet --info .NET Command Line Tools (2.1.4) Product Information: Version: 2.1.4 Commit SHA-1 hash: 5e8add2190 Runtime Environment: OS Name: Mac OS X OS Version: 10.13 OS Platform: Darwin RID: osx.10.12-x64 Base Path: /usr/local/share/dotnet/sdk/2.1.4/ Microsoft .NET Core Shared Framework Host Version : 2.0.5 Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 Assuming you have used the template with a built-in Identity authentication/authorization ( dotnet new mvc --auth Individual), I’ll skip the controllers, models and views. If you already have an existing project and want to add it there, then no problems, just create a new one using this template anyway and simply copy its models and views to your project. Install the following NuGet packages: Microsoft.EntityFrameworkCore.Tools(2.0.2); Microsoft.EntityFrameworkCore.Design(2.0.2); MySql.Data(6.10.6); MySql.Data.EntityFrameworkCore(6.10.6). Those are the versions I have, but most probably other version will be fine too. Add connection string to your database in appsettings.json "ConnectionStrings": { "DefaultConnection": "server=localhost;port=3306;database=some;user=some;password=some;CharSet=utf8;" } Add IdentityContext class to your project, /Data/IdentityContext.cs: using Microsoft.AspNetCore.Identity; using Microsoft.AspNetCore.Identity.EntityFrameworkCore; using Microsoft.EntityFrameworkCore; namespace YOUR-PROJECT-NAMESPACE.Data { public class IdentityContext : IdentityDbContext<IdentityUser> { public IdentityContext(DbContextOptions<IdentityContext> options) : base(options) { } protected override void OnModelCreating(ModelBuilder builder) { base.OnModelCreating(builder); } } } I did not create my own ApplicationUser class, because I decided to take the default IdentityUser, so I replaced it in views Login.cshtml and _LoginPartial.cshtml and also changed the AccountController (and ManageController) constructor as follows: namespace YOUR-PROJECT-NAMESPACE.Controllers { public class AccountController : Controller { private readonly UserManager<IdentityUser> _userManager; private readonly RoleManager<IdentityRole> _roleManager; private readonly SignInManager<IdentityUser> _signInManager; private readonly ILogger _logger; private readonly IdentityContext _context; private readonly IConfiguration _configuration; public AccountController( UserManager<IdentityUser> userManager, RoleManager<IdentityRole> roleManager, SignInManager<IdentityUser> signInManager, ILoggerFactory loggerFactory, IdentityContext context, IConfiguration configuration ) { _userManager = userManager; _roleManager = roleManager; _signInManager = signInManager; _logger = loggerFactory.CreateLogger<AccountController>(); _context = context; _configuration = configuration; } // ... } } Enable Identity authentication/authorization in Startup.cs: public void ConfigureServices(IServiceCollection services) { // that's where you tell that you want to use MySQL services.AddDbContext<IdentityContext>( options => options.UseMySQL(Configuration.GetConnectionString("DefaultConnection")) ); services.AddIdentity<IdentityUser, IdentityRole>().AddEntityFrameworkStores<IdentityContext>(); services.Configure<IdentityOptions>(options => { // Password settings options.Password.RequireDigit = true; options.Password.RequiredLength = 8; options.Password.RequireNonAlphanumeric = true; options.Password.RequireUppercase = true; options.Password.RequireLowercase = true; // Lockout settings options.Lockout.DefaultLockoutTimeSpan = TimeSpan.FromMinutes(30); options.Lockout.MaxFailedAccessAttempts = 10; // User settings options.User.RequireUniqueEmail = true; }); // If you want to tweak Identity cookies, they're no longer part of IdentityOptions. services.ConfigureApplicationCookie(options => options.LoginPath = "/Account/Login"); // If you don't want the cookie to be automatically authenticated and assigned to HttpContext.User, // remove the CookieAuthenticationDefaults.AuthenticationScheme parameter passed to AddAuthentication. services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme) .AddCookie(options => { options.LoginPath = "/Account/Login"; options.LogoutPath = "/Account/Logout"; options.ExpireTimeSpan = TimeSpan.FromDays(150); }); services.AddMvc( options => { var policy = new AuthorizationPolicyBuilder() .RequireAuthenticatedUser() .Build(); options.Filters.Add(new AuthorizeFilter(policy)); // ... } ); // ... } public void Configure( IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory ) { // ... app.UseAuthentication(); // ... } Couple of (lame) words about Entity Framework. Identity can use various storages for its users/passwords/roles/whatever information. Here we use a MySQL database. And by using so-called code-first approach Entity Framework generates SQL statements from Identity internal C# models (tables for users, roles, etc and relations between them). The process of generating these statements and applying them to database is called migration. At least that’s how I understand the process, so better read some books on the subject. We have our Identity models ready for migration, so let’s perform one. Make sure that you have dotnet version 2.x.x and not 1.x.x, and also that you are in the directory with YOUR-PROJECT.csproj. Run this command: dotnet ef migrations add InitialCreate Most likely you will get the following error: No executable found matching command "dotnet-ef" The solution for that was found in this thread (and other places): make sure that you have the following lines in YOUR-PROJECT.csproj: <ItemGroup> <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" /> <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" /> </ItemGroup> Run dotnet restore. Mine failed with this error: error MSB4024: The imported project file "project/obj/project.csproj.EntityFrameworkCore.targets" could not be loaded. Unexpected end tag. I openned the file, deleted this β€œunexpected” tag, and cleaned and rebuilt the project (just in case). Having done that, try to check EF tools: $ dotnet ef _/\__ ---==/ \\ ___ ___ |. \|\ | __|| __| | ) \\\ | _| | _| \_/ | //|\\ |___||_| / \\\/\\ Entity Framework Core .NET Command Line Tools 2.0.0-rtm-26452 Usage: dotnet ef [options] [command] Options: --version Show version information -h|--help Show help information -v|--verbose Show verbose output. --no-color Don't colorize output. --prefix-output Prefix output with level. Commands: database Commands to manage the database. dbcontext Commands to manage DbContext types. migrations Commands to manage migrations. Use "dotnet ef [command] --help" for more information about a command. Okay! Now perform your initial migration (stop debugging if you have it running in Visual Studio, otherwise you’ll get an error The process cannot access the file): $ dotnet ef migrations add InitialCreate info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0] User profile is available. Using '/Users/vasya/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest. info: Microsoft.EntityFrameworkCore.Infrastructure[10403] Entity Framework Core 2.0.2-rtm-10011 initialized 'IdentityContext' using provider 'MySql.Data.EntityFrameworkCore' with options: None info: Microsoft.EntityFrameworkCore.Infrastructure[10403] Entity Framework Core 2.0.2-rtm-10011 initialized 'IdentityContext' using provider 'MySql.Data.EntityFrameworkCore' with options: None Done. To undo this action, use 'ef migrations remove' Good. You should get the following files generated: /Migrations/20180318174058_InitialCreate.cs; /Migrations/20180318174058_InitialCreate.Designer.cs; /Migrations/IdentityContextModelSnapshot.cs. Apply this migration to the database: dotnet ef database update …At least, try to. Because for me that produced the whole bunch of errors: MySql.Data.MySqlClient.MySqlException (0x80004005): Table 'YOUR-PROJECT-NAMESPACE.__efmigrationshistory' doesn't exist From this answer I got that MySQL (or whoever) doesn’t properly support migrations yet, so you need to create this table manually (via mysql or DBMS of your choice): CREATE TABLE `__EFMigrationsHistory` ( `MigrationId` NVARCHAR (150) NOT NULL, `ProductVersion` NVARCHAR (32) NOT NULL, PRIMARY KEY (`MigrationId`) ); Having run dotnet ef database update after that, I got a new error: MySql.Data.MySqlClient.MySqlException (0x80004005): Specified key was too long; max key length is 3072 bytes This one happened because somewhere in MySQL EF (or wherever) there is some mess with the composite keys, so they have total length that exceeds the limit. I hope that will be fixed in future, but meanwhile here’s a workaround: edit your /Data/IdentityContext.cs: protected override void OnModelCreating(ModelBuilder builder) { base.OnModelCreating(builder); builder.Entity<IdentityUser>(entity => entity.Property(m => m.Id).HasMaxLength(85)); builder.Entity<IdentityUser>(entity => entity.Property(m => m.NormalizedEmail).HasMaxLength(85)); builder.Entity<IdentityUser>(entity => entity.Property(m => m.NormalizedUserName).HasMaxLength(85)); builder.Entity<IdentityRole>(entity => entity.Property(m => m.Id).HasMaxLength(85)); builder.Entity<IdentityRole>(entity => entity.Property(m => m.NormalizedName).HasMaxLength(85)); builder.Entity<IdentityUserLogin<string>>(entity => entity.Property(m => m.LoginProvider).HasMaxLength(85)); builder.Entity<IdentityUserLogin<string>>(entity => entity.Property(m => m.ProviderKey).HasMaxLength(85)); builder.Entity<IdentityUserLogin<string>>(entity => entity.Property(m => m.UserId).HasMaxLength(85)); builder.Entity<IdentityUserRole<string>>(entity => entity.Property(m => m.UserId).HasMaxLength(85)); builder.Entity<IdentityUserRole<string>>(entity => entity.Property(m => m.RoleId).HasMaxLength(85)); builder.Entity<IdentityUserToken<string>>(entity => entity.Property(m => m.UserId).HasMaxLength(85)); builder.Entity<IdentityUserToken<string>>(entity => entity.Property(m => m.LoginProvider).HasMaxLength(85)); builder.Entity<IdentityUserToken<string>>(entity => entity.Property(m => m.Name).HasMaxLength(85)); builder.Entity<IdentityUserClaim<string>>(entity => entity.Property(m => m.Id).HasMaxLength(85)); builder.Entity<IdentityUserClaim<string>>(entity => entity.Property(m => m.UserId).HasMaxLength(85)); builder.Entity<IdentityRoleClaim<string>>(entity => entity.Property(m => m.Id).HasMaxLength(85)); builder.Entity<IdentityRoleClaim<string>>(entity => entity.Property(m => m.RoleId).HasMaxLength(85)); } Now you can finally run dotnet ef database update and it will create all the necessary tables with no problems (at least, it did for me). Here’re the tables you should get: mysql> show tables; +------------------------+ | Tables_in_project | +------------------------+ |) If you want to rollback migrations to perform a new one, try to run dotnet ef migrations remove or simply remove the files from /Migrations/. But that won’t touch the half-created tables in the database, so you’ll need to delete those manually. Migration part is done. However, you might get the following errors after trying to run your application: One or more compilation references are missing. Ensure that your project is referencing 'Microsoft.NET.Sdk.Web' and the 'PreserveCompilationContext' property is not set to false. The type or namespace name 'LoginViewModel' could not be found (are you missing a using directive or an assembly reference?) The type or namespace name 'SignInManager<>' could not be found (are you missing a using directive or an assembly reference?) The type or namespace name 'IdentityUser' could not be found (are you missing a using directive or an assembly reference?) Most probably, you were adding Identity views and models to an existing project and forgot to copy the contents of /Views/_ViewImports.cshtml: @using YOUR-PROJECT-NAMESPACE @using YOUR-PROJECT-NAMESPACE.Models @using YOUR-PROJECT-NAMESPACE.Models.AccountViewModels @using YOUR-PROJECT-NAMESPACE.Models.ManageViewModels @using Microsoft.AspNetCore.Identity @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers Now everything should be fine. So, how to use it? You already are using it. All the actions of your controllers now require user to be logged-in. Probably, you would like to keep the actions of HomeController publicly available, because most likely that’s the one that handles the main page and also error pages in your project. If that’s the case, add [AllowAnonymous] attribute to the whole controller: namespace YOUR-PROJECT-NAMESPACE.Controllers { [AllowAnonymous] public class HomeController : Controller { // ... } } Same goes for the Login actions of AccountController, otherwise no-one will be able to login to your website. In order to do so, add [AllowAnonymous] attribute to the actions, so they would be available for anonymous users: namespace YOUR-PROJECT-NAMESPACE.Controllers { public class AccountController : Controller { // ... [HttpGet] [AllowAnonymous] public async Task<IActionResult> Login(string returnUrl = null) { // ... } [HttpPost] [AllowAnonymous] public async Task<IActionResult> Login(LoginViewModel model, string returnUrl = null) { // ... } // ... } } And when user tries to open a page that requires him to authenticate himself (actions without [AllowAnonymous] attribute), your application will show him login page: As I already mentioned, Identity comes with roles support out-of-the-box, so you can control access even for already logged-in users based on their roles. For example, the following action is only available for users with the role admin: [Authorize(Roles = "admin")] public ActionResult DeleteUser(string email) { // ... } And thanks to Identity, you are provided with everything you need to manage roles. For instance, if you open /Account/Index (or just /Account) page in your browser, you’ll see something like this: You can register new user, delete existing ones and control the roles they have. And there are views for creating/deleting roles as well. So, using .NET Core Identity with MySQL database is definitely possible, even though the process overall still has some problems. .NET Core 2.1 So, .NET Core 2.1 was released: $ dotnet --info .NET Core SDK (reflecting any global.json): Version: 2.1.403 Commit: 04e15494b6 Runtime Environment: OS Name: Mac OS X OS Version: 10.14 OS Platform: Darwin RID: osx-x64 Base Path: /usr/local/share/dotnet/sdk/2.1.403/ Host (useful for support): Version: 2.1.5 Commit: 290303f510 .NET Core SDKs installed: 2.1.403 [/usr/local/share/dotnet/sdk] .NET Core runtimes installed: Microsoft.AspNetCore.All 2.1.5 [/usr/local/share/dotnet/shared/Microsoft.AspNetCore.All] Microsoft.AspNetCore.App 2.1.5 [/usr/local/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.NETCore.App 2.1.5 [/usr/local/share/dotnet/shared/Microsoft.NETCore.App] …and of course it broke some things. While some can be fixed by following the migration manual, MySQL EntityFramework from Oracle got broken for good. For instance, I got the following error (also noted by someone in comments) after the update to .NET Core 2.1: The 'MySQLNumberTypeMapping' does not support value conversions. Support for value conversions typically requires changes in the database provider. Googling the error I found this thread, from where I got to this bugreport at Oracle tracker, which basically says that Oracle doesn’t give a flying rat about fixing it. But good news Pomelo Foundation does, and they actually fixed the problem in their package, so the solution is just to give up on Oracle and switch to Pomelo. Let’s see that on an example of a new project created from scratch: mkdir some-project && cd "$_" dotnet new mvc --auth Individual dotnet add package Pomelo.EntityFrameworkCore.MySql dotnet restore Modify the connection string in appsettings.json: { "ConnectionStrings": { "DefaultConnection": "server=localhost;port=3306;database=DATABASE;user=USERNAME;password=PASSWORD;CharSet=utf8;SslMode=none;" }, // ... } And perform migration: dotnet ef migrations add InitialCreate dotnet ef database update That will create something like this in your database: mysql> show tables; +-----------------------+ | Tables_in_DATABASE | +-----------------------+ |) Then change from UseSqlite() to UseMySql() in Startup.cs: public void ConfigureServices(IServiceCollection services) { // ... services.AddDbContext<ApplicationDbContext>(options => options.UseMySql(Configuration.GetConnectionString("DefaultConnection")) ); services.AddDefaultIdentity<IdentityUser>() .AddEntityFrameworkStores<ApplicationDbContext>(); // ... } And that’s it, there is no even need to override OnModelCreating() with those weird data types customization - everything just works out of the
https://retifrav.github.io/blog/2018/03/20/csharp-dotnet-core-identity-mysql/
CC-MAIN-2019-26
en
refinedweb
StopProfile The StopProfile function sets the counter to 0 (off). StartProfile and StopProfile control the Start/Stop state for the profiling level. The default value of Start/Stop is 1. The initial value can be changed in the registry. Each call to StartProfile sets Start/Stop to 1; each call to StopProfile sets it to 0. When the Start/Stop is greater than 0, the Start/Stop state for the level is ON. When it is less than or equal to 0, the Start/Stop state is OFF. When the Start/Stop state and the Suspend/Resume state are both ON, the profiling state for the level is ON. For a thread to be profiled, the global, process, and thread level states for the thread must be ON. The following example illustrates the StopProfile method. The example assumes that a call to the StartProfile method has been made for the same thread or process identified by PROFILE_CURRENTID. void ExerciseStopProfile() { // StartProfile and StopProfile control the // Start/Stop state for the profiling level. // The default initial value of Start/Stop is 1. // The initial value can be changed in the registry. // Each call to StartProfile sets Start/Stop to 1; // each call to StopProfile sets it to 0. // Variables used to print output. HRESULT hResult; TCHAR tchBuffer[256]; // Declare enumeration to hold result of call // to StopProfile. PROFILE_COMMAND_STATUS profileResult; profileResult = StopProfile( PROFILE_THREADLEVEL, PROFILE_CURRENTID); // Format and print result. LPCTSTR pszFormat = TEXT("%s %d.\0"); TCHAR* pszTxt = TEXT("StopProfile returned"); hResult = StringCchPrintf(tchBuffer, 256, pszFormat, pszTxt, profileResult); #ifdef DEBUG OutputDebugString(tchBuffer); #endif }
https://msdn.microsoft.com/en-us/library/aa985645(v=vs.80).aspx
CC-MAIN-2017-30
en
refinedweb
I was trying to write a c++ programm, which begins with int a - 5 items, and every second int b - gets bigger 2x times. For example 1second prints - 5, 2second - 10, 3second - 20. And then print a sum of it. (35). Since I'm a very beginner, got stuck with it. int main() { int a,b; cout << " Enter a and b: " << endl; cin >> a >> b; for (int i=1; i<=b; i++) { cout << i << endl; } return 0; } #include <iostream> using namespace std; int main() { int a,b,i,result; cout << " Enter a and b: " << endl; cin >> a >> b; result = a; // At 0 sec result is a for (i=1; i<=b; i++) { result = result*2; // Doubles the result at every second cout <<result<< endl; } return 0; }
https://codedump.io/share/4jj5nkUwK4Dn/1/how-to-solve-this-for-loop-exercise-in-c-beginner-n--nx2--nx2x2-
CC-MAIN-2017-30
en
refinedweb
Unable to get closing daily values for SPY I'm attempting to capture closing prices for the day in an IB datafeed, requested at bt.TimeFrame.Daysand resampled to the same. As discussed in other threads, it is my goal to capture the closing bar for the day and enter or exit ES based on evaluation of indicator values driven by the SPY data. I've set sessionend=as shown below, yet I am not seeing the bar in my strategy at the 16:00 closing time. Suggestions would be appreciated. Setting up data feed as follows: # SPY Live data timeframe resampled to 1 Day data1 = ibstore.getdata(dataname=args.live_spy, backfill_from=bfdata1, timeframe=bt.TimeFrame.Days, compression=1, sessionend=dt.time(16, 0)) cerebro.resampledata(data1, name="SPY-daily", timeframe=bt.TimeFrame.Days, compression=1) Using the following code in Strategy next()to capture the bar. # We only care about ticks on the Daily SPY if not len(self.data_spy) > self.len_data_spy: return else: import pdb; pdb.set_trace() self.len_data_spy = len(self.data_spy) At the breakpoint above, I can see the following data: (Pdb) self.data_spy.sessionend 0.625 (Pdb) self.data_spy.DateTime 6 (Pdb) self.data_spy.buflen() 4219 (Pdb) self.data_spy.contractdetails.m_tradingHours '20170111:0400-2000;20170112:0400-2000' (Pdb) self.data_spy.contractdetails.m_timeZoneId 'EST' (Pdb) self.data_spy.contractdetails.m_liquidHours '20170111:0930-1600;20170112:0930-1600' This seems bound to fail: if not len(self.data_spy) > self.len_data_spy: return When the lenof your data is >= 1the notturns that to False(consider it 0for the comparison) and it will never be larger than something which is already >= 1 While that logic might not be very intuitive to look at, it does accomplish the goal since I would want to returnif it is False. I think one possible bug there is that I could miss ticks if setting the self.len_data_spycounter equal to the len(self.data_spy). I've since changed this to be a += 1counter, but I think the issue may still remain. Will see today when we hit the close. It seem it is going to be always Falseand never execute the return. Example Assume initialization self.len_data_spy = 0(not shown above) As soon as len(self.data_spy) > 0then not len(self.data_spy) -> False And False > self.len_data_spyevaluates to Falseand you don't return self.len_data_spyis updated and contains for sure a number > 0 And you repeat the cycle (not the initialization) with the same consequences You are mixing Minutes(for ES) and Days(for the SPY) and according to your narrative, making decisions based on the daily data, operating on the minute data. It is therefore assumed you have an indicator and/or lines operation on SPY, which means nextwill 1st be called when len(self.data_spy) > 0evaluates to True, because the indicator/operation has increased the minimum period. (And if this doesn't hold true, then something is buggy in the platform) It may be that you have some other initialization value or the reasoning is incorrect, but it really seems like the logic won't actually do anything. I have market data ticking through next()on both 1-minute interval for ES and on Daily interval for SPY. My goals are: - Ignore ticks coming through every minute for ES - Only see the tick for SPY at close of RTH which is 16:00 EST Is there a way to see if the tick that is coming through next()is for a particular data source? That would allow me to return if the tick is for self.data_es Otherwise, my only option here is to see if len(self.data_spy)is larger than my counter and if not, return. so the following is Falseunless the tick is on self.data_spy: if not len(self.data_spy) > self.len_data_spy: # could be written if self.len_data_spy == len(self.data_spy) return # ignore self.data_es tick ...do something... self.len_data_spy += 1 The above logic seems to be ok except that it lets through one ES tick at startup I'll come back with more detail about what I am doing in the strategy if I fail to get this closing bar in the next :30 minutes. Is there a way to see if the tick that is coming through next() is for a particular data source? That would allow me to return if the tick is for self.data_es As explained, by checking if the lenof a data feed has increased. When two or more data feeds are time aligned a single nextcall will let you evaluate a change in both data feeds at the same time. Still not getting this closing tick on self.data_spy. I've included most of the Strategy code below to see if there is something I am doing wrong here. One thought is that the SPY data is backfilled from static data that has no time information beyond the date. Not sure if that could impact what the IB feed that is supplementing it is storing regarding time. Also, the data in static backfill is daily data and is being augmented with daily timeframe data from IB. Does that show the time for 16:00 close? I guess the next debugging approach might be to break in the debugger at specific time for any tick to see what we have. I am open to other ideas as to how to sort this out. I've snipped out the code that I do not think is relevant. Happy to provide that if that detail is needed. class SampleStrategy(bt.Strategy): params = ( ('live', False), ('maperiod', 200), ) def log(self, txt, dt=None): ... def __init__(self): self.datastatus = False self.data_es = self.data0 self.data_spy = self.data1 # Add a MovingAverageSimple indicator based from SPY self.sma = btind.MovingAverageSimple(self.data_spy, period=self.p.maperiod) def start(self): self.len_data_spy = 0 def notify_data(self, data, status, *args, **kwargs): if status == data.LIVE: self.datastatus = True def notify_store(self, msg, *args, **kwargs): ... def notify_order(self, order): ... def notify_trade(self, trade): ... def next(self): # We only care about ticks on the Daily SPY if len(self.data_spy) == self.len_data_spy: return elif self.len_data_spy == 0: self.len_data_spy = (len(self.data_spy) - 1) if self.order: return # if an order is active, no new orders are allowed if self.p.live and not self.datastatus: return # if running live and no live data, return if self.position: # position is long or short if self.position.size < 0 and self.signal_exit_short: self.order = self.close(data=self.data_es) self.log('CLOSE: BUY TO COVER') elif self.position.size > 0 and self.signal_exit_long: self.order = self.close(data=self.data_es) self.log('CLOSE: SELL TO COVER') else: self.log('NO TRADE EXIT') if not self.position: # position is flat if self.signal_entry_long: self.order = self.buy(data=self.data_es) self.log('OPEN: BUY LONG') elif self.signal_entry_short: self.order = self.sell(data=self.data_es) self.log('OPEN: BUY LONG') else: self.log('NO TRADE ENTRY') self.len_data_spy += 1 - backtrader administrators In another thread it was recommended to use replaydata, because it will continuously give you the current daily bar. It should be something like this data0 = ibstore.getdata(`ES`) cerebro.resampledata(data0, timeframe=bt.TimeFrame.Minutes, compression=1) data1 = ibstore.getdata(`SPY`) cerebro.replaydata(data1, timeframe=bt.TimeFrame.Datay, compression=1) In the strategy def next(self): if self.data0.datetime.time() >= datetime.time(16, 0): # session has come to the end if self.data1.close[0] == MAGICAL_NUMBER: self.buy(data=self.data0) # buying data0 which is ES, but check done on data1 which is SPY Rationale: replaydatawill give you every tick of the data ( SPYin this case) but in a daily bar which is slowly being constructed - Because ESkeeps ticking at minute level, once it has reached (or gone over) the end of session of SPYyou can put your buying logic in place Note This time in this line needs to be adjusted to the local time in which ESis (information available in m_contractDetails if self.data0.datetime.time() >= datetime.time(16, 0): # session has come to the end or as an alternative tell the code to give you the time in EST(aka US/Eastern) timezone. For that timezone the end of the session is for sure 16:00 import pytz EST = pytz.timezone('US/Eastern') ... ... def next(self): ... if self.data0.datetime.time(tz=EST) >= datetime.time(16, 0): # session has come to the end ... Is it also necessary to first call .resampledata()on data1 in your example because it is an IB feed, or is it enough to use .replaydata()instead?. (Pdb) self.data_spy.sessionend 0.7291666666666667 (Pdb) self.data_spy.sessionstart 0.0 (Pdb) self.data_spy.datetime.time() datetime.time(19, 0) (Pdb) self.data_spy.datetime.date() datetime.date(2017, 1, 11) Looking at ibtest.py I think I have answered my question that the .replaydata()is in place of the .resampledata(). However, I am unable to get this to run. Continually erroring out when starting the system. Remember that this is also the data source which I am using backfill_fromto backfill from local static data since these indicators need several years of data. Not sure if that could be a factor. File "backtrader/strategy.py", line 296, in _next super(Strategy, self)._next() File "backtrader/lineiterator.py", line 236, in _next clock_len = self._clk_update() File "backtrader/strategy.py", line 285, in _clk_update newdlens = [len(d) for d in self.datas] File "backtrader/strategy.py", line 285, in <listcomp> newdlens = [len(d) for d in self.datas] File "backtrader/lineseries.py", line 432, in __len__ return len(self.lines) File "backtrader/lineseries.py", line 199, in __len__ return len(self.lines[0]) ValueError: __len__() should return >= 0 Just to add another data point here: In the debugger, when running with .resampledata(), self.data_spy.datetime.time(tz=EST) always reports dt.time(19, 0) I've added a break to debugger today to break if it reports something other than dt.time(19,0) Will start looking at what might be happening when running .replaydata() Finding the following: I have 3 data feeds configured and available in self.datas At the point in the code where this is failing, self.datas[0] has no size and the call to len fails. The code: def _clk_update(self): if self._oldsync: clk_len = super(Strategy, self)._clk_update() self.lines.datetime[0] = max(d.datetime[0] for d in self.datas if len(d)) return clk_len import pdb; pdb.set_trace() newdlens = [len(d) for d in self.datas] if any(nl > l for l, nl in zip(self._dlens, newdlens)): self.forward() self.lines.datetime[0] = max(d.datetime[0] for d in self.datas if len(d)) self._dlens = newdlens return len(self) Debugger output: > /home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/strategy.py(286)_clk_update() -> newdlens = [len(d) for d in self.datas] (Pdb) self._oldsync False (Pdb) self.data_spy <backtrader.feeds.ibdata.IBData object at 0x810dcfb70> (Pdb) self.datas [<backtrader.feeds.ibdata.IBData object at 0x810dcf3c8>, <backtrader.feeds.ibdata.IBData object at 0x810dcfb70>, <backtrader.feeds.ibdata.IBData object at 0x810dd6320>] (Pdb) len(self.datas) 3 (Pdb) self.datas[0]._name 'ES-minutes' (Pdb) self.datas[1]._name 'SPY-daily' (Pdb) self.datas[2]._name 'ES-daily' (Pdb) len(self.datas[0]) *** ValueError: __len__() should return >= 0 (Pdb) len(self.datas[1]) 1 (Pdb) len(self.datas[2]) 1 Code to setup the feed that is failing size check: # ES Futures Live data timeframe resampled to 1 Minute data0 = ibstore.getdata(dataname=args.live_es, fromdate=fetchfrom, timeframe=bt.TimeFrame.Minutes, compression=1) cerebro.resampledata(data0, name="ES-minutes", timeframe=bt.TimeFrame.Minutes, compression=1) Removing fromdate=gets past that error. But then the next error... If I change to use only .replaydata()for all of these feeds, and set exactbars < 1, I can avoid the above crash. exactbarsset to = 1 causes crash in linebuffer.py I now find that what I am getting from these replayed feeds now using the datas names I have assigned self.data0and self.data1to is Minute data. What am I missing? Is it also necessary to first call .resampledata() on data1 in your example because it is an IB feed, or is it enough to use .replaydata() instead? Either resampledataor replaydata. They do similar but different things. See docs for Data Replay. The platform tries not to be too intelligent. time(19, 0)for your assets (which seem to be in EST) is time(24, 0)(or time(0, 0)in UTCduring the winter time). sessionendwill be used by the platform as a hint as to when intraday data has gone over the session to put that extra intraday data in the next bar. @RandyT There are only some insights as to what's actually running. For example: up until today you had 2 data feeds, suddenly there are 3. And which value actually fetchfromhas may play a role, since it seems to affect what happens when you have it in place and when you don't. With regards to replaydataand the timeframe/compression you get: - A replayed data feed will tick very often, but with the same length until the boundary of the timeframe/compressionpair is met. That means that for a single minutes, it may tick 60 times (1 per second). The len(self.datax)value remains constant until you move to the next minutes. You are seeing the construction of a 1-minutebar replayed. That's why it was the idea above to use it in combination with the 1-minuteresampled data for the ES, to make sure that you see the final values of the daily bar of the SPY. Since you seem to be stretching the limits of the platform and no real data feeds run during the weekend, it will give time to prepare a sample a see if some of your reports can be duly reproduced. I added a third datafeed to give me some daily ES data to do position size calculations. I had been using the SPY for this but ultimately I want to use ES. fromdateis specifying a 7 hour retrieval start time in an attempt to reduce the startup/backfill times. Calculated as shown below. Seemed to work as expected with .resampledata()but immediately failed when changing to .replaydata()for these feeds. fetchfrom = (dt.datetime.now() - timedelta(hours=7)) With some of the changes made today to avoid the crashes, I managed to get to a point where I could run and could print values for self.data_spy(data1) through the day based on timestamps of the ticks, but discovered that rather than the values building on the daily bar for self.data_spyit instead was giving me minute data. I will attempt to put together a more simple version over the weekend that will demonstrate some of these issues. Thanks again for your help with this. - backtrader administrators Side note following from all of the above: sessionendis currently not used to find out the end of a daily bar. The rationale behind: - Many real markets keep on delivering ticks after the sessionend Example: Even if the official closing time of the Eurostoxx50future is 22:00CET, the reality is that it will not close until around 22:05CET. Because of the end of day auction which takes place. Some platforms deliver that tick later in historical data integrated in the last tick at 22:00CETand some others deliver an extra single tick, usually 5 minutes later (the 5 minutes is a rule of thumb, because it does actually change) This is the same as when you consider the different out-of- RTHperiods for products like ES. A resampled daily bar could be returned earlier by considering the sessionend, but any extra ticks would have to be discarded (or put into the next bar). Of course, the end user could set the sessionend to a time of its choosing to balance when to return the bar and start discarding values. replayed bars on the other hand are constantly returned, hence the recommendation to use them combined with a time chek @randyt - please see this announcement with regards to synchronizing the resampling of daily bars with the end of the session. This should avoid the need to use replaydata Great explanation in the announcement you made. I'll give this change a try. One point that I want to make sure is not lost is that on Friday, will looking at the data being returned by the replayto Daily timeframe, I was seeing minute data being reported for OHLC in bars captured at specific times. I could see this because I was comparing the values to the charts seen in IB for minute data. It was not updating these values from the day, but instead was reporting them for the live data that had been replayed to Daily value. This issue was also seen in the values that my indicators were reporting that should have been calculating based on the Daily timeframe. Not sure if you looked at this in your work this weekend. I am going to revert back to .resampledata()for this system and will give it a try tomorrow. Seems I should be able to do the following in if self.data_spy.datetime.time(tz=EST) != dt.time(16, 0): return
https://community.backtrader.com/topic/133/unable-to-get-closing-daily-values-for-spy
CC-MAIN-2017-30
en
refinedweb
Serialization library or the HiPack interchange format hipack is a Python module to work with the HiPack serialization format. The API is intentionally similar to that of the standard json and pickle modules. Features: Both reading, and writing HiPack version 1 is supported. The following extensions are implemented as well: (Note that extensions defined in HEPs are subject to change while they are being discussed as proposals.) Small, self-contained, pure Python implementation. Compatible with both Python 2.6 (or newer), and 3.2 (or newer). Given the following input file: # Configuration file for SuperFooBar v3000 interface { language: "en_US" panes { top: ["menu", "toolbar"] # Optional commas in lists # The colon separating keys and values is optional bottom ["statusbar"] } ☺ : True # Enables emoji Unicodeβ†’SuΓΎΓΎorteΓ°? : "Indeed, JΓΌrgen!" } # Configure plug-ins plugin: { preview # Whitespace is mostly ignored { enabled: true timeout: 500 # Update every 500ms } } Note that the : separator in between keys and values is optional, and can be omitted. Also, notice how white space β€”including new linesβ€” are completely meaningless and the structure is determined using only braces and brackets. Last but not least, a valid key is any Unicode character sequence which does not include white space or a colon. The following code can be used to read it into a Python dictionary: import hipack with open("superfoobar3000.conf", "rb") as f: config = hipack.load(f) Conversions work as expected: The following can be used to convert a Python dictionary into its textual representation: users = { "peter": { "uid": 1000, "name": "Peter JΓΈglund", "groups": ["wheel", "peter"], }, "root": { "uid": 0, "groups": ["root"], } } import hipack text = hipack.dumps(users) When generating a textual representation, the keys of each dictionary will be sorted, to guarantee that the generated output is stable. The dictionary from the previous snippet would be written in text form as follows: peter: { name: "Peter JΓΈglund" groups: ["wheel" "peter"] uid: 1000 } root: { groups: ["root"] uid: 0 } The stable releases are uploaded to PyPI, so you can install them and upgrade using pip: pip install hipack Alternatively, you can install development versions β€”at your own riskβ€” directly from the Git repository: pip install -e git://github.com/aperezdc/hipack-python If you want to contribute, please use the usual GitHub workflow: If you do not have programming skills, you can still contribute by reporting issues that you may encounter. Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/hipack/
CC-MAIN-2017-30
en
refinedweb
Introduction to Struts Architecture By: Daniel Malcolm Emailed: 1743 times Printed: 2431 times. the listing below.. // Sample ActionForm public class MyForm extends ActionForm { private String firstName; private String lastName; public MyForm() { firstName = β€œβ€; lastName = β€œβ€; } public String getFirstName() { return firstName; } public void setFirstName(String s) { this.firstName = s; } public String getLastName() { return lastName; } public void setLastName(String s) { this.lastName = s; } } The ActionServlet then instantiates a Handler. The Handler class name is obtained from an XML file based on the URL path information. This XML file is referred to as Struts configuration file and by default named as struts-config.xml.forwards to the selected view. ext section.
http://java-samples.com/showtutorial.php?tutorialid=352
CC-MAIN-2017-30
en
refinedweb
27, 2012 9:26 AM by 976400 Ubuntu and API bluecove : java and bluetooth 976400 Nov 27, 2012 9:26 AM Hi Here is my final project I want study worked with bluecove api (java bluetooth technology) and J2ME you guessed I am looking to make contacts and nokia laptop via bluetooth then voila I first exposed my equipment: hp laptop duel core processor, 200 gig hard drive, 2 GB rowing bone ubuntu 10.10 and set a date and I installed the bluetooth driver that ubuntu recommended me short I followed this short course: for the library installed on ubuntu bluecove then I tested a small java program: to see that Bluetooth is enabled and can detect the presence of mobile phone nokia I created a test project in eclipse and I joined the library in my project brief bluecove here is the program: import java.io.*; import javax.bluetooth.*; import javax.microedition.io.*; public class EchoServer { public final UUID uuid = new UUID( //the uid of the service, it has to be unique, "27012f0c68af4fbf8dbe6bbaf7aa432a", false); //it can be generated randomly public final String name = "Echo Server"; //the name of the service public final String url = "btspp://localhost:" + uuid //the service url + ";name=" + name + ";authenticate=false;encrypt=false;"; LocalDevice local = null; StreamConnectionNotifier server = null; StreamConnection conn = null; public EchoServer() { try { System.out.println("Setting device to be discoverable..."); local = LocalDevice.getLocalDevice(); local.setDiscoverable(DiscoveryAgent.GIAC); System.out.println("Start advertising service..."); server = (StreamConnectionNotifier)Connector.open(url); System.out.println("Waiting for incoming connection..."); conn = server.acceptAndOpen(); System.out.println("Client Connected..."); DataInputStream din = new DataInputStream(conn.openInputStream()); while(true){ String cmd = ""; char c; while (((c = din.readChar()) > 0) && (c!='\n') ){ cmd = cmd + c; } System.out.println("Received " + cmd); } } catch (Exception e) {System.out.println("Exception Occured: " + e.toString());} } public static void main (String args[]){ EchoServer echoserver = new EchoServer(); } } here is the result of the execution in eclipse: Setting device to be discoverable... BlueCove version 2.1.1-SNAPSHOT on bluez Exception Occured: javax.bluetooth.BluetoothStateException: Bluetooth Device is not ready. [1] Operation not permitted BlueCove stack shutdown completed Do you have an idea for solving the problem? cordially I have the same question Show 0 Likes (0) This content has been marked as final. Show 0 replies Actions
https://community.oracle.com/thread/2470818?tstart=1
CC-MAIN-2017-30
en
refinedweb
ReaderWriterLockSlim Class Represents a lock that is used to manage access to a resource, allowing multiple threads for reading or exclusive access for writing. Assembly: System.Core (in System.Core.dll) System.Threading.ReaderWriterLockSlim to enter read mode are blocked. When all threads have exited from read mode, the blocked upgradeable thread enters write mode. If there are other threads waiting to enter write mode, they remain blocked, because the single thread that is in upgradeable mode prevents them from gaining exclusive access to the resource. When the thread in upgradeable mode exits write mode, retrieves the value associated with a key and compares it with a new value. If the value is unchanged, the method returns a status indicating no change. It no value is found for the key, the key/value pair is inserted. If the value has changed, it is updated. Upgradeable mode allows the thread to upgrade. public class SynchronizedCache { private ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim(); private Dictionary<int, string> innerCache = new Dictionary<int, string>(); public int Count { get { return innerCache.Count; } } public string Read(int key) { cacheLock.EnterReadLock(); try { return innerCache[key]; } finally { cacheLock.ExitReadLock(); } } public void Add(int key, string value) { cacheLock.EnterWriteLock(); try { innerCache.Add(key, value); } finally { cacheLock.ExitWriteLock(); } } public bool AddWithTimeout(int key, string value, int timeout) { if (cacheLock.TryEnterWriteLock(timeout)) { try { innerCache.Add(key, value); } finally { cacheLock.ExitWriteLock(); } return true; } else { return false; } } void Delete(int key) { cacheLock.EnterWriteLock(); try { innerCache.Remove(key); } finally { cacheLock.ExitWriteLock(); } } public enum AddOrUpdateStatus { Added, Updated, Unchanged }; ~SynchronizedCache() { if (cacheLock != null) cacheLock.Dispose(); } } The following code then uses the SynchronizedCache object to store a dictionary of vegetable names. It creates three tasks. The first writes the names of vegetables stored in an array to a SynchronizedCache instance. The second and third task display the names of the vegetables, the first in ascending order (from low index to high index), the second in descending order. The final task searches for the string "cucumber" and, when it finds it, calls the EnterUpgradeableReadLock method to substitute the string "green bean". public class Example { public static void Main() { var sc = new SynchronizedCache(); var tasks = new List<Task>(); int itemsWritten = 0; // Execute a writer. tasks.Add(Task.Run( () => { String[] vegetables = { "broccoli", "cauliflower", "carrot", "sorrel", "baby turnip", "beet", "brussel sprout", "cabbage", "plantain", "spinach", "grape leaves", "lime leaves", "corn", "radish", "cucumber", "raddichio", "lima beans" }; for (int ctr = 1; ctr <= vegetables.Length; ctr++) sc.Add(ctr, vegetables[ctr - 1]); itemsWritten = vegetables.Length; Console.WriteLine("Task {0} wrote {1} items\n", Task.CurrentId, itemsWritten); } )); // Execute two readers, one to read from first to last and the second from last to first. for (int ctr = 0; ctr <= 1; ctr++) { bool desc = Convert.ToBoolean(ctr); tasks.Add(Task.Run( () => { int start, last, step; int items; do { String output = String.Empty; items = sc.Count; if (! desc) { start = 1; step = 1; last = items; } else { start = items; step = -1; last = 1; } for (int index = start; desc ? index >= last : index <= last; index += step) output += String.Format("[{0}] ", sc.Read(index)); Console.WriteLine("Task {0} read {1} items: {2}\n", Task.CurrentId, items, output); } while (items < itemsWritten | itemsWritten == 0); } )); } // Execute a red/update task. tasks.Add(Task.Run( () => { Thread.Sleep(100); for (int ctr = 1; ctr <= sc.Count; ctr++) { String value = sc.Read(ctr); if (value == "cucumber") if (sc.AddOrUpdate(ctr, "green bean") != SynchronizedCache.AddOrUpdateStatus.Unchanged) Console.WriteLine("Changed 'cucumber' to 'green bean'"); } } )); // Wait for all three tasks to complete. Task.WaitAll(tasks.ToArray()); // Display the final contents of the cache. Console.WriteLine(); Console.WriteLine("Values in synchronized cache: "); for (int ctr = 1; ctr <= sc.Count; ctr++) Console.WriteLine(" {0}: {1}", ctr, sc.Read(ctr)); } } // The example displays the following output: // Task 1 read 0 items: // // Task 3 wrote 17 items // // // Task 1 read 17 items: [broccoli] [cauliflower] [carrot] [sorrel] [baby turnip] [ // beet] [brussel sprout] [cabbage] [plantain] [spinach] [grape leaves] [lime leave // s] [corn] [radish] [cucumber] [raddichio] [lima beans] // // Task 2 read 0 items: // // Task 2 read 17 items: [lima beans] [raddichio] [cucumber] [radish] [corn] [lime // leaves] [grape leaves] [spinach] [plantain] [cabbage] [brussel sprout] [beet] [b // aby turnip] [sorrel] [carrot] [cauliflower] [broccoli] // // Changed 'cucumber' to 'green bean' // // Values in synchronized cache: // 1: broccoli // 2: cauliflower // 3: carrot // 4: sorrel // 5: baby turnip // 6: beet // 7: brussel sprout // 8: cabbage // 9: plantain // 10: spinach // 11: grape leaves // 12: lime leaves // 13: corn // 14: radish // 15: green bean // 16: raddichio // 17: lima beans Available since 8 .NET Framework Available since 3.5 Portable Class Library Supported in: portable .NET platforms Windows Phone Silverlight Available since 8.0 Windows Phone Available since 8.1 This type is thread safe.
https://msdn.microsoft.com/en-us/library/windows/apps/system.threading.readerwriterlockslim.aspx
CC-MAIN-2017-30
en
refinedweb
What would be a good direction to go to implement WebHooks in a ServiceStack stack (as a producer)?For cloud hosted REST services (we are on Azure) are we better to integrate with what the cloud platform providers have instead of building our own stuff (preferred)? Has anyone got any learning from going there? Clearly, we are interested in the basic three things: 1) how a SS service is going to publish an event to the webhook.2) how to manage the external subscriptions to the web hooks?3) how and when to fire the webhook to the subscribers? Not sure if it's relevant but @layoric has developed a Discourse WebHook with ServiceStack to sync with this forum over at DiscourseWebHook. The incoming WebHook didn't map cleanly to a ServiceStack DTO so it needed to implement IRequiresRequestStream so the body could be parsed manually. I don't know of any example using ServiceStack to produce web hooks, although I'd expect you're just going to want to publish a DTO to a specified endpoint. The closest thing we've got for behavior like this in the ServiceStack framework itself is being able to specify a HTTP URL as the replyUri in a MQ Message in which case it will POST the Response DTO of the MQ Service to that HTTP URL. I'd imagine you'd want to do something similar where you'd provide an API to register URLs against different WebHook events then when it's time (i.e. event is triggered), go through each registered URL and POST a DTO to it which you can easily do with HTTP Utils. I don't have any experience with creating WebHooks but I thought it'd be fairly straight-forward, provide an API that lets users register for an event, then when an event happens (e.g. Customer is created) go through each url registered for that event and post the same Response DTO to each registered URL (incidentally a task like this would be an ideal role for an MQ). But depending on your requirements you may want to develop a more sophisticated Web Hooks implementation in which case I'd look into Github's WebHooks API which has a fairly vast and elaborate WebHooks API. Thanks, Ill digest the references you gave here. WebHooks just like github is what we are after. I am kind of hoping there is somekind of well-known cloud architecture we can leverage. i.e. a combo of queues/functions/whatever, that makes it easy to publish events (via a queue), dispatch events to subscribers and manage their subscriptions without having to support all that in my own service. Actually having said that, having an encapsulated WebHookFeature that you can just register in your own SS service that gives all the endpoints and configuration of events would be very neat indeed. Would make a great SS project! (i.e. ServiceStack.WebHooks,or ServiceStack.WebHooks.Azure or ServiceStack.WebHooks.Aws) WebHookFeature ServiceStack.WebHooks ServiceStack.WebHooks.Azure ServiceStack.WebHooks.Aws thoughts? Yeah I definitely think a lot of the basic WebHooks functionality could be encapsulated in a reusable Plugin. You'd need a data store to maintain the WebHook registrations which I'd do in OrmLite so it can integrate with any existing major RDBMS. Firing the web hooks would need to be done in a background thread or for bonus points you can check if they're running an MQ Service and if they are publish the Request DTO that will fire the WebHook events, or if not, falling back to a background thread if not. I might be up for helping to create such a thing. Give me a couple hours, I'll sketch out a notional 'WebHookFeature' and see what the variability/extensibility points may look like, with some suggestions of notional technology implementations that could be supported by it. I think question 3 is up to what purpose the webhooks are serving. If you want to fire a webhook on an action of another service, just firing HTTP request using the HTTP Utils on a background thread might be the way to go (simple but no redundancy/retry/backoff etc). Could wrap it into an attribute filter, eg: public class WebHookFilterAttribute : ResponseFilterAttribute { public override void Execute(IRequest req, IResponse res, object responseDto) { var dbConnectionFactory = req.TryResolve<IDbConnectionFactory>(); var session = req.GetSession(); if (session == null || session.IsAuthenticated == false) { return; } string verb = req.Verb; string typeName = req.Dto.GetType().Name; using (var db = dbConnectionFactory.OpenDbConnection()) { var webHooks = db.Select<WebHookInstance>(x => x.UserName == session.UserName).ToList(); if (webHooks.Count == 0) { return; } if (webHooks.Any(x => x.IncomingVerb == verb && x.IncomingRequestType == typeName) == false) { return; } foreach (var webHookInstance in webHooks) { var request = new WebHookData { Request = req.Dto, Response = res.Dto }; WebHookInstance webHook = webHookInstance; new Task(() => NotifyExternalListener(webHook.Url, request)).Start(); } } } private static void NotifyExternalListener(string url, WebHookData request) { JsonServiceClient client = new JsonServiceClient(); client.Post<WebHookDataResponse>(url, request); } } You just need a could of DTOs and services to manage the CRUD of the webhooks into DB of your choice to let users manage them, eg: [Authenticate] public object Get(WebHook request) { if (request.Id == null) { //Get all for user and return var webHook = Db.Single<WebHookInstance>(x => x.UserName == SessionAs<AuthUserSession>().UserName && x.Id == request.Id); return new WebHookResponse { WebHook = webHook }; } var result = Db.Select<WebHookInstance>(x => x.UserName == SessionAs<AuthUserSession>().UserName); return new WebHookResponse { WebHooks = result.ToList() }; } [Authenticate] public object Post(CreateWebHook request) { WebHookInstance webHookInstance = request.ConvertTo<WebHookInstance>(); webHookInstance.UserName = SessionAs<AuthUserSession>().UserName; Db.Insert(webHookInstance); return new WebHookResponse { WebHook = webHookInstance }; } DTOs [Route("/webhooks")] [Route("/webhooks/{Id}")] public class WebHook : IReturn<WebHookResponse> { public int? Id { get; set; } } [Route("/webhook",Verbs = "POST")] public class CreateWebHook : IReturn<WebHookResponse> { public string Url { get; set; } public string Name { get; set; } public string IncomingVerb { get; set; } public string IncomingRequestType { get; set; } } public class WebHookResponse { public List<WebHookInstance> WebHooks { get; set; } public WebHookInstance WebHook { get; set; } } public class WebHookInstance { [AutoIncrement] public int Id { get; set; } public string UserName { get; set; } public string Url { get; set; } public string Name { get; set; } public string IncomingVerb { get; set; } public string IncomingRequestType { get; set; } } public class WebHookData { public object Request { get; set; } public object Response { get; set; } } This is a very simple approach for allowing users to hook into your other exposed services, but obviously doesn't handle any retrying, backoff, timeouts etc that a cloud solution might (AWS SNS). So the other end of the extreme would be I guess wrapping AWS SNS creation of Topics + HTTP/HTTPS/Other Subscriptions which could be also wrapped into a nice IPlugin feature. Just a first cut at the shape of a candidate WebhookFeature. WebhookFeature The idea being that this would be a ServiceStack IPlugin with one-line registration IPlugin appHost.Plugins.Add(new WebhookFeature()); that includes a built-in webhook subscription web service, that consumers can register and manage callbacks to be POSTED events raised by your own service. We would need to make it highly extensible so ServiceStack developers can control how the main components work in their architectures, and make it a highly testable and easy to stub the various components in testing environments. Your service (internally) uses a IWebhookPublisher.Publish<T>(string eventName, T event) to publish things that happen in your service (each with a unique eventName and unique POCO event (containing event data). IWebhookPublisher.Publish<T>(string eventName, T event) eventName The WebhookFeature then publishes your events to a reliable storage mechanism of your choice (i.e. queue, DB, remote cloud service, etc). This is configurable in the WebhookFeature Then depending on your architecture, you can have any number of workers relay those events to subscribers to your events. Note: not quite resolved the relay end of things yet (ran out of time). For example, in an Azure cloud architecture you might use a WorkerRole to check the queue every second, and then relay to all subscribers. In AWS you might use Lambda to relay to all subscribers. In another architecture, you might do things on a background thread, triggered by the `IWebhookPublisher.Publish(). In any case we need a cross-process way that relays can find out who the subscribers are and easily dispatch to them. Another thing to thing about is whether to include those things in a separate nuget (specific to a architecture) rather than bundling everything in to the main one. I can see people wanting flexibility in the following things: the kind of storage for subscriptions (MemoryCache/DB by default), the kind of reliable storage of events (MemoryCache/DB by default), and the relay component (in memory, background thread by default). I guess what I am looking for at this point is if there is anyone with an appetite to co-design/develop this further??, and whether we should create a new repo for it, and get started? @mythz, @layoric? @mythz @layoric were you guys keen on this? (please see the end of the last post) Come down to time, but happy to help when I can. @jezzsantos maybe create a separate repo for it on GitHub to get things started will do, my github I presume, or SS github? I think we can start with just yours for now I guess (@mythz ?), see where it goes. It would need to be in your own repo, we'd have to support anything in ServiceStack's repos and I don't see creating WebHooks would be popular enough to justify the time to maintain/support it. It would be ideal if you're be able to find someone else here who also needs to use this feature as it will help with designing something generic enough to meet each of your requirements. Failing that just design something that works for you, as additional requirements will start to manifest once you start making use of it. Righto. Thanks guys. Moving the project over here: Just circling back on this thread. We now have a working webhook framework encapsulated in a WebhookFeature plugin over at: ServiceStack.Webhooks, which permits other packages to plugin to it for various architectural components of people's own services.We have demonstrated that for some Azure technologies in the ServiceStack.Webhooks.Azure package. Now looking to see what others might need.Also looking for contribs. Nice! We will (most probably) be using this for our external services - we do not want to expose our MessageQueue and Webhooks is a good candidate. That's great @Rob,Looking forward to supporting you. We should probably move this discussion over to the project, but can I ask?Can I ask what platform you are on (to relay from the queue to the subscribers?).If it one we dont yet have a relay for, then I'd be keen to either help create it or show you the way. I'm creating this page right now to show the way. Building Your Own Plugin
https://forums.servicestack.net/t/webhook-design-with-servicestack/3528
CC-MAIN-2017-30
en
refinedweb
.datatype;22 23 24 25 /**26 * @author Manuel Laflamme27 * @since Aug 13, 200328 * @version $Revision: 1.2 $29 */30 public class DefaultDataTypeFactoryTest extends AbstractDataTypeFactoryTest31 {32 public DefaultDataTypeFactoryTest(String s)33 {34 super(s);35 }36 37 public IDataTypeFactory createFactory() throws Exception 38 {39 return new DefaultDataTypeFactory();40 }41 }42 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/dbunit/dataset/datatype/DefaultDataTypeFactoryTest.java.htm
CC-MAIN-2017-30
en
refinedweb
Terminal Emulator.16 * The Initial Developer of the Original Software is Sun Microsystems, Inc..17 * Portions created by Sun Microsystems, Inc. are Copyright (C) 2001.18 * All Rights Reserved.19 *20 * Contributor(s): Ivan Soleimanipour.21 */22 23 /*24 * "RegionException.java"25 * RegionException.java 1.2 01/07/1026 */27 28 package org.netbeans.lib.terminalemulator;29 30 public class RegionException extends Exception {31 RegionException(String op, String why) {32 super(op + ": " + why); // NOI18N33 }34 }35 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/netbeans/lib/terminalemulator/RegionException.java.htm
CC-MAIN-2017-30
en
refinedweb
.[8] . Since many web application developers are interested in the content delivery phase and come from a CGI background, mod_perl includes packages designed to make the transition from CGI simple and painless. Apache::PerlRun and Apache::Registry run unmodified CGI scripts, albeit much faster than mod_cgi.[9] [9] Apache::RegistryNG and Apache::RegistryBB are two new experimental modules that you may want to try as well. The difference between Apache::Registry and Apache::PerlRun is that Apache::Registry caches all scripts, and Apache::PerlRun doesn't. To understand why this matters, remember that if one of mod_perl's benefits is added speed, another is persistence. Just as the Perl interpreter is loaded only once, at child process startup, your scripts are loaded and compiled only once, when they are first used. This can be a double-edged sword: persistence means global variables aren't reset to initial values, and file and database handles aren't closed when the script ends. This can wreak havoc in badly written CGI scripts. Whether you should use Apache::Registry or Apache::PerlRun for your CGI scripts depends on how well written your existing Perl scripts are. Some scripts initialize all variables, close all file handles, use taint mode, and give only polite error messages. Others don't. Apache::Registry compiles scripts on first use and keeps the compiled scripts in memory. On subsequent requests, all the needed code (the script and the modules it uses) is already compiled and loaded in memory. This gives you enormous performance benefits, but it requires that scripts be well behaved. Apache::PerlRun, on the other hand, compiles scripts at each request. The script's namespace is flushed and is fresh at the start of every request. This allows scripts to enjoy the basic benefit of mod_perl (i.e., not having to load the Perl interpreter) without requiring poorly written scripts to be rewritten. A typical problem some developers encounter when porting from mod_cgi to Apache::Registry is the use of uninitialized global variables. Consider the following script: use CGI; $q = CGI->new( ); $topsecret = 1 if $q->param("secret") eq 'Muahaha'; # ... if ($topsecret) { display_topsecret_data( ); } else { security_alert( ); } This script will always do the right thing under mod_cgi: if secret=Muahaha is supplied, the top-secret data will be displayed via display_topsecret_data( ), and if the authentication fails, the security_alert( ) function will be called. This works only because under mod_cgi, all globals are undefined at the beginning of each request. Under Apache::Registry, however, global variables preserve their values between requests. Now imagine a situation where someone has successfully authenticated, setting the global variable $topsecret to a true value. From now on, anyone can access the top-secret data without knowing the secret phrase, because $topsecret will stay true until the process dies or is modified elsewhere in the code. This is an example of sloppy code. It will do the right thing under Apache::PerlRun, since all global variables are undefined before each iteration of the script. However, under Apache::Registry and mod_perl handlers, all global variables must be initialized before they can be used. The example can be fixed in a few ways. It's a good idea to always use the strict mode, which requires the global variables to be declared before they are used: use strict; use CGI; use vars qw($top $q); # init globals $top = 0; $q = undef; # code $q = CGI->new( ); $topsecret = 1 if $q->param("secret") eq 'Muahaha'; # ... But of course, the simplest solution is to avoid using globals where possible. Let's look at the example rewritten without globals: use strict; use CGI; my $q = CGI->new( ); my $topsecret = $q->param("secret") eq 'Muahaha' ? 1 : 0; # ... The last two versions of the example will run perfectly under Apache::Registry. Here is another example that won't work correctly under Apache::Registry. This example presents a simple search engine script: use CGI; my $q = CGI->new( ); print $q->header('text/plain'); my @data = read_data( ) my $pat = $q->param("keyword"); foreach (@data) { print if /$pat/o; } The example retrieves some data using read_data( ) (e.g., lines in the text file), tries to match the keyword submitted by a user against this data, and prints the matching lines. The /o regular expression modifier is used to compile the regular expression only once, to speed up the matches. Without it, the regular expression will be recompiled as many times as the size of the @data array. Now consider that someone is using this script to search for something inappropriate. Under Apache::Registry, the pattern will be cached and won't be recompiled in subsequent requests, meaning that the next person using this script (running in the same process) may receive something quite unexpected as a result. Oops. The proper solution to this problem is discussed in Chapter 6, but Apache::PerlRun provides an immediate workaround, since it resets the regular expression cache before each request. So why bother to keep your code clean? Why not use Apache::PerlRun all the time? As we mentioned earlier, the convenience provided by Apache::PerlRun comes at a price of performance deterioration. In Chapter 9, we show in detail how to benchmark the code and server configuration. Based on the results of the benchmark, you can tune the service for the best performance. For now, let's just show the benchmark of the short script in Example 1-6. use strict; use CGI ( ); use IO::Dir ( ); my $q = CGI->new; print $q->header("text/plain"); my $dir = IO::Dir->new("."); print join "\n", $dir->read; The script loads two modules (CGI and IO::Dir), prints the HTTP header, and prints the contents of the current directory. If we compare the performance of this script under mod_cgi, Apache::Registry, and Apache::PerlRun, we get the following results: Mode Requests/sec ------------------------------- Apache::Registry 473 Apache::PerlRun 289 mod_cgi 10 Because the script does very little, the performance differences between the three modes are very significant. Apache::Registry thoroughly outperforms mod_cgi, and you can see that Apache::PerlRun is much faster than mod_cgi, although it is still about twice as slow as Apache::Registry. The performance gap usually shrinks a bit as more code is added, as the overhead of fork( ) and code compilation becomes less significant compared to execution times. But the benchmark results won't change significantly. Jumping ahead, if we convert the script in Example 1-6 into a mod_perl handler, we can reach 517 requests per second under the same conditions, which is a bit faster than Apache::Registry. In Chapter 13, we discuss why running the code under the Apache::Registry handler is a bit slower than using a pure mod_perl content handler. It can easily be seen from this benchmark that Apache::Registry is what you should use for your scripts to get the most out of mod_perl. But Apache::PerlRun is still quite useful for making an easy transition to mod_perl. With Apache::PerlRun, you can get a significant performance improvement over mod_cgi with minimal effort. Later, we will see that Apache::Registry's caching mechanism is implemented by compiling each script in its own namespace. Apache::Registry builds a unique package name using the script's name, the current URI, and the current virtual host (if any). Apache::Registry prepends a package statement to your script, then compiles it using Perl's eval function. In Chapter 6, we will show how exactly this is done. What happens if you modify the script's file after it has been compiled and cached? Apache::Registry checks the file's last-modification time, and if the file has changed since the last compile, it is reloaded and recompiled. In case of a compilation or execution error, the error is logged to the server's error log, and a server error is returned to the client.
http://www.yaldex.com/perl-tutorial-3/0596002270_pmodperl-chp-1-sect-3.html
CC-MAIN-2017-30
en
refinedweb
Python plugin to send logs to Logmatic.io Project description logmatic-python Link to the Logmatic.io documentation: Python helpers to send logs to Logmatic.io. It mainly contains a proper JSON formatter and a socket handler that streams logs directly to Logmatic.io - so no need to use a log shipper if you don’t wan’t to. Pre-requirements To install this library, use the following command: pip install logmatic-python Usage Use the JSON formatter To use the JSON formatter, simply associate it to any handler such as the StreamHandler here. import logmatic import logging logger = logging.getLogger() handler = logging.StreamHandler() handler.setFormatter(logmatic.JsonFormatter(extra={"hostname":socket.gethostname()})) logger.addHandler(handler) logger.setLevel(logging.INFO) Once this setup is done, any child logger will use this configuration (eg logging.getlogger("my_logger")). As you can see, you can associate any extra information to the base formatter such as the hostname here or any environment variable you’ll need depending of your usage. test_logger = logging.getLogger("test") test_logger.info("classic message", extra={"special": "value", "run": 12}) Returns the following format: { "asctime": "2016-02-16T09:51:31Z", "name": "test", "processName": "MainProcess", "filename": "write_in_console.py", "funcName": "<module>", "levelname": "INFO", "lineno": 20, "module": "write_in_console", "threadName": "MainThread", "message": "classic message", "special": "value", "run": 12, "timestamp": "2016-02-16T09:51:31Z", "hostname": "<your_hostname>" } Let’s take some time here to understand what we have: - The default format is β€œ%(asctime) %(name) %(processName) %(filename) %(funcName) %(levelname) %(lineno) %(module) %(threadName) %(message)”. So that’s why all these attributes are present on all the log events. If you need less, you can change the format when defining the formatter: logmatic.JsonFormatter(fmt="",...) - The hostname attribute here is added all the time as it was defined on the root logger. - The special and run attributes were added specifically to this log event. Good to know, an traceback from an exception is totally wrapped into the JSON event. That’s suppress the handling of multiline formatting: { ... "exc_info": "Traceback (most recent call last):\n File \"test/write_in_console.py\", line 24, in exception_test\n raise Exception('test')\nException: test", ... } Stream log straight to Logmatic.io The LogmaticHandler can be coupled to the JsonFormatter as follow: import logmatic import logging logger = logging.getLogger() handler = logmatic.LogmaticHandler("<your_api_key>") handler.setFormatter(logmatic.JsonFormatter(extra={"hostname":socket.gethostname()})) logger.addHandler(handler) logger.setLevel(logging.INFO) Don’t forget to replace by the one provided on your Logmatic.io’s platform. With this configuration, any log coming from your Python’s application will be sent to your platform and will fulfill the same format as described in the previous section. Please contact us if you want anything more to be added in this toolset! Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/logmatic-python/
CC-MAIN-2022-27
en
refinedweb
Hi, I have a rather complex app where the layout content is generated by a callback (depending on what should be displayed). For one specific layout content, I need a long callback because one callback function takes some time to execute. The problem is that, somehow, right after the layout is generated, the long callback is triggered automatically (even when using prevent_initial_call). No problem if I use a simple callback (not long), though. Below is a sample script to reproduce this issue (if it is intended) based on the long callback example in the documentation: import dash from dash import html from dash.long_callback import DiskcacheLongCallbackManager from dash.dependencies import Input, Output import diskcache cache = diskcache.Cache("./cache") long_callback_manager = DiskcacheLongCallbackManager(cache) app = dash.Dash( __name__, suppress_callback_exceptions=True, long_callback_manager=long_callback_manager, ) layout1 = html.Div( [ html.Button("Generate layout", id="button_1"), html.Div(id="main"), ], ) layout2 = html.Div( [ html.Div([html.P(id="paragraph_id", children=["Button not clicked"])]), html.Button(id="button_id", children="Run Job!"), ] ) app.layout = layout1 app.validation_layout = html.Div([layout1, layout2]) @app.callback( Output("main", "children"), Input("button_1", "n_clicks"), prevent_initial_call=True, ) def generate_layout(n_clicks): return layout2 @app.long_callback( Output("paragraph_id", "children"), Input("button_id", "n_clicks"), prevent_initial_call=True, ) def callback(n_clicks): return [f"Clicked {n_clicks} times"] if __name__ == "__main__": app.run_server(debug=True) Execute the app, click on β€œGenerate layout”, wait a few seconds, β€œparagraph_id” children will automatically change to β€œClicked None times”. Otherwise, if it’s actually intended, why is it happening and how to prevent it? Edit: following the documentation, I also tried with a validation layout to reference all the callbacks, but same issue (code updated).
https://community.plotly.com/t/long-callback-function-executed-without-being-triggered/62236
CC-MAIN-2022-27
en
refinedweb
The an IDE (e.g. IntelliJ IDEA, Eclipse). The integrated processing unit container enables to run the PU inside your IDE. The artifacts that belong to a PU are packaged as a JAR or WAR file.. The application can be a typical Spring application deployed to an XAP PU. Mixed PU This type of PU’s includes both business logic and a space. Typically, the business logic interacts with a local space instance (i.e. a data grid instance running within the same PU instance) to achieve lowest possible latency and best performance. Elastic Processing Unit (EPU)).Learn more Web PU XAP allows you to deploy web applications (packaged as a WAR file) onto the Service Grid. The integration is built on top of the Service Grid Processing Unit Container..Learn more Mule PU XAP’s Mule integration allows you to run a pure Mule application (with or without XAP special extension points and transports) as a PU.Learn more The PU Jar File Much like a JEE web application or an OSGi bundle, The PU is packaged as a .jar file and follows a certain directory structure which enables the XAP runtime environment to easily locate the deployment descriptor and load its classes and the libraries it depends on. A typical PU looks as follows: |----META-INF |--------spring |------------pu.xml |------------pu.properties |------------sla.xml |--------MANIFEST.MF |----xap |--------tutorial |------------model |----------------Payment.class |----------------User.class |----lib |--------hibernate3.jar |--------.... |--------commons-math.jar The PU jar file is composed of several key elements: META-INF/spring/pu.xml (mandatory): This is the PU’s deployment descriptor, which is in fact a Spring context XML configuration with a number of XAP-specific namespace bindings. These bindings include XAP specific components (such as the space for example). The pu.xml file typically contains definitions of XAP components (space, event containers, remote service exporters) and user defined beans. META-INF/spring/sla.xml (not mandatory): This file contains SLA definitions for the PU (i.e. number of instances, number of backup and deployment requirements). Note that this is optional, and can be replaced with an <os:sla>definition in the pu.xml file. If neither is present, the default SLA will be applied. SLA definitions can also be specified at the deploy time via command line arguments. META-INF/spring/pu.properties (not mandatory): Enables you to externalize properties included in the pu.xml file (e.g. database connection username and password), and also set system-level deployment properties and overrides, such as JEE related deployment properties. User class files: Your processing unit’s classes (here under the xap.tutorial package) lib: Other jars on which your PU depends. META-INF/MANIFEST.MF (not mandatory): This file could be used for adding additional jars to the PU classpath, using the standard MANIFEST.MF Class-Path property. The pu.xml file This file is a Spring framework XML configuration file. It leverages the Spring framework IoC container and extends it by using the Spring custom namespace mechanism. The definitions in the pu.xml file are divided into 2 major categories: - GigaSpaces specific components, such as space, event containers or remote service exporters. - User defined beans, which define instances of user classes to be used by the PU. For example, user defined event handlers to which the event containers delegate events as those are received. Here is an example of a pu.xml file: Learn moreLearn more <?xml version="1.0" encoding="UTF-8"?> <!-- top level element of the Spring configuration. Note the multiple namespace definition for both GigaSpaces and Spring. --> <beans xmlns="" xmlns: <!--. Here we configure an embedded space (note the url element which does not contain any remote protocol prefix. Also note that we do not specify here the cluster topology of the space. It is declared by the `os-sla:sla` element of this pu.xml file. --> <os-core:embedded-space <!-- Define the GigaSpace instance that the application will use to access the space --> <os-core:giga-space </beans>. A sample SLA definition is shown below: Learn moreLearn more <beans xmlns="" xmlns: <os-sla:sla </os-sla:sla> </beans> Deployment When deploying the PU to the XAP Service Grid, the PU jar file is uploaded to the XAP Manager (GSM) and extracted to the deploy directory of the local XAP installation (located by default under Each GSC to which a certain instance was provisioned, downloads the PU jar file from the GSM, extracts it to its local working directory (located by default under.xml: @EventDriven @Polling @NotifyType(write = true) public class PaymentProcessor { // Define the event we are interested in @EventTemplate Payment unprocessedData() { Payment template = new Payment(); template.setStatus(ETransactionStatus.NEW); return template; } @SpaceDataEvent public Payment eventListener(Payment event) { System.out.println("Payment received; processing ....."); // set the status on the event and write it back into the space event.setStatus(ETransactionStatus.PROCESSED); return event; } } Create pu.xml In this step will create the configuration file for the PU deployment < (an IJSpace implementation) --> <os-core:embedded-space <!-- Define the GigaSpace instance that the application will use to access the space --> <os-core:giga-space </beans> Deployment Now we have all the pieces that are necessary to create the jar file for the PU. After we have created the jar file its time to deploy the PU onto the data grid. Again, you can do this in three ways; by script, Java code or via the admin UI. In our example will use the scripts to deploy the PU. First we start the GigaSpace Agent (GSA) that will create our IMDG on this machine: GS_HOME\bin\gs-agent.bat GS_HOME/bin/gs-agent.sh And now we deploy the PU onto the IMDG: GS_HOME\bin\gs.sh deploy eventPU.jar We assume that the jar we created is named eventPU.jar If you startup the Admin UI you will be able to see that through the deployment a space called eventSpace was created and a PU named with the jar name. Client interface Now its time to create a client that creates events and writes them into the space. We will attach a listener on the client side to the space that will receive events when the payment is processed. @EventDriven @Polling @NotifyType(write = true) public class ClientListener { // Define the event we are interested in @EventTemplate Payment unprocessedData() { Payment template = new Payment(); template.setStatus(ETransactionStatus.PROCESSED); return template; } @SpaceDataEvent public Payment eventListener(Payment event) { System.out.println("Processed Payment received "); return null; } } public void postPayment() { // Register the event handler on the Space this.registerPollingListener(); // Create a payment Payment payment = new Payment(); payment.setCreatedDate(new Date(System.currentTimeMillis())); payment.setPayingAccountId(new Integer(1)); payment.setPaymentAmount(new Double(120.70)); // write the payment into the spaceO space.write(payment); } public void registerPollingListener() { Payment payment = new Payment(); payment.setStatus(ETransactionStatus.PROCESSED); SimplePollingEventListenerContainer pollingEventListenerContainer = new SimplePollingContainerConfigurer( space).eventListenerAnnotation(new ClientListener()) .pollingContainer(); pollingEventListenerContainer.start(); } PU By default the PU is single threaded. With a simple annotation you can tell XAP how many threads the PU should run with. @EventDriven @Polling @Polling(concurrentConsumers = 3, maxConcurrentConsumers = 10) @NotifyType(write = true) public class PaymentProcessor { } Multiple PU’s Lets assume that we have two machines available for our deployment. We want to deploy 4 instances of our PU, two on each machine. The deployment script for this scenario looks like this: Learn moreLearn more With a statefull PU, embedded space ./gs.sh deploy -cluster schema=partitioned total_members=4,0 -max-instances-per-machine 2 eventPU.jar With a stateless PU ./gs.sh deploy -cluster total_members=4 -max-instances-per-machine 2 eventPU.jar
https://docs.gigaspaces.com/xap/10.2/tut-java/java-tutorial-part5.html
CC-MAIN-2022-27
en
refinedweb
Getting Started on Linux 25 Jun 201812 minutes to read ASP.NET Core application to use our components. The below guidelines demonstrate how to create an ASP.NET Core application and configure with our components. Prerequisites Set up the apt-get feeds, then install .NET Core on Ubuntu or Linux Mint. Execute the below commands in terminal window to set up the apt-get feeds for Ubuntu 17.10 and 17.04. Ubuntu 17.10 Open your terminal window and execute the following commands. - Register the Microsoft Product key as trusted. sudo sh -c 'echo "deb [arch=amd64] artful main" > /etc/apt/sources.list.d/dotnetdev.list' sudo apt-get update - Setup host Package feed commands. sudo sh -c 'echo "deb [arch=amd64] artful main" > /etc/apt/sources.list.d/dotnetdev.list' sudo apt-get update Ubuntu 17.04 - Setup host Package feed commands sudo sh -c 'echo "deb [arch=amd64] zesty main" > /etc/apt/sources.list.d/dotnetdev.list' sudo apt-get update Mono Installation The Mono Project (powered by Xamarin) is a project that tends to make the .NET Framework available to Microsoft’s foreign platforms. To run our ASP.NET Core 2.1.4 web application on Linux, install the Mono by executing the below commands. - Execute this command to add the Mono’s GPG key to the packages manager. apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF - Then need to add the required repositories to the configuration file. echo "deb wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list echo "deb wheezy-apache24-compat main" | sudo tee -a /etc/apt/sources.list.d/mono-xamarin.list - Before the Mono installation, execute the below command to download the packages list from the repositories and update the packages with new version. sudo apt-get update - Finally install the Mono. sudo apt-get install mono-complete .NET Core SDK installation Before you start, please ensure any previous .NET Core version installed on your machine. If its exists remove the previous version by using this script. - Executing the following command automatically install the .NET Core SDK. sudo apt-get install dotnet-sdk-2.1.4 - Execute the following command to prove the SDK installed successfully. dotnet --version Configuration To configure an ASP.NET Core application and utilize our components, follow the below guidelines. - Create an ASP.NET Core Project. - Configuring Syncfusion Components. Create an ASP.NET Core Application ASP.NET Core Web application can be created in any one of the following ways. Terminal (Command Line). Yeoman. Building Projects with Command Line The following steps helps to create a ASP.NET Core web application using terminal window. - Open a terminal window to create a new directory for your project creation. mkdir Sample Then navigate to your folder directory in your terminal window. In terminal window, the following steps helps to create a ASP.NET Core web application to configure our components. In the terminal window, we have an options to develop a below listed types of projects. The default type as console application. To know more about the project options and its syntax declarations refer the .NET link. Run the below command to know about project creation templates. dotnet new --help - Then run the below mentioned command to create a new web application. After command execution the project will be created within your folder. dotnet new mvc Building Projects with Yeoman Yeoman is a scaffolding tool for modern web apps and helps us to quick start a new web project. The following steps helps to create an ASP.NET Core 1.0 application using yeoman tool. Since Visual Studio Code uses folder structure for storing files of application, create a folder of the name ASP.NET. - Open the Terminal window and execute the below mentioned command to install the Node.js. curl -sL | sudo -E bash - sudo apt-get install -y nodejs - Install Yeoman and aspnet generator. sudo npm install -g yo generator-aspnet - Once Yeoman generator installed successfully, run the below command to invoke a ASP.NET Core project creation wizard. yo aspnet From the list of available projects, select the Web Application Basic [without Membership and Authorization] by using arrow keys. And then provide the project name or simply press the β€˜Enter’ key to create the project with default name. Configuring Syncfusion Components Open Visual Studio Code and open your Sample application folder using Open Folder option. Now your project folder is loaded in Visual Studio Code application. bower.json file has been deprecated from the latest version of DotNetCore 2.1. We have used syncfusion NPM packages and gulp task runner to download the necessary syncfusion scripts and CSS files into wwwroot folder. - Make sure latest version of npm and Node.js has installed in your machine. To check the npm and node version installed in your machine type the following commands in the terminal window. node -v npm -v - Open the global.json file. Remove the content in that file and include the installed dotnet version as depicted in the following code. { "sdk": { "version": "2.1.4" } } - Type the following command in the terminal window to create package.json file in your application. package.json will contain the project dependency packages and its version information. npm init --yes - After package.json file is created. Remove the content in that file and include the following dependencies. { "version": "1.0.0", "name": "asp.net", "private": true, "devDependencies": { "bootstrap": "^3.3.6", "jquery": "^3.1.1", "jsrender": "^0.9.75", "gulp": "^3.9.1", "syncfusion-javascript": "^16.1.24" } } - Now, run the following commands to download syncfusion scripts and CSS in the node_modules directory. npm install - Add the gulpfile.js in the root directory and kindly include the below mentioned gulp task in the gulpfile.js. var gulp = require('gulp'); gulp.task('copy', function () { gulp.src('./node_modules/syncfusion-javascript/**') .pipe(gulp.dest('./wwwroot/lib/syncfusion-javascript')); }); - To copy any other project dependency packages into the wwwroot folder, write a new task for each packages as given in the following code sample. gulp.task('bootstrap', function () { gulp.src('./node_modules/bootstrap/**') .pipe(gulp.dest('./wwwroot/lib/bootstrap')); }); gulp.task('jquery', function () { gulp.src('./node_modules/jquery/**') .pipe(gulp.dest('./wwwroot/lib/jquery')); }); gulp.task('jquery-validation', function () { gulp.src('./node_modules/jquery-validation/**') .pipe(gulp.dest('./wwwroot/lib/jquery-validation')); }); gulp.task('jsrender', function () { gulp.src('./node_modules/jsrender/**') .pipe(gulp.dest('./wwwroot/lib/jsrender')); }); To configure Visual Studio Code to use Gulp as task runner, Press Ctrl+Shift+P to bring up the command palette. Now type Configure Task and select Create task.json file from template. - This will create task.json file in .vscode directory. Once again, press Ctrl+Shift+P to bring up the command palette. Type β€œRun Task” and select it, which will bring up a list of tasks configured in Gulp. Choose the Gulp Task copy to run gulp task to copy necessary script and CSS files from node_modules directory to wwwroot directory. By the same way,type β€œRun Task” and select each gulp task mentioned in gulpfile.js to copy the scripts and CSS from the required package in node_modules directory to wwwroot directory. Now refer our Syncfusion package Syncfusion.EJ.AspNet.Core into your application for our components deployment. The packages configuration & installation guidelines will be documented here. Once the NuGet packages installation gets completed, the Syncfusion.EJ.AspNet.Core package reference was automatically added in .csproj file. <PackageReference Include="Syncfusion.EJ.AspNet.Core" Version="16.1600.0.24"> The ASP.NET Core NuGet packages versioning has been streamlined as 16.1.0.32 in shorter than older versioning (16.1600.0.32) from Volume 1, 2018 service pack 1 release (16.1.0.32). Since all the framework version wise assemblies are grouped into a single package. The package β€œSyncfusion.EJ.MVC” renamed into β€œSyncfusion.EJ.AspNet.Core” from Volume 3, 2016 (14.3.0.49) release. The β€œpreview2-final” keyword removed our Syncfusion packages naming from Volume 1, 2017 (15.1.0.33) release. - Open _viewimports.cshtml file from the views folder and add the following namespace for components references and Tag Helper support. @using Syncfusion.JavaScript @addTagHelper *, Syncfusion.EJ - Open Terminal window and navigate to your project folder then execute the following command to restore the packages which are all specified in your .csproj file. dotnet restore - Now refer the necessary scripts and CSS files in your _layout.cshtml page. NOTE Include the below mentioned scripts and CSS references under the appropriate environment. (For eg: If your environment is β€œDevelopment”, then refer the scripts and CSS files under the tag environment names=”Development”). Refer all the required external and internal scripts only once in the page with proper order. Refer this link to know about order of script reference. <html> <head> <link rel="stylesheet" href="~/lib/bootstrap/dist/css/bootstrap.css" /> <link href="~/lib/syncfusion-javascript/Content/ej/web/bootstrap-theme/ej.web.all.min.css" rel="stylesheet" /> <link href="~/lib/syncfusion-javascript/Content/ej/web/responsive-css/ej.responsive.css" rel="stylesheet" /> <script src="~/lib/jquery/dist/jquery.js"></script> <script src="~/lib/jsrender/jsrender.min.js"></script> <script src="~/lib/syncfusion-javascript/Scripts/ej/web/ej.web.all.min.js"></script> </head> <body> </body> </html> NOTE jQuery.easing external dependency has been removed from version 14.3.0.49 onwards. Kindly include this jQuery.easing dependency for versions lesser than 14.3.0.49 in order to support animation effects. - Add ScriptManager to the bottom of the layout.cshtml page. The ScriptManager used to place our control initialization script in the page. <ej-script-manager></ej-script-manager> - Now open your view page to render our Syncfusion components in Tag Helper syntax. <ej-date-picker</ej-date-picker> Finally execute the dotnet run command to run your sample browser. Then open your browser and paste the listening port localhost:5000 to view your sample in browser.
https://help.syncfusion.com/aspnet-core/gettingstarted/getting-started-linux-2-0-0
CC-MAIN-2022-27
en
refinedweb
Available with Spatial Analyst license. Summary Performs a maximum likelihood classification on a set of raster bands and creates a classified raster as output. Learn more about how Maximum Likelihood Classification). Any signature file created by the Create Signature, Edit Signature, or Iso Cluster tools is a valid entry for the input signature file. These will have a .gsg extension. By default, all cells in the output raster will be classified, with each class having equal probability weights attached to their signatures. The input a priori probability file must be an ASCII file consisting of two columns. The values in the left column represent class IDs. The values in the right column represent the a priori probabilities for the respective classes. Valid values for class a priori probabilities must be greater than or equal to zero. If zero is specified as a probability, the class will not appear on the output raster. The sum of the specified a priori probabilities must be less than or equal to one. The format of the file is as follows: 1 .3 2 .1 4 .0 5 .15 7 .05 8 .2 The classes omitted in the file will receive the average a priori probability of the remaining portion of the value of one. In the above example, all classes from 1 to 8 are represented in the signature file. The a priori probabilities of classes 3 and 6 are missing in the input a priori probability file. Since the sum of all probabilities specified in the above file is equal to 0.8, the remaining portion of the probability (0.2) is divided by the number of classes not specified (2). Therefore, classes 3 and 6 will each be assigned a probability of 0.1. A specified reject fraction, which lies between any two valid values, will be assigned to the next upper valid value. For example, 0.02 will become 0.025. There is a direct relationship between the number of unclassified cells on the output raster resulting from the reject fraction and the number of cells represented by the sum of levels of confidence smaller than the respective value entered for the reject fraction. If the Class Name in the signature file is different than the Class ID, then an additional field will be added to the output raster attribute table called CLASSNAME. For each class in the output table, this field will contain the Class Name associated with the class. For example, if the Class Names for the classes in the signature file are descriptive string names (for example, conifers, water, and urban), these names will be carried to the CLASSNAME field. The extension for an input a priori probability file is .txt. See Analysis environments and Spatial Analyst for additional details on the geoprocessing environments that apply to this tool. Parameters MLClassify(in_raster_bands, in_signature_file, {reject_fraction}, {a_priori_probabilities}, {in_a_priori_file}, {out_confidence_raster}) Return Value Code sample This example creates an output classified raster containing five classes derived from an input signature file and a multiband raster. import arcpy from arcpy import env from arcpy.sa import * env.workspace = "C:/sapyexamples/data" mlcOut = MLClassify("redlands", "c:/sapyexamples/data/wedit5.gsg", "0.0", "EQUAL", "", "c:/sapyexamples/output/redmlcconf") mlcOut.save("c:/sapyexamples/output/redmlc") This example creates an output classified raster containing five classes derived from an input signature file and a multiband raster. # Name: MLClassify_Ex_02.py # Description: Performs a maximum likelihood classification on a set of # raster bands. # Requirements: Spatial Analyst Extension # Import system modules import arcpy from arcpy import env from arcpy.sa import * # Set environment settings env.workspace = "C:/sapyexamples/data" # Set local variables inRaster = "redlands" sigFile = "c:/sapyexamples/data/wedit5.gsg" probThreshold = "0.0" aPrioriWeight = "EQUAL" aPrioriFile = "" outConfidence = "c:/sapyexamples/output/redconfmlc" # Execute mlcOut = MLClassify(inRaster, sigFile, probThreshold, aPrioriWeight, aPrioriFile, outConfidence) # Save the output mlcOut.save("c:/sapyexamples/output/redmlc02") Environments Licensing information - Basic: Requires Spatial Analyst - Standard: Requires Spatial Analyst - Advanced: Requires Spatial Analyst
https://pro.arcgis.com/en/pro-app/latest/tool-reference/spatial-analyst/maximum-likelihood-classification.htm
CC-MAIN-2022-27
en
refinedweb