text (string, 20–1.01M chars) | url (string, 14–1.25k chars) | dump (string, 9–15 chars, ⌀) | lang (4 classes) | source (4 classes)
---|---|---|---|---|
One of the problems that Microsoft IT people found with threads of execution in the Windows operating system is that while microprocessor speeds have increased dramatically, the same default clock interrupt timer was still being used. This was a problem for time accounting for threads, and for measuring how much CPU they actually use. The clock interval timer would fire every 15 ms, and at that point the thread currently executing would be charged with the entire 15 ms. In reality, one thread might have been awakened to do some work and sent back into the sleep state, another thread awakened to do some work and then put back to sleep, and then a third thread might begin to execute and be charged for the entire 15 milliseconds simply because it happened to be running when the clock interrupt fired. The kernel changes in Vista SP1 seek to rectify this lack of accounting for threads “that fly under the radar”.
While the success of an application may depend on it being broken down into multithreaded parts, creating and destroying a thread is expensive. Having lots of threads wastes memory resources, and thus hurts performance because the Operating System has to schedule and context switch between runnable threads. To improve this situation, the CLR contains code to manage its own thread pool. Now, while many applications use multiple threads, often those threads spend a lot of time in the sleeping state, waiting for an event to occur. Other threads might enter a sleeping state, and be awakened only periodically to poll for a change or update status information before going to sleep again. Using thread pooling provides your application with a pool of worker threads that are managed by the system, which allows one to concentrate on application tasks rather than thread management. Actually, if you have a number of short tasks that require you to use more than one thread, using the ThreadPool class is the most effective way to use multiple threads. One thread monitors the status of several wait operations queued to the thread pool. When a wait operation completes, a worker thread from the thread pool executes the corresponding callback function.
ThreadPool
You can also queue work items that are not related to a wait operation to the thread pool. To request that a work item be handled by a thread in the thread pool, call the QueueUserWorkItem method. This use of delegates and callbacks shows the strength of the C# language and its design heritage from the C/C++ languages. The following section will briefly touch on some basics before we use the ThreadPool class.
There are a number of steps to follow to create a new thread:
- Create a ThreadStart delegate that references the method the thread will run.
- Construct a Thread object, passing the ThreadStart delegate to its constructor.
- Call Start() on the Thread object to begin execution.
using System;
using System.Threading;

class Test {
    static void Main() {
        ThreadStart operation = new ThreadStart(DoWork);
        Thread myThread = new Thread(operation);
        myThread.Start();
    }

    static void DoWork() {
        Console.WriteLine("Thread: {0}",
            Thread.CurrentThread.ManagedThreadId);
    }
}
When the Start method is called, the DoWork() method is called on a new thread and the thread executes until the method completes. The DoWork method writes the phrase "Thread:" followed by the ManagedThreadId property. To create multiple threads, use a for loop (in this simple scenario), which on each iteration creates and starts a new thread:
using System;
using System.Threading;

class Test {
    static void Main() {
        ThreadStart operation = new ThreadStart(DoWork);
        for (int x = 1; x <= 5; ++x)
        {
            Thread myThread = new Thread(operation);
            myThread.Start();
        }
    }

    static void DoWork() {
        Console.WriteLine("Thread: {0}",
            Thread.CurrentThread.ManagedThreadId);
    }
}
There is only one ThreadPool object per process. When the CLR initializes, the thread pool has no threads in it. Internally, the thread pool maintains a queue of operation requests. When an application performs an asynchronous operation, you call some method that appends an entry into the thread pool’s queue. The thread pool is created the first time you call ThreadPool.QueueUserWorkItem, or when a timer or registered wait operation queues a callback method. One thread monitors all tasks that have been queued to the thread pool. When a task has completed, a thread from the thread pool executes the corresponding callback method. There is no way to cancel a work item after it has been queued.
The number of operations that can be queued to the thread pool is limited only by the available memory; however, the thread pool will enforce a limit on the number of threads it allows to be active in the process simultaneously.
Each thread uses the default stack size, runs at the default priority, and is in the multithreaded apartment. If one of the threads becomes idle (as when waiting on an event) in managed code, the thread pool injects another worker thread to keep all the processors busy. If all thread pool threads are constantly busy, but there is pending work in the queue, the thread pool will, after some period of time, create another worker thread. However, the number of threads will never exceed the maximum value. The ThreadPool also switches to the correct AppDomain when executing thread pool callbacks.
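For instance, you can query those limits at run time. The following minimal sketch only reads them (the numbers returned depend on the CLR version and the machine):
int workerThreads, completionPortThreads;
// upper limit on concurrently active thread pool threads
ThreadPool.GetMaxThreads(out workerThreads, out completionPortThreads);
Console.WriteLine("Max worker threads: {0}", workerThreads);
// how many of those threads are currently available (not busy)
ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
Console.WriteLine("Available worker threads: {0}", workerThreads);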
There are several scenarios in which it is appropriate to create and manage your own threads instead of using the ThreadPool (for example, when a thread must run at a particular priority or must remain a foreground thread).
This first example queues a very simple task, represented by the ThreadProc method, using QueueUserWorkItem:
using System;
using System.Threading;
public class Test {
    static void Main() {
        ThreadPool.QueueUserWorkItem(ThreadProc);  // runs ThreadProc on a thread pool thread
        Thread.Sleep(1000);                        // give the pool thread time to finish
    }
    static void ThreadProc(object state) { Console.WriteLine("Hello from the thread pool."); }
}
One of the predominant uses of the thread pool is to perform an asynchronous compute-bound operation. A compute-bound operation is an operation dominated by computation: for instance, recalculating cells in a spreadsheet application, or spell-checking words in a word-processing application. Both involve pure calculation rather than waiting on external devices. Ideally, compute-bound operations will not perform any synchronous I/O operations, because synchronous I/O operations "suspend" the calling thread while the underlying hardware does the work. A thread that is suspended and not running is a thread wasting system resources.
|
http://www.codeproject.com/Articles/32374/Thread-Basics-and-the-CLR-s-Thread-Pool?msg=2871922
|
CC-MAIN-2015-06
|
en
|
refinedweb
|
Looking back to 106 days ago, Bright Horizons Family Solutions, Inc (Symbol: BFAM) priced a 7,000,000 share secondary stock offering at $36.75 per share. Buyers in that offering made a considerable investment into the company, expecting that their investment would go up over the course of time, and based on early trading on Thursday, the stock is now 11.7% above the offering price.
Investors who did not participate in the offering but would be buyers of BFAM at a cheaper price might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular is the September put, offering a 2.9% annualized rate of return (at Stock Options Channel we call this the YieldBoost).
|
http://www.nasdaq.com/article/use-options-for-a-chance-to-buy-bfam-at-a-16-discount-cm369108
|
CC-MAIN-2015-06
|
en
|
refinedweb
|
The helper ID in this example is "org.kde.auth.example", so those three slots implement the actions "org.kde.auth.example.read", "org.kde.auth.example.write" and "org.kde.auth.example.longaction". Please note that due to QMetaObject being picky about namespaces, you NEED to declare the return type as ActionReply and not KAuth::ActionReply, so the using declaration is compulsory. The QVariantMap object that comes as an argument contains custom data coming from the application. Any data you want to return is put into the reply.data() object, which is another QVariantMap; it will be sent back to the application with the reply.
Because this class will be compiled into a standalone executable, we need a main() function and some code to initialize everything: you don't have to write it. Instead, you use the KDE4_AUTH_HELPER_MAIN() macro, which takes the helper ID and the helper class and generates that boilerplate for you.
To build the helper, KDE macros provide a function named kde4_install_auth_helper_files(). Use it in your cmake file like this:
The first argument is the cmake target name for the helper executable, which you have to build and install separately. Make sure to INSTALL THE HELPER IN ${LIBEXEC_INSTALL_DIR}, otherwise kde4_install_auth_helper_files() will not work.
Calling the helper from the application
Once the helper is ready, we need to call it from the main application. In the example's mainwindow.cpp we have a couple of choices:
- A synchronous call, using the Action::execute() method, will start the helper, execute the action and return the reply.
- An asynchronous call, by setting Action::setExecutesAsync(true) and calling ::execute(), will start the helper and return immediately.
The asynchronous call is the most flexible alternative, but you need a way to obtain the reply. This is done by connecting to a signal, but the Action class is not a QObject subclass. Instead, you connect to signals exposed by the ActionWatcher class. For every action id you use, there is one ActionWatcher object. You can retrieve it using the Action::watcher() method. If you execute an action using Action::executeAsync(), you can connect to the actionPerformed(ActionReply) signal to be notified when the action has been completed (or failed). As a parameter, you'll get a reply object containing all the data you need. As a convenience, you can also pass an object and a slot to the executeAsync() method to directly connect them to the signal, if you don't want to mess with watchers.
The piece of code that calls the action of the previous example is located in example/mainwindow.cpp in the on_readAction_triggered() slot. Although the execute() method will return only when the action is completed, the GUI will remain responsive because an internal event loop is entered. This means you should be prepared to receive other events in the meanwhile. Also, notice that you have to explicitly set the helper ID on the action: this is done for added safety, to prevent the caller from accidentally invoking a helper, and also because KAuth actions may be used without a helper attached (the default). In this case, action.execute() will return ActionSuccess if the authentication went well. This is quite useful if you want your user to authenticate before doing something which, implementation-wise, needs no privileged permissions.
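As an illustration only (this sketch is not taken from the example's source; the "filename" argument and the "contents" reply key are assumptions), a synchronous call to the read action could look roughly like this:
Action readAction("org.kde.auth.example.read");
readAction.setHelperID("org.kde.auth.example");
QVariantMap args;
args["filename"] = filename;               // assumed argument name
readAction.setArguments(args);
ActionReply reply = readAction.execute();  // blocks until the helper (and authentication) completes
if (reply.failed()) {
    // authorization was denied or the helper reported an error
} else {
    QString contents = reply.data()["contents"].toString();  // assumed reply key
}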
Asynchronous calls, data reporting, and action termination
For a more advanced example, we look at the action "org.kde.auth.example.longaction", implemented by the following helper slot:
ActionReply MyHelper::longaction(QVariantMap args)
{
    for (int i = 1; i <= 100; i++) {
        if (HelperSupport::isStopped())
            break;
        usleep(250000);
    }
    return ActionReply::SuccessReply;
}
In this example, the action is only waiting a "long" time using a loop, but we can see some interesting lines. The progress status is sent to the application using the HelperSupport::progressStep() method; when this method is called, the ActionWatcher associated with the action emits the progressStep() signal in the application, reporting the data back. The HelperSupport::isStopped() call returns true when the application has called the stop() method on the corresponding action object. The stop() method, this way, asks the helper to stop the action execution. It's up to the helper to obey this request, and if it does so, it should return from the slot, not exit.
The code that calls the action in the application connects a slot to the actionPerformed() signal and then calls executeAsync(). The progressStep() signal is directly connected to a QProgressBar, and the Stop button in the UI is connected to a slot that calls the Action::stop() method.
Please pay attention: while an action is being executed, the helper is busy with that action. Therefore, you can't simply call two execute() methods in sequence.
If you do, you'll get a HelperBusy reply from the second action. A solution would be to launch the second action from the slot connected to the first's actionPerformed signal, but this would be very ugly. Read further to know how to solve this problem.
Other features
To allow to easily execute several actions in sequence, you can execute them in a group. This means using the Action::executeActions() static method, which takes a list of actions and asks the helper to execute them with a single request. The helper will execute the actions in the specified order. All the signals will be emitted from the watchers associated with each action.
Sometimes the application needs to know when a particular action has started to execute. For this purpose, you can connect to the actionStarted() signal. It is emitted immediately before the helper's slot is called. This isn't very useful if you call execute(), but if you use executeActions() it lets you know when individual actions in the group are started.
|
http://api.kde.org/4.x-api/kdelibs-apidocs/kdecore/html/namespaceKAuth.html
|
CC-MAIN-2015-06
|
en
|
refinedweb
|
Generally, till now, while developing Android apps, or I should say PhoneGap-supported apps, to get the current GPS location I always referred to the following function:
navigator.geolocation.getCurrentPosition(onSuccess, onError);
which instantly gives all the required location details like latitude, longitude, accuracy etc.
But then, how do I get these details when I'm not using a web app (a PhoneGap application) but a pure native Android application or an Android service? For that, I tried researching various articles and documents and finally arrived at the solution as follows:
package com.mayuri.location;
import java.sql.Timestamp;
import java.util.Date;
import android.app.Activity;
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;
import android.util.Log;
import android.widget.TextView;
import android.widget.Toast;
public class LocationSample extends Activity {
TextView tv;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
tv = (TextView)this.findViewById(R.id.txtLocation);
LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
LocationListener ll = new mylocationlistener();
lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 0, 0, ll);
}
private class mylocationlistener implements LocationListener {
@Override
public void onLocationChanged(Location location) {
Date today = new Date();
Timestamp currentTimeStamp = new Timestamp(today.getTime());
if (location != null) {
Log.d("LOCATION CHANGED", location.getLatitude() + "");
Log.d("LOCATION CHANGED", location.getLongitude() + "");Strig str = "\n CurrentLocation: "+
"\n Latitude: "+ location.getLatitude() +
"\n Longitude: " + location.getLongitude() +
"\n Accuracy: " + location.getAccuracy() +
"\n CurrentTimeStamp "+ currentTimeStamp;
Toast.makeText(LocationSample.this,str,Toast.LENGTH_LONG).show();
tv.append(str);
}
}
@Override
public void onProviderDisabled(String provider) {
Toast.makeText(LocationSample.this,"Error onProviderDisabled",Toast.LENGTH_LONG).show();
}
@Override
public void onProviderEnabled(String provider) {
Toast.makeText(LocationSample.this,"onProviderEnabled",Toast.LENGTH_LONG).show();
}
@Override
public void onStatusChanged(String provider, int status, Bundle extras) {
Toast.makeText(LocationSample.this,"onStatusChanged",Toast.LENGTH_LONG).show();
}
}
}
The heart of the logic lies in the 'LocationManager' class, which provides access to the system location services.
'requestLocationUpdates' gives periodic location updates as per the arguments specified. The current timestamp is captured in onLocationChanged() and displayed along with the coordinates.
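For example, to throttle updates rather than receive every change (the 30-second and 10-meter values below are purely illustrative, not from the original post), the same call could be made as:
// request updates at most every 30 seconds and only after moving at least 10 meters
lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 30 * 1000, 10, ll);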
The most important thing to remember while using the location service in an Android application is to modify the application manifest file with the following permissions:
<uses-permission android:
<uses-permission android:
<uses-permission android:
<uses-permission android:
<uses-permission android:
Thats all on this from my end!!
Happy Coding :-))
~ Mayuri
12 comments:
Thanks a lot...:)
Really great... thanks for sharing your code. It helped to get me started...
Just 2 things..
1) Strig str => should be String str
2) You can switch the
LocationManager.NETWORK_PROVIDER for LocationManager.GPS_PROVIDER if you want to get finer position updates outside.
Does this get the location using Native GPS, not Assisted GPS. WHat it means is that will you be able to get coordinates WITHOUT cell coverage ?
hi mam,
when i execute this code in the emulator it couldn't not run. showing"unfortunately Location(name of the project) has stopped".pls can u tel me wht's wrong or anytoher setting to made while running this code
@Rekha
Please turn ON the gps on the simulator and try to add the permission to android manifest file
I want to get the current location of my place when i am executing this code there is some error in the location so please can anyone help me to resolve it
i want to develop a application called HELP which has the features like send the messages to five persons in the from the selected contacts in emergency by pressing help button and the message should also include the current location of the peson
Hey Hi ,I want to display the current location of my friend,is it possible in android ,if yes tell me the how i can get the his location by using his number.
If possible plz mail me info according ==>> [email protected]
Hey Hi ,I want to display the current location of my friend,is it possible in android ,if yes tell me the how i can get the his location by using his number.
If possible plz mail me info according ==>> [email protected]
i am having the error for the line : tv = (TextView)this.findViewById(R.id.txtLocation);
(txtLocation can not be resolved or is not a field)
and for line: Toast.makeText(LocationSample.this,str,Toast.LENGTH_LONG).show();
(Locationsample can not be resolved to a type.)
kindly help
|
http://catchmayuri.blogspot.com/2011/05/get-current-location-gps-position-in.html
|
CC-MAIN-2015-06
|
en
|
refinedweb
|
2. In the sample execution, we highlight the user's input in bold.
Fig 2.5. Addition program that displays the sum of two integers entered at the keyboard.
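The program listing of Fig. 2.5 is not reproduced in this excerpt. A reconstruction consistent with the line numbers discussed below (the standard two-integer addition program; treat it as an approximation rather than the book's exact listing) would be:
1 // Fig. 2.5: fig02_05.cpp
2 // Addition program that displays the sum of two integers.
3 #include <iostream> // allows program to perform input and output
4
5 // function main begins program execution
6 int main()
7 {
8 // variable declarations
9 int number1; // first integer to add
10 int number2; // second integer to add
11 int sum; // sum of number1 and number2
12
13 std::cout << "Enter first integer: "; // prompt user for data
14 std::cin >> number1; // read first integer from user into number1
15
16 std::cout << "Enter second integer: "; // prompt user for data
17 std::cin >> number2; // read second integer from user into number2
18
19 sum = number1 + number2; // add the numbers; store result in sum
20
21 std::cout << "Sum is " << sum << std::endl; // display sum; end line
22 } // end function main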
The comments in lines 1 and 2 state the name of the file and the purpose of the program. The C++ preprocessor directive in line 3 includes the contents of the <iostream> header. The program begins execution with function main (line 6). The left brace (line 7) begins main's body and the corresponding right brace (line 22) ends it.
Variable Declarations
Lines 9–11 declare the variables number1, number2 and sum to be data of type int. All variables must be declared with a name and a data type before they can be used in a program. Several variables of the same type may be declared in one declaration or in multiple declarations. We could have declared all three variables in one declaration by using a comma-separated list as follows:
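(The declaration itself is not shown in this excerpt; presumably it would be:)
int number1, number2, sum; // declares three variables of type int in one declaration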
This makes the program less readable and prevents us from providing comments that describe each variable's purpose.
Fundamental Types
We'll soon discuss the type double for specifying real numbers, and the type char for specifying character data. Real numbers are numbers with decimal points, such as 3.4, 0.0 and –11.19. A char variable may hold only a single lowercase letter, a single uppercase letter, a single digit or a single special character (e.g., $ or *). Types such as int, double and char are called fundamental types. Fundamental-type names are keywords and therefore must appear in all lowercase letters. Appendix C contains the complete list of fundamental types.
Identifiers.
Placement of Variable Declarations
Declarations of variables can be placed almost anywhere in a program, but they must appear before their corresponding variables are used in the program. For example, in the program of Fig. 2.5, the declaration in line 9 could have been placed immediately before line 14, the declaration in line 10 could have been placed immediately before line 17, and the declaration in line 11 could have been placed immediately before line 19.
Obtaining the First Value from the User
Line 13 displays Enter first integer: followed by a space. This message is called a prompt because it directs the user to take a specific action. We like to pronounce the preceding statement as "std::cout gets the character string "Enter first integer: "." Line 14 uses the standard input stream object std::cin and the stream extraction operator (>>) to obtain a value from the keyboard and store it in the variable number1.
The std::cout and std::cin stream objects facilitate interaction between the user and the computer. Because this interaction resembles a dialog, it's often called interactive computing.
Obtaining the Second Value from the User
Line 16 prints Enter second integer: on the screen, prompting the user to take action. Line 17 obtains a value for variable number2 from the user.
Calculating the Sum of the Values Input by the User
The assignment statement in line 19 adds the values of variables number1 and number2 and assigns the result to variable sum.
Displaying the Result
Line 21 displays the character string Sum is followed by the numerical value of variable sum followed by std::endl—a so-called stream manipulator. The name endl is an abbreviation for "end line" and belongs to namespace std. The std::endl stream manipulator outputs a newline, then "flushes the output buffer." This simply means that, on some systems where outputs accumulate in the machine until there are enough to "make it worthwhile" to display them on the screen, std::endl forces any accumulated outputs to be displayed at that moment. This can be important when the outputs are prompting the user for an action, such as entering data.
The preceding statement outputs multiple values of different types. The stream insertion operator "knows" how to output each type of data. Using multiple stream insertion operators (<<) in a single statement is referred to as concatenating, chaining or cascading stream insertion operations. It's unnecessary to have multiple statements to output multiple pieces of data.
Calculations can also be performed in output statements. We could have combined the statements in lines 19 and 21 into a single statement, thus eliminating the need for the variable sum.
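(The combined statement referred to above is not shown in this excerpt; presumably it would be:)
std::cout << "Sum is " << number1 + number2 << std::endl;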
A powerful feature of C++ is that you can create your own data types called classes (we introduce this capability in Chapter 3 and explore it in depth in Chapters 9 and 10). You can then "teach" C++ how to input and output values of these new data types using the >> and << operators (this is called operator overloading—a topic we explore in Chapter 11).
|
http://www.informit.com/articles/article.aspx?p=1705442&seqNum=4
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Out of Band Management Security Best Practices and Privacy Information
Updated: October 2009
Applies To: System Center Configuration Manager 2007, System Center Configuration Manager 2007 R2, System Center Configuration Manager 2007 R3, System Center Configuration Manager 2007 SP1, System Center Configuration Manager 2007 SP2
This topic contains security best practices and privacy information for out of band management in Microsoft System Center Configuration Manager 2007 SP1 and later.
Security Best Practices
Request customized firmware before purchasing AMT-based computers. Computers that can be managed out of band have BIOS extensions that can set customized values to significantly increase security when these computers are on your network. Check which BIOS extension settings are available from your computer manufacturer, and specify your choice of values. For more information, see Decide Whether You Need a Customized Firmware Image From Your Computer Manufacturer. If your AMT-based computers do not have the firmware values that you want to use, you might be able to manually specify them yourself. For more information about manually configuring the BIOS extensions, refer to the Intel documentation or the documentation from your computer manufacturer. You can also refer to the Intel vPro Expert Center: Microsoft vPro Manageability Web site. Customize the following options to increase your security:
- Replace all certificate thumbprints of external certification authorities (CAs) with the certificate thumbprint of your own internal CA. This prevents rogue provisioning servers from attempting to provision your AMT-based computers, and you will not have to purchase provisioning certificates from external CAs. For information about how to locate the certificate thumbprint of your internal root CA, see How to Locate the Certificate Thumbprint of Your Internal Root Certificate for AMT Provisioning.
- Use a custom password for the MEBx Account so that the default value of admin is not used. Then specify this password with an AMT Provisioning and Discovery Account in Configuration Manager. This prevents rogue provisioning servers from attempting to provision your AMT-based computers with the known default password. For more information, see About the MEBx Account and How to Add an AMT Provisioning and Discovery Account.
- Change the value for the default provisioning server. However, an IP address cannot be used for multiple AMT-based computers if they will be provisioned by different sites. If you configure an alternative name rather than an IP address, you must configure DNS to perform name resolution. Using a custom port is more secure than using the default port for out of band provisioning. If you will use out of band provisioning, configure your alternative port number on the Out of Band Management Properties: General tab.
- Applicable to Configuration Manager 2007 SP2 only: From Component Configuration, do not select the option Allow out of band provisioning on the General tab of the Out of Band Management Properties dialog box. This option is not selected by default. With this default setting, Configuration Manager will not respond to out of band provisioning requests, which helps to prevent rogue computers from being provisioned out of band.
- To help prevent rogue computers from being provisioned out of band: Do not use the Import Out of Band Computers wizard to add new computers to the Configuration Manager database; configure Windows firewall on the server running the out of band service point role to block the provisioning port (by default, TCP 9971); and do not register an alias for the out of band service point in DNS. For more information about the DNS alias, see Decide Whether You Should Register an Alias for the Out of Band Service Point in DNS. Additionally, restrict physical access to the network, and monitor clients to detect unauthorized computers.
- To help prevent rogue servers from provisioning your AMT-based computers, use a custom password for the MEBx Account in the AMT BIOS extensions so that the default value of admin is not used. Then specify this password with an AMT Provisioning and Discovery Account in Configuration Manager. For more information, see About the MEBx Account and How to Add an AMT Provisioning and Discovery Account.
If you cannot use in-band provisioning because the computer is new and has no operating system installed, consider using operating system deployment to install the operating system and install the client for Configuration Manager 2007 SP1 or later so that the computer can be provisioned in-band. Unlike out of band provisioning, operating system deployment does not create an authenticated account in Active Directory Domain Services and does not request a server authentication certificate from your enterprise CA. For more information about operating system deployment, see Operating System Deployment in Configuration Manager. If you cannot use in-band provisioning because the computer does not have the client installed for Configuration Manager 2007 SP1 or later or because the computer does not have a version of AMT that is natively supported by Configuration Manager, install the client for Configuration Manager 2007 SP1 or later and upgrade the firmware to a supported version as appropriate. For more information about the AMT versions supported by Configuration Manager, see Overview of Out of Band Management.
Manually revoke certificates and delete Active Directory accounts for AMT-based computers that are blocked by a Configuration Manager 2007 SP1 site. Computers that are blocked by a Configuration Manager 2007 SP1 site continue to accept out of band management communication. When an AMT-based computer is blocked because it is no longer trusted, take the following manual action:
- On the issuing CA, revoke the certificate that was issued to the site server with the FQDN of the AMT-based computer in the certificate Subject.
- In Active Directory Domain Services, disable or delete the AMT account that was created for the AMT-based computer.
Use a dedicated certificate template for provisioning AMT-based computers If you are using an Enterprise version of Windows Server for your enterprise CA, create a new certificate template by duplicating the default Web Server certificate template, ensure that only Configuration Manager site servers have Read and Enroll permissions, and do not add additional capabilities to the default of server authentication. Having a dedicated certificate template allows you to better manage and control access to help prevent elevation of privileges. If you have a Standard version of Windows Server for your enterprise CA, you will not be able to create a duplicate certificate template. In this scenario, do not allow Read and Enroll permissions to computers other than Configuration Manager site servers that will provision AMT-based computers..
Disable AMT in the firmware if the computer is not supported for out of band management Even when AMT-based computers have a supported version of AMT, there are some scenarios that out of band management does not support. These scenarios include the following: workgroup computers, computers that have a different namespace, and computers that have a disjointed namespace. To ensure that AMT-based computers are not published to Active Directory Domain Services and do not have a PKI certificate requested for them, disable AMT in the firmware. AMT provisioning in Configuration Manager creates domain credentials for the accounts published to Active Directory Domain Services, which risks the elevation of privileges when the computers are not part of your Active Directory forest.
Use a dedicated OU to publish AMT-based computers. Do not use an existing container or OU to publish the Active Directory accounts that are created during AMT provisioning. A separate OU allows you to better manage and control these accounts and helps to ensure that they are not granted more privileges than they need.
Use Group Policy to Restrict User Rights for the AMT Accounts Apply restrictive user rights to the AMT accounts that are published to Active Directory Domain Services to help protect against elevation of privileges and to reduce the attack surface if an attacker gains access to one of these accounts. Create a security group that contains the AMT accounts automatically created by Configuration Manager during the ATM provisioning process, and then add this group to the following enabled group policy settings under \Computer Configuration\Windows Settings\Security Settings\Local Policy\User Rights Assignment:
- Deny access to this computer from the network
- Deny log on as a batch job
- Deny log on as a service
- Deny log on locally
- Deny log on through Terminal Services
Apply these group policy settings to all computers in the forest. Periodically review and revise if necessary the group membership to ensure that it contains all the AMT accounts currently published to Active Directory Domain Services.
Use a dedicated collection for in-band provisioning. Do not use an existing collection that contains more computers than you want to provision in-band. Instead, create a query-based collection by using the procedure for in-band provisioning in How to Provision Computers for AMT. When the site is in mixed mode, ensure that these computers are approved. For more information about approval, see About Client Approval in Configuration Manager and How to Approve Configuration Manager Clients.
Restrict who has the Media Redirection right and the PT Administration right (Configuration Manager 2007 SP1) or Platform Administration right (Configuration Manager 2007 SP2). The PT Administration right (Configuration Manager 2007 SP1) and Platform Administration right (Configuration Manager 2007 SP2) automatically include all AMT rights, including the Media Redirection right.
Retrieve and store image files securely when booting from alternative media to use the IDE redirection function When you boot from alternative media to use the IDE redirection function, whenever possible, store the image files locally on the computer running the out of band management console. If you must store them on the network, ensure that connections to retrieve the files over the network use SMB signing to help prevent the files being tampered with during the network transfer. In both scenarios, secure the stored files to help prevent unauthorized access (for example, using NTFS permissions and the encrypted file system).
Minimize the number of AMT Provisioning and Discovery Accounts Although you can specify multiple AMT Provisioning and Discovery Accounts so that Configuration Manager can discover computers that have management controllers and provision them for out of band management, do not specify accounts that are not currently required and delete accounts that are no longer needed. Specifying only the accounts that you need helps to ensure that these accounts are not granted more privileges than they need and helps to reduce unnecessary network traffic and processing. For more information about the AMT Provisioning and Discovery Account, see Determine Whether to Configure an AMT Provisioning and Discovery Account for Out of Band Management and About the AMT Provisioning and Discovery Account.
For Configuration Manager 2007 SP2 only: Manually add computers provisioned for 802.1X and wireless to a security group From Component Configuration, do not automatically add computers to a security group by using the option Automatically add AMT-based computers to security group on the 802.1X and Wireless tab of the Out of Band Management Properties dialog box. To help guard against elevation of privileges, carefully control membership of a security group that is used to grant network access to computers. Select the option Do not automatically add AMT-based computers to security group, and manually add known and trusted computer accounts to a security group.
For Configuration Manager 2007 SP2 only: Use a single certificate template for client authentication certificates whenever practical Although you can specify different certificate templates for each of the wireless profiles, use a single certificate template unless you have a business requirement for different settings to be used for different wireless networks, specify only client authentication capability, and dedicate this certificate template for use with Configuration Manager out of band management. For example, if one wireless network required a higher key size or shorter validity period than another, you would need to create a separate certificate template. Having a single certificate template allows you to more easily control its use and guard against elevation of privileges.
For Configuration Manager 2007 SP2 only: Ensure only authorized administrators perform auditing actions and manage the audit logs as required Depending on the AMT version, Configuration Manager might stop writing new entries to the AMT audit log when it is nearly full or might overwrite old entries. To ensure that new entries are logged and old entries are not overwritten, periodically clear the audit log if required, and save the auditing entries. For more information about how to manage the audit log and monitor auditing activities, see How to Manage the Audit Log for AMT-Based Computers.
Privacy Information
The out of band management console in Microsoft System Center Configuration Manager 2007 SP1 and later manages computers that have the Intel vPro chip set and Intel Active Management Technology (Intel AMT) with a firmware version that is supported by Configuration Manager. Configuration Manager 2007 SP1 and later temporarily collects information about the computer configuration and settings, such as the computer name, IP address, and MAC address. Information is transferred between the managed computer and the out of band management console by using an encrypted channel. This feature is not enabled by default and typically no information is retained after the management session is ended. If you enable auditing in Configuration Manager 2007 SP2, you can save auditing information to a file that includes the IP address of the AMT-based computer that is managed.
See Also
For additional information, see Configuration Manager 2007 Information and Support.
To contact the documentation team, email [email protected].
|
https://technet.microsoft.com/pt-pt/library/cc161895.aspx
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Odoo Help
problem in my module with field many2one.
hi.
I just created a table/model named "logistic.charge" for storing three values with fields, say: from, to, fare. Then from the form I successfully saved the three values.
Then I created a table/model named "logistic.logistic" with the same fields mentioned above; here many2one is used for picking the value from logistic.charge. This installed successfully. Here is the py file:
from openerp.osv import fields, osv
class logistic_detail(osv.osv):
    _name = "logistic.logistic"
    _description = "Logistic charges"
    _columns = {
        'from': fields.many2one('logistic.charge', 'from', 'from'),
        'to': fields.many2one('logistic.charge', 'to', 'to'),
        'fare': fields.many2one('logistic.charge', 'fare', 'fare'),
    }
logistic_detail()
But in the form view there are three fields: from, to, fare. When I click the arrow in one of these fields, it does not display the content of the corresponding column in the table logistic.charge; instead it displays the name of the table and an id. Here is the image.
Is there any error in the programming? If in its place I put res.partner, it gives output by listing the partners in the table.
You forgot the _rec_name. You have to add it when you are not using 'name' column.
class logistic_charge(osv.osv):
    _name = 'logistic.charge'
    ...
    _rec_name = 'field_name'
    _columns = {
        'field_name': fields.char(....),
    }
I'm sure that you created the logistic_charge object, so add the _rec_name before columns with the name of one of your fields, for example here i have 'field_name' maybe you have other fields. The field that you want to display is the _rec_name.
hai..Grover . I am new in python . so will you please mention how it is to be implemented in my py file. Where _rec_name should be placed, it is a field in the table.
answer updated
Hello Grover now it solves the problem in my question. But an another problem is in my first field "from" the actual value will be displayed that i entered in the first form. This value should be also displayed in the another fields.
I use a function and relation with this in my_file.py:
def _buscar_shortname_alm(self, cr, uid, context=None):
    obj = self.pool.get('stock.location')
    ids = obj.search(cr, uid, [('shortcut', '!=', False), ('usage', 'in', ('internal', 'production'))], order='shortcut')
    resultado = obj.read(cr, uid, ids, ['id', 'shortcut'], context)
    # convert to a list of tuples
    res = []
    for record in resultado:
        # build the inner tuple
        rec = []
        # convert the ID to a string to build the tuple
        rec.append(str(record['id']))
        rec.append(record['shortcut'])
        # append to the final result
        res.append(tuple(rec))
    return res
And in _columns put this:
'aux_almacen_orig': fields.selection(_buscar_shortname_alm, type="char",store=True, method=True,size=256, required=True, string="Almacen Origen" ),
Then i put this in my_file.xml
<field name="aux_almacen_orig" />
And finally this is the result.
I don't want to create another topic because my problem is the same. I've tried your solutions, this is what I have so far:
class open_classB(osv.osv):
    _name = "open.classB"
    _rec_name = 'desc'
    _columns = {
        'desc': fields.char("Description:", size=50, required=True),
    }
open_classB()

class open_classA(osv.osv):
    _name = 'open.classA'
    _columns = {
        'notes': fields.text('Notes:'),
        'desc_id': fields.many2one('open.classB', 'Description:', 'desc'),
    }
open_classA()
Before what people said here, I didn't have the "_rec_name", and it didn't work also. The items continues to show me like the image on the first post.
I think the object "logistic.charge" does not have a field "name". Try to add a field "name" to this object, or use _rec_name = "field_a" ("field_a" being the field whose value you want to display in the many2one field).
I've already told you. You have to add _rec_name in 'logistic.charge' class. res.partner displays the name because that class has a 'name' field. If you don't have a name field you have to add _rec_name property.
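For instance, applied to the model from the question (the field names and types below are assumptions, since the original logistic.charge definition was not posted), the class could look like this:
class logistic_charge(osv.osv):
    _name = 'logistic.charge'
    _rec_name = 'from'   # the value of 'from' will now be shown in many2one widgets
    _columns = {
        'from': fields.char('From', size=64),
        'to': fields.char('To', size=64),
        'fare': fields.float('Fare'),
    }
logistic_charge()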
|
https://www.odoo.com/forum/help-1/question/problem-in-my-module-with-field-many2one-29746
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Created on 2015-04-24.11:54:51 by eaaltonen, last changed 2015-11-10.16:30:18 by zyasoft.
My patched version of IPython 0.10.2 crashed on startup with jython 2.7rc3.
The reason was simply mismatch of attribute name.
98c98
< _console.startup_hook = function
---
> _console.startupHook = function
With this change, the old Ipython did start. No tab completion, sadly.
Let's see if we can get the current version of IPython to work in 2.7.1. Please note that Jython's default console now supports tab completion.
There's a setStartupHook method; one probably should use that one.
use that one.
As a side note, I'm trying this branch, and with
the change in readline.py I have IPython 3.0 running. Tab completion does work, but is a bit wonky.
This is the diff against current trunk:
diff -r bb6cababa5bd Lib/readline.py
--- a/Lib/readline.py Sun May 17 09:10:22 2015 +0100
+++ b/Lib/readline.py Wed Jun 24 18:02:31 2015 +0200
@@ -95,7 +95,7 @@
_reader.redrawLine()
def set_startup_hook(function=None):
- _console.startup_hook = function
+ _console.setStartupHook(function)
def set_pre_input_hook(function=None):
warn("set_pre_input_hook %s" % (function,), NotImplementedWarning, stacklevel=2)
This is an easy fix, but hard to test with something like pexpect. In part, this is because JLine2 does not have completely compatible support compared to readline for this functionality, especially under a pty. I also suspect that this may vary from platform to platform (OS X vs Linux vs Windows), but something we need to look into more.
Fixed as of
|
http://bugs.jython.org/issue2338
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
Introduction
More about it you can search on my website (link on my profile). I do not want to insist on it too much. Let's say it's a project I have been working on for a long time and that has undergone many transformations over time. I think I will not ever finish it, but working on it I discovered/learned many interesting things, some of which will be posted (or are already posted) here on Instructables.
The idea that excited me (so much) was to use this TFT touchscreen display to make an XY controller (pad) with visual feedback.
Step 1: Overview of the Shield
Resolution: 240x320.
Size: 2.8 Inch.
Colors: 262K
TFT driver: ILI9325DS (supported by UTFT library)
Touch driver: XPT2046 (suported by Utouch library *this particular model with some changes)
Interface:
- TFT: 8bit data and 4bit control.
- Touch Screen: 5 bit.
- SD : 4bit.
Price: $19 at... (From here I bought it )
Step 2: TFT Hardware Setup
Installing the shield is straightforward, but you still need to select the correct voltage before use. There is a switch in the top right, next to the SD socket. For Arduino Mega and Arduino Uno you must select 5V. Also, do not push the shield in completely.
Step 3: Libraries Setup
1. First you need to install the UTFT library. The latest version is here. I'm going to put the library here too (file UTFT.zip). You never know what might happen in the future.
2. Same thing about UTouch library (file UTouch.zip).
3. Now we need to replace UTouch.cpp and UTouch.h with the same files from UTouchWorking.zip. You can read more about this here.
4. If you use Arduino MEGA need to edit file
...arduino-1.5.8\libraries\UTFT\hardware\avr\HW_AVR_defines.h
and uncomment the following line (if you use an Arduino Uno this modification is not needed):
#define USE_UNO_SHIELD_ON_MEGA 1
5. To optimize memory usage we need to edit file
...\arduino-1.5.8\libraries\UTFT\memorysaver.h
and uncomment the lines for the display controllers that you don't use: for this display uncomment all lines except this:
//#define DISABLE_ILI9325C 1 // ITDB24
Step 4: Touch Screen Calibration
To work properly, the touchscreen need calibration.
To make the calibrations for modified UTouch library we need to run this sketch: SimplerCalibration.ino (SimplerCalibration.zip):
We need to match the orientation of UTFT library with UTouch library:
myGLCD.InitLCD(LANDSCAPE); myTouch.InitTouch(LANDSCAPE);
There are 4 steps. We need to edit the #define selector line for every step, then upload and run the sketch step by step:
#define selector 1
In this step we will verify that we put the correct resolution in the SimplerCalibration.ino file. This is an optional step. I put it here because that was how it was designed by the author of this solution.
#define selector 2
This is the most important of the four steps. Here is where the actual calibration happens. After uploading the sketch you must obtain the left-top point and the right-bottom point like in the photo above, and make the modification in the file:
...\arduino-1.5.8\libraries\UTouch\UTouch.cpp
void UTouch::InitTouch(byte orientation)
{
    orient = orientation;
    _default_orientation = 0;
    touch_x_left = 306;     // enter number for left-most touch
    touch_x_right = 3966;   // enter number for right-most touch
    touch_y_bottom = 3906;  // enter number for bottom-most touch
    touch_y_top = 174;      // enter number for top-most touch
    disp_x_size = 320;      // do not forget them if different
    disp_y_size = 240;      // do not forget them if different
    prec = 10;
    // ..................................................
We see that the values for touch_y_bottom and touch_y_top are swapped in relation to the values obtained from the screen (because the origin of the TFT axes is different from the origin of the touch screen). You will have to figure that out for every model of TFT: you might or might not need to swap the y-axis or x-axis values, depending on your TFT model. For this particular model it works like above.
#define selector 3
Test program. Display x y coordinates of touch point. Optional.
#define selector 4
Test program. Puts a white pixel at the touch point. Optional. It is still very intuitive: if you see that those pixels are mirrored on the x or y axis, you need to swap the values for that axis.
Step 5: Examples
If everything is OK with the calibration we can move forward and run examples from the UTFT and UTouch libraries.
Let's not forget to edit lines that refers to the type of display and touch screen:
UTFT myGLCD(ITDB24, A5,A4,A3,A2);
UTouch myTouch(A1,10,A0,8,9);
I have attached photos taken from two examples. UTouch_ButtonTest and UTouch_QuickPaint.
Please note that it was quite difficult (for me) to take usable photos of the TFT, because if I shoot directly (vertically) the camera's reflection appears. It is as if I were trying to photograph the surface of a mirror (with details).
Step 6: XY MIDI Pad
If you ran the last examples you have noticed they run quite slowly. There is nothing wrong with the TFT display, and there is also nothing wrong with the code or libraries. This is because we are trying to drive it with an 8-bit microcontroller at 16 MHz (or 20 MHz). In fact this display can run much faster than we can send data (with our processor).
Indeed we could make some improvements to the code and libraries, but the changes would not be dramatic. Ideally we need a more powerful processor: 32-bit (or even 16-bit), a DMA controller, >150 MHz, more RAM (for a video buffer), etc.
Instead we can design our programs to update only a small area of the screen when we need speed.
I put the whole code for the Arduino project XY MIDI Pad here (attached to this step, MIDIPad.zip). It can be studied in detail to see how I applied what I said above. However, I will comment on some sections.
In the function draw_Pad(long x, long y), before drawing the new lines, we clear the old lines by redrawing them with the background color.
void draw_Pad(long x, long y)
{
  // we draw three lines for x and three lines for y, for better visibility
  myGLCD.setColor(pad_bk);
  myGLCD.drawLine(old_x-1, pad_topY, old_x-1, pad_bottomY);  // clear old line x-1
  myGLCD.drawLine(old_x+1, pad_topY, old_x+1, pad_bottomY);  // clear old line x+1
  myGLCD.drawLine(old_x, pad_topY, old_x, pad_bottomY);      // clear old line x
  myGLCD.drawLine(pad_topX, old_y-1, pad_bottomY, old_y-1);  // clear old line y-1
  myGLCD.drawLine(pad_topX, old_y+1, pad_bottomY, old_y+1);  // clear old line y+1
  myGLCD.drawLine(pad_topX, old_y, pad_bottomY, old_y);      // clear old line y
  myGLCD.setColor(reticle_color);
  myGLCD.drawLine(x-1, pad_topY, x-1, pad_bottomY);          // draw new line x-1
  myGLCD.drawLine(x+1, pad_topY, x+1, pad_bottomY);          // draw new line x+1
  myGLCD.drawLine(x, pad_topY, x, pad_bottomY);              // draw new line x
  myGLCD.drawLine(pad_topX, y-1, pad_bottomX, y-1);          // draw new line y-1
  myGLCD.drawLine(pad_topX, y+1, pad_bottomX, y+1);          // draw new line y+1
  myGLCD.drawLine(pad_topX, y, pad_bottomX, y);              // draw new line y
}
I have not used the well known Arduino MIDI library (like my previous project). Instead I use a simple function to send MIDI CC commands:
void SendMIDIControl(byte channel, byte controller, byte value)
{
  byte tmpChannel = (channel & 0b00001111) - 1;  // 0 = channel 1, 1 = channel 2, etc.
  tmpChannel = 0b10110000 + tmpChannel;          // MIDI data first bit always 1,
                                                 // + 011 control change command
                                                 // + MIDI channel
  byte tmpController = controller & 0b01111111;  // MIDI data first bit always 0
  byte tmpValue = value & 0b01111111;            // MIDI data first bit always 0
  Serial1.write(tmpChannel);
  Serial1.write(tmpController);
  Serial1.write(tmpValue);
}
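As a usage illustration (the controller numbers, the channel, and the touch_x/touch_y variables below are assumptions, not names from MIDIPad.zip), the pad position could be sent like this:
// send the pad position as two MIDI CC messages on channel 1
byte ccX = CoordToMIDI(touch_x);  // scale raw X coordinate to 0..127
byte ccY = CoordToMIDI(touch_y);  // scale raw Y coordinate to 0..127
SendMIDIControl(1, 1, ccX);       // channel 1, CC #1 = X position
SendMIDIControl(1, 2, ccY);       // channel 1, CC #2 = Y position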
For sending MIDI commands to the PC via USB I used a module that I made previously. For details see my project here.
Important!:
We can not use the first serial port pins because they are already used by the TFT shield.
- For Arduino UNO we must use SoftwareSerial.
- For Arduino MEGA we can use SoftwareSerial or Serial1 / Serial2 (I tested with SoftwareSerial and Serial1)
My Arduino USB MIDI Interface module can (theoretically) be replaced with a combination of a MIDI shield and a USB-to-MIDI converter. I have not tested it this way (I have neither).
Step 7: Final
After I played for a while with this project I saw that there is room for improvement (as always).
We can give up the on-screen buttons on the right and manage settings with some physical push buttons instead. This will increase the usability of the pad. This project was designed in this form to be a starting point (and proof of concept) for your MIDI projects.
In this case we need to map the coordinates to MIDI separately for X and Y:
byte CoordToMIDI(unsigned int coord)
{
  float temp;
  temp = coord;
  temp = temp / 1.72;
  return (byte)temp;
}
will change into:
byte CoordXToMIDI(unsigned int coord)
{
  float temp;
  temp = coord;
  temp = temp / another_value1;  // depends on your virtual pad x size
  return (byte)temp;
}
byte CoordYToMIDI(unsigned int coord)
{
  float temp;
  temp = coord;
  temp = temp / another_value2;  // depends on your virtual pad y size
  return (byte)temp;
}
We can also try using an Arduino Due. Because this board uses 3V logic, my interface needs a level converter and the TFT switch must be moved to the 3V position.
Thanks for your attention!
6 Comments
i want to use it as a MIDI XY pad to controll functions on my DAW like Ableton Live 9. I will try this ?
It should work smoothly. At the time I did the project, I tried several DAW programs (demo versions) ... Meanwhile I gave up windows forever... so I tried it with LMMS with Linux and worked too.
thank u so much. i will get to work soon. what a brilliant instructable. excellent work!
Thank you for your words! :) I like to see some of my projects are useful. Good luck with your projects. See you around!
Smart idea! Thanks for shearing :)
Thank you! :)
|
http://www.instructables.com/id/XY-MIDI-Pad-with-Arduino-and-TFT/
|
CC-MAIN-2018-13
|
en
|
refinedweb
|
When was the last tutorial you’ve taken? It could have been my Free Email Course about Django REST Framework, or it could have been the “Official Django Tutorial” that will have you go through a simple Django Application.
What did you think about these tutorials? Which one helped you to learn the best? Why did it help you learn?
I’m not saying my course is the best, ever. Far from it. However, it was an experiment to see if tutorials can be better, or not.
This is an opinion piece about my feelings about tutorials and how it might be possible to make them better.
Tutorials Don’t Help you Learn
Tutorials don’t really help you learn how to use the technology that you want to learn. Every tutorials is an overview to show how the technology works, how easy it is to get up to speed, and gets you excited to read the documentation so you can learn more.
You will get a broad overview of the technology in question, and after you’re finished with the tutorial, you might try to create an app of your own. But, how well do you do? Does it work out? After creating the Polls App, how confident were you to create your own app?
“Type This Here then run it and see what happens”
Look at the Official Django Tutorial. Here is a small snippet of it:
Let’s write the first view. Open the file
polls/views.py and put the following Python code in it:
polls/views.py
from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello, world. You're at the polls index.")
This is a typical example of, “Hey! Go to this page and type this here.”
Afterwards, it doesn’t tell you to run the application. The tutorial makes sure the application will run correctly before it tells you to run it. This is completely backwards.
There is A LOT you can learn by running the application at this point. You will get an error!! But, later, you’ll understand why you’re getting that error.
As an example, my email course had a few errors in it that you won’t know or understand until later in the course. I did this very blatantly. For the reader, it would be really hard to put 2 and 2 together and your understanding wouldn’t be as strong if you didn’t see the outcome of the errors. Then, also, you learned how to fix the issue!
If that makes sense, read on. I'll show you how to make your tutorials better.
Fix your Tutorials, Here’s how
If you want to create a tutorial, here’s some things you can do to make it better for people to learn from.
Introduce them to common errors
While you are doing the "Write this code here…" lessons, have your readers run the code so that they can see common issues. If the Official Django Tutorial told you to try to run the code after you created your first view, you would learn that in order for your app to work, you need a URL pattern to access it.
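(For reference, the URL pattern in question would look roughly like the following; this is a minimal sketch assuming a recent Django version, not code taken from the article.)
# polls/urls.py
from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name='index'),
]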
Imagine if the tutorial forgot to tell you to import a builtin Django function. Then, you ran the app. You’d learn exactly what error you get and you’ll be able to learn how to fix it.
Learning what common errors look like, you’d learn exactly what needs to be fixed in your app to fix the error.
FIX: Force your readers to see common errors. Over the course of the material, your readers will understand exactly how to fix them.
Let them figure some things out on their own
Don't tell your readers everything. Let them do some work to figure things out on their own. That way, they will learn how to search for information and how to use it to modify an existing application. That skill is indispensable.
One great way to do this is by assigning homework. Homework is a great way to have your readers learn the material better by getting them out of their comfort zone and having them learn new material on their own.
This brings me to the third lesson.
FIX: Give your readers some homework to do. Being able to find information, and to use that information to modify an existing app, does wonders for learning the technology you are teaching.
Be available for feedback
If you give your reader homework, you HAVE to be there for questions and answers. Email works really well.
If you’re not there for feedback, you are leaving your readers behind. I tell everybody who takes my email course that they can email me their questions and I read and answer every email I receive.
FIX: Be available for feedback. If your readers have questions, they deserve to be able to ask you to clear up anything you are teaching them.
Give the application away so they can compare their code
It's very possible that your readers are going to make some errors in their code, and it's really annoying not knowing what caused those errors.
Most of the time it's a simple typo that can break the entire application. Try finding that typo in thousands of lines of code! In a simple tutorial of fewer than a few hundred lines, leaving the reader stuck on a typo is unacceptable.
FIX: Send your readers the exact code that you used for that lesson. This will allow them to check their code against yours. A simple `diff my_app your_app` will show where the typos are.
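If a shell isn't handy, the same comparison can be done with Python's standard difflib module; the file names below are placeholders for the reader's copy and the reference copy:

```python
# compare_versions.py -- show where a reader's file differs from the reference copy
import difflib

with open("my_app/views.py") as mine, open("your_app/views.py") as reference:
    diff = difflib.unified_diff(
        mine.readlines(), reference.readlines(),
        fromfile="my_app/views.py", tofile="your_app/views.py")

print("".join(diff))
```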
Make bad tutorials better
So, if you're going through a tutorial, how can you use the fixes above to get more out of it?
- Find groups of readers who took the tutorial so you can ask them for help and, if they are willing, for their working examples of the project.
- Come up with some cool ideas that you want to implement and search the Googles for information about how to implement these ideas. Don’t wait for someone else to tell you what you should be implementing.
- Don't be afraid to run the application midway through the tutorial. You can learn a lot just by reading error messages and attempting to fix them before you move on with the rest of the tutorial. Doing things by yourself is more important for your learning than simply copying code.
Excited to take your next tutorial? Sign up below for a free Django REST Framework email course.
https://chrisbartos.com/articles/why-tutorials-suck/
talons 0.2
Hooks for Falcon
Talons is a library of WSGI middleware that is designed to work with
the Falcon lightweight Python framework
for building RESTful APIs. Like Falcon, Talons aims to be fast, light, and
flexible.
The first middleware in Talons is authentication middleware, enabling one
or more backend identity plugins to handle authentication.
# What is `talons.auth`?
`talons.auth` is a namespace package that contains utilities for
constructing identifying and authenticating middleware and plugins
designed for applications running the Falcon WSGI micro-framework
for building REST APIs.
## A simple usage example
A simple Falcon API application is constructed like so:
```python
import falcon
# falcon.API instances are callable WSGI apps
app = falcon.API()
```
To add middleware to a Falcon API application, we simply instantiate the
desired `talons.auth` middleware and supply it to the `falcon.API()` call:
```python
import falcon

from talons.auth import middleware
from talons.auth import basicauth, httpheader, htpasswd

# Assume getappconfig() returns a dictionary of application configuration
# options that may have been read from some INI file...
config = getappconfig()

auth_middleware = middleware.create_middleware(
    identify_with=[basicauth.Identifier, httpheader.Identifier],
    authenticate_with=htpasswd.Authenticator,
    **config)

app = falcon.API(before=[auth_middleware])
```
# Details
There are a variety of basic plugins that handle identification of the user making
an API request and authenticating credentials with a number of common backends,
including LDAP and SQL data stores.
Authentication involves two main tasks:
* Identifying the user who wishes to be authenticated
* Validating credentials for the identified user
Classes that derive from `talons.auth.interfaces.Identifies` implement an `identify`
method that takes the `falcon.request.Request` object from the WSGI pipeline and
looks at elements of the request to determine who the requesting user is.
The class that stores credential information -- including a login, password/key,
a set of roles or groups, as well as other metadata about the requesting user --
is the `talons.auth.interfaces.Identity` class. `talons.auth.interfaces.Identifies`
subclasses store this `Identity` object in the WSGI environs' "wsgi.identity" bucket.
Classes that derive from `talons.auth.interfaces.Authenticates` implement an
`authenticate` method that takes a single argument -- a `talons.auth.interfaces.Identity`
object -- and attempts to validate that the identity is authentic.
To give your Falcon-based WSGI application authentication capabilities, you
simply create middleware that has one or more `talons.auth.identify` modules
and one or more `talons.auth.authenticate` modules. We even give you a helper
method -- `talons.auth.middleware.create_middleware` -- to create such middleware
in a single call.
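As a rough sketch of how these two interfaces fit together (this is not taken from the Talons source; the class names, the header names, and the `Identity` constructor arguments are assumptions made purely for illustration):

```python
from talons.auth import interfaces


class TokenIdentifier(interfaces.Identifies):
    """Pulls a login and key out of a pair of made-up HTTP headers."""

    def identify(self, request):
        # Another Identifier may already have done the work.
        if request.env.get('wsgi.identity') is not None:
            return True
        user = request.get_header('X-Example-User')
        key = request.get_header('X-Example-Key')
        if not user or not key:
            return False
        # Assumed constructor signature: a login plus a password/key.
        request.env['wsgi.identity'] = interfaces.Identity(user, key=key)
        return True


class StaticAuthenticator(interfaces.Authenticates):
    """Accepts a single hard-coded credential pair -- illustration only."""

    def authenticate(self, identity):
        return identity.login == 'admin' and identity.key == 's3cr3t'
```

Wiring details (constructor keyword arguments and registration with `create_middleware`) are omitted here and follow the patterns shown elsewhere in this document.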
## Identifiers
Each class that derives from `talons.auth.interfaces.Identifies` is called an "Identifier". Each
class implements a single method, `identify()`, that takes the incoming `falcon.request.Request`
object as its sole parameter. If the identity of the authenticating user can be determined,
then the Identifier object stores a `talons.auth.interfaces.Identity` object in the WSGI environ's
`wsgi.identity` key and returns True.
Multiple Identifier classes can be supplied to the
`talons.auth.middleware.create_middleware` method to support a variety of ways of
gleaning identity information from the WSGI request. Each Identifier's
`identify()` method checks to see if the `wsgi.identity` key is already
set in the WSGI environs. If it is, the method simply returns True and does
not attempt to process anything further.
### `talons.auth.basicauth.Identifier`
The most basic identifier, `talons.auth.basicauth.Identifier` has no
configuration options and simply looks in the
`Authenticate` HTTP
header for credential information. If the `Authenticate` HTTP header is found
and contains valid credential information, then that identity information is
stored in the `wsgi.identity` WSGI environs key.
### `talons.auth.httpheader.Identifier`
Another simple identifier, `talons.auth.httpheader.Identifier` looks
for configurable HTTP headers in the incoming WSGI request, and uses the values
of the HTTP headers to construct a `talons.auth.Identity` object.
A set of configuration options control how this Identifier class behaves:
* `httpheader_user`: HTTP header to look for user/login
name (required)
* `httpheader_key`: HTTP header to look for password/key
(required)
* `httpheader_$ATTRIBUTE`: HTTP header that, if found, will
be used to add $ATTRIBUTE to the Identity object stored in the WSGI
pipeline. (optional)
The above configuration options are supplied to the constructor as keyword
arguments.
#### Example
Suppose we wanted to extract identity information from the following HTTP
Headers:
* `X-Auth-User` -- The value of this header will be the authenticating user's
user name
* `X-Auth-Password` -- The value of this header will be the authenticating
user's password
* `X-Auth-Domain` -- The value of this header should be considered the
authentication domain that will be considered when authenticating the
identity. We want to store this value on the `talons.auth.Identity` object's
`domain` attribute.
Our configuration options would look like this:
```
httpheader_user=x-auth-user
httpheader_key=x-auth-password
httpheader_domain=x-auth-domain
```
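For instance, here is a rough sketch of wiring those options into the middleware from the earlier example. The dictionary below is a hypothetical stand-in for whatever configuration source the application really uses, and `htpasswd_path` is the htpasswd option described further below:

```python
from talons.auth import middleware
from talons.auth import httpheader, htpasswd

# Hypothetical configuration values -- normally read from an INI file or similar.
config = {
    'httpheader_user': 'x-auth-user',
    'httpheader_key': 'x-auth-password',
    'httpheader_domain': 'x-auth-domain',
    'htpasswd_path': '/etc/myapp/htpasswd',
}

auth_middleware = middleware.create_middleware(
    identify_with=[httpheader.Identifier],
    authenticate_with=htpasswd.Authenticator,
    **config)
```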
## Authenticators
Each class that derives from `talons.auth.interfaces.Authenticates` is
called an "Authenticator". Each Authenticator implements a single method,
`authenticate()`, that takes a `talons.auth.interfaces.Identity` object
as its sole parameter.
The `authenticate` method checks whether the supplied identity can be
verified (authenticated). Different implementations will rely on various
backend storage systems to validate the incoming identity/credentials.
If authentication was successful, the method returns True, False otherwise.
Talons comes with a few simple examples of Authenticator plugins.
### `talons.auth.external.Authenticator`
A generic Authenticator plugin that has one main configuration option,
`external_authn_callable` which should be the "module.function" or
"module.class.method" dotted-import notation for a function or class
method that accepts a single parameter. This function will be called by
the instance of `talons.auth.authenticate.external.Authenticator` to
validate the credentials of a request.
In addition, there are two other configuration options that indicate
whether the `external_authn_callable` function may set the roles or groups
attributes on the supplied identity:
* `external_sets_roles`: Boolean (defaults to False). A True value
indicates the plugin may set the roles attribute on the identity
object.
* `external_sets_groups`: Boolean (defaults to False). A True value
indicates the plugin may set the groups attribute on the identity
object.
#### Example
Suppose we have some application code that looks up a stored password
for a user in a Redis key-value store. Salted, encrypted
passwords for each user are stored in the Redis KVS, along with a
comma-separated list of roles the user belongs to.
Our application has a Python file called `/application/auth.py` that looks
like this:
```python
import hashlib

import redis

# decode_responses=True so that values come back as str rather than bytes
_AUTH_DB = redis.StrictRedis(host='localhost', port=6379, db=0,
                             decode_responses=True)


def _pass_matches_stored_pass(password, stored_pass):
    # Assume that passwords are stored in Redis in the following format:
    #   salt:hashedpass
    # and that the passwords have been hashed with SHA-256
    salt, stored_hashed_pass = stored_pass.split(':')
    hashed_pass = hashlib.sha256(salt.encode() + password.encode()).hexdigest()
    return hashed_pass == stored_hashed_pass


def authenticate(identity):
    user = identity.login
    password = identity.key
    # Assume that user "records" are stored in Redis in the following format:
    #   salt:hashedpass#roles
    # where roles is a comma-separated list of roles
    user_record = _AUTH_DB.get(user)
    if not user_record:
        return False
    stored_pass, role_list = user_record.split('#')
    auth_success = _pass_matches_stored_pass(password, stored_pass)
    if auth_success:
        identity.roles = role_list.split(',')
    return auth_success
```
To use the above `application.auth.authenticate` method for authenticating
identities, we'd supply the following configuration options to the
`talons.auth.external.Authenticator` constructor:
* `external_authn_callable=application.auth.authenticate`
* `external_sets_roles=True`
### `talons.auth.htpasswd.Authenticator`
An Authenticator plugin that queries an Apache htpasswd file to check
the credentials of a request. The plugin has a single configuration option:
* `htpasswd_path`: The filepath to the Apache htpasswd file to
use for authentication checks.
## Authorizers
Each class that derives from `talons.auth.interfaces.Authorizes` is
called an "Authorizer". Each Authorizer implements a single method,
`authorize()`, that takes a `talons.auth.interfaces.Identity` object,
and a `talons.auth.interfaces.ResourceAction` object.
The `ResourceAction` object currently has a single method, `to_string`,
that returns a "dotted-notation" string describing the requested
HTTP resource.
For instance, let's say the identity made an HTTP request to:
POST /users/12345/groups
The `ResourceAction.to_string` method that is supplied to the `authorize`
function would yield the string "users.12345.groups.post". This string is
useful to plugins that compare the string with the supplied identity object.
See below for an example that makes this more clear.
At present, there is only a single Authorizer built in to Talons: the
`talons.auth.external.Authorizer` class. Like its sister, the
`talons.auth.external.Authenticator`, it accepts an external callable that
accepts the identity and resource action parameters and returns whether
the identity is allowed to perform the action on the resource. The single
configuration parameter is called `external_authz_callable`.
Let's continue the example from above and add an external callable that
will be used as an authorizer. This callable will compare the result of
the `ResourceAction`'s `to_string` method against the supplied identity
object and a hashmap of regular expressions in order to determine if the
user is permitted to perform an action.
Assuming our application has a Python file called `/application/auth.py` that
contains the above authenticate code, as well as this:
```python
import re


def self_or_admin(match, identity):
    """
    Returns True if the identity has an admin role or the identity
    matches the requesting user.
    """
    if "admin" in identity.roles:
        return True
    return match.group(1) == identity.login


def anyone(*args):
    return True


_POLICY_RULES = [
    (r'^users\.([^.]+)\.get$', self_or_admin),
    (r'^users\.post$', anyone),
]

POLICIES = []
for regex, fn in _POLICY_RULES:
    POLICIES.append((re.compile(regex), fn))


def authorize(identity, resource_action):
    res_string = resource_action.to_string()
    for p, fn in POLICIES:
        m = p.match(res_string)
        if m:
            return fn(m, identity)
    return False  # no policy matched: deny by default
```
To use the above `application.auth.authorize` method for authorizing the
identity that was authenticated, we'd supply the following configuration
options to the `talons.auth.external.Authorizer` constructor:
* `external_authz_callable=application.auth.authorize`
# Why `talons.auth`?
Why not just use middleware like repoze.who for
authentication plugins? Why re-invent the wheel here?
A few reasons, in no particular order:
* Use of the Webob library. I'm not a fan of it, as I've run into numerous issues with
this library over the years.
* Use of zope.interfaces. Also not a fan of it. It's a library that seems to be designed
for traditional C++ programmers instead of feeling like it's designed for Python developers.
Just use the `abc` module if you absolutely must
have strict interface enforcement.
* Trying to override things like logging setup in constructors of middleware.
* No Paste.
* Wanted something that fit Falcon's app construction paradigm.
But hey, there's nothing inherently wrong with repoze.who. If you like it, and it works
for you, use it.
## Contributing
Jay Pipes maintains the Talons library. You can usually find him on the Freenode IRC #openstack-dev
channel. Interested in improving and enhancing Talons? Pull requests are always welcome.
- Author: Jay Pipes
- Keywords: falcon middleware
- Categories
- Intended Audience :: Developers
- Intended Audience :: Information Technology
- License :: OSI Approved :: Apache Software License
- Operating System :: POSIX :: Linux
- Programming Language :: Python
- Programming Language :: Python :: 2
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3.3
- Package Index Owner: Jay.Pipes
- DOAP record: talons-0.2.xml
https://pypi.python.org/pypi/talons/0.2
Red Hat Bugzilla – Bug 121956
rpm-python version incompatibility
Last modified: 2007-11-30 17:10:41 EST
Description of problem:
Trying to "import rpm" from python gives error:
ImportError: librpm-4.2.so: cannot open shared object file: No such
file or directory
Version-Release number of selected component (if applicable):
rpm-4.3.1-0.3
python-2.3.3-3
rpm-python-4.3.1-0.3
How reproducible:
Consistently
Steps to Reproduce:
1. run python at command prompt
2. type "import rpm"
3.
Actual results:
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ImportError: librpm-4.2.so: cannot open shared object file: No such
file or directory
Expected results:
import without error
Additional info:
librpm-4.3.so exists and belongs to the rpm package. librpm-4.2.so
does not exist
this also prevents anaconda from running
$ python
Python 2.3.3 (#1, Mar 16 2004, 16:37:35)
[GCC 3.3.3 20040311 (Red Hat Linux 3.3.3-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import rpm
>>>
Look at your search path, verify that you don't have
another rpmmodule on that path that is linked against librpm-4.2.
Reinstalling rpm-python probably fixes it.
Alas! You were right, sorry for the spam...
https://bugzilla.redhat.com/show_bug.cgi?id=121956
MPI_Info_set - Set a (key, value) pair in an MPI_Info object
#include <mpi.h>
int MPI_Info_set(MPI_Info info, char *key, char *value)
key - null-terminated character string of the index key
value - null-terminated character string of the value
info - info object
MPI_ERR_INFO_KEY - This error class is associated with an error code that indicates that a null key, empty key, or key that was longer than MPI_MAX_INFO_KEY characters was passed as an argument to an MPI function where it was not allowed.
MPI_ERR_INFO_NOKEY - This error class is associated with an error code that indicates that a key was looked up on an MPI_Info object and was not found.
MPI_ERR_INTERN - An internal error has been detected. This is fatal. Please send a bug report to the LAM mailing list (see http://www.lam-mpi.org/contact.php).
For more information, please see the official MPI Forum web site, which contains the text of both the MPI-1 and MPI-2 standards. These documents contain detailed information about each MPI function (most of which is not duplicated in these man pages).
infoset.c
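The synopsis above documents the C binding. For a quick illustration in Python (the language used for the other code examples in this document), the same call is available through the mpi4py package, assuming it is installed; the key and value strings below are arbitrary:

```python
from mpi4py import MPI

# Create an empty info object and set a (key, value) pair on it --
# the Python-level equivalent of MPI_Info_create() followed by MPI_Info_set().
info = MPI.Info.Create()
info.Set("wdir", "/tmp")

info.Free()
```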
http://huge-man-linux.net/man3/MPI_Info_set.html
1. I like the mixin style idea, especially if it lets me drop down to IL, and especially if you could nest it. IOW, something like this…
void SomeFuncInCSharp()
{
_IL
{
ldarg.1 // etc. e.g setup try-catch
_C#
{ // back to C#
}
}
}
This sort of construct would make it possible to provide access to some features supported by the runtime but not exposed by C#, such as fault blocks or user-filtered exceptions.
2. Provide a type that provides auto-destruct semantics when it goes out of lexical scope. I know the IDisposable and using syntax provides similar functionality, but it is really ugly and forces me to write code like this…
using ( new MyAutoDestructWrapper(o1,o2,o3,o4) )
{
}
where MyAutoDestructWrapper is a class that takes a variable arg list and just calls Dispose on each object when the implicit finally is invoked. It requires extra memory allocations to occur as it must allocate one object plus an array list to track the items. It also forces all objects to go out of scope simultaneously which is not always what is desired. Why make each of us invent this ourselves? It ought to be part of the plumbing.
I would like to see some sort of keyword to handle default interface implementations.
ie:
interface IHandler
{
void HandleRequest();
}
class DefaultHandler : IHandler
{
public void HandleRequest()
{
// do something really really interesting.
}
}
class SomeHandler : IHandler
{
IHandler providedBy new DefaultHandler();
}
granted implementedBy is a shit key word. But you get the idea right?
I guess its a sort of tack mixin.
oops s/implementedBy/providedBy
oh and tack = tacky.
I agree completely! It would also be useful if you could extend the classes provided by .NET this way (i.e. without the source). I would have liked to add a method or two to some framework classes recently, instead of completely refactoring everything.
Seems like this would be another useful way to retrofit functionality from the next release of a product backwards without providing the original dll as well.
Perhaps something like
extend ns.ns.class with public void method()
{
}
Obviously, the original class author would have to be able to stop it, and could do so using the sealed keyword.
I would like to see local methods or nested methods, like it's possible in Delphi
public int DoSomething() {
local int DoSomething1 {
return 1;
}
local int DoSomething2 {
return 2;
}
local int DoSomething3 {
return 3;
}
return DoSomething1() + DoSomething2() + DoSomething3();
}
David!! Just because you CAN do it in Delphi, doesn’t mean you SHOULD! I think that’s not a good one.. very confusing, and for very little benefit.. isn’t this clearer??
public int DoSomething()
{
return DoSomething1() + DoSomething2() + DoSomething3();
}
private int DoSomething1()
{
return 1;
}
private int DoSomething2()
{
return 2;
}
private int DoSomething3()
{
return 3;
}
It’d be nice if you could just go all the way and make functions first-class values. The method-group conversion stuff in Whidbey goes most of the way, but it just seems strange that you’ve got so close, but haven’t quite gone all the way.
On a similarly functional note, i’d also like to see:
Type inference.
AReallyLongClassName foo = new AReallyLongClassName(); //argh.
def foo = new AReallyLongClassName(); //much nicer.
(although you’d probably want public members to be manifestly typed).
Algebraic datatypes and pattern matching would be /very/ useful, for certain things.
i’ll second the request for nested functions, too, especially if they could be returned as closures (i’m guessing this wouldn’t be too hard, and could largely just piggyback on the work done for anonymous methods).
Oh, and one last, relatively trivial thing: it’d be nice to have a nice syntactic shortcut for defining a List<T>, similar to {1,2,3,4} for arrays.
Just something like:
def foo = [1, 2, 3, 4];
Well, to throw in my opinion, the type inference and nested methods just aren't a good idea in C#, sorry to say. Use ironpython or something.
However, I like the list idea as well. On the public newsgroups we’ve been discussing lists and list comprehensions. Not sure if they are worth the time but I am certainly exploring them.
There is a substantial discussion of lists(and comprehensions and ranges) at
I’d be interested in any comments you have on the issue as I’ll be implementing an experimental compiler supporting it if I can.
I also like the auto disposal ideas I’ve seen floating around, still not sure if they’ll fit in the language, but they do work(I have implemented support in mono for some very basic auto destruct using an auto local modifier).
ugh, i can’t stand python 😛 (limp lambda, implicit variable declaration, the brokenness of lexical scoping [partly related to the implicit declarations], etc, and i happen to prefer strict typing where possible) [no offence intended to those who do like python, of course, it’s just not my cup ‘o tea].
I won't exactly be disappointed if some of the stuff i'd like never makes it into C# (i can always keep using Nemerle, which has just about everything i want, but i'd prefer it if some of those features could make it into a language i have a slightly less remote chance of being paid to work with…), but i figure if there's a chance to get the guys in charge to think about it, it's worth a try ;).
C#’s already (v2, anyway) a pretty passable functional language, i’d just like to see it go a bit further in that direction…
(speaking of which, local methods in particular aren’t strictly necessary — anonymous methods are, afaict, fully-fledged lambdas, and a local method definition is just syntactic sugar for a lambda anyway…)
I write a lot of 'business application' type code (as opposed to dev tools etc etc) and all of these applications have one thing in common…lots of database access.
75% of all the code in these applications is written to get around the impedance mismatch between objects and databases. I would love to see C# deal with this problem natively (ObjectSpaces may help but I think you can take it even farther as Xen or X# have shown).
Generics has helped a lot in terms of dealing with collections of Entity objects etc but there is still way too much plumbing to write manually or generate with your code generator of choice.
I have only recently started to look at C-Omega but if you could provide a way, in a production quality compiler, to solve this problem in an elegant and efficient manner people would flock to C# in droves.
Ya, I don’t like python much either. For the reasons you named plus a general dislike of the syntax and philosophy, however there are a few things I like(lists and list comprehensions specifically).
Nemerle has some interesting features and a few I don’t like. I don’t like the whole-hog functional approach, especially dynamic typing. I’ve also seen a thing or two I liked in Boo(don’t have a link on hand, sadly), but it is much too pythony for my day to day usage.
Oh, one other thing. I've commented on it a few times and I've seen at least one suggestion on the feedback center about it. I abhor the existing casting operator. It seems to me that it was probably a bad choice. I know it looks like Java and C casts, but I don't think it's a terribly good design. A keyword like
cast<string>(<expression>) is more in line with the rest of the language (especially now that generics exist) and considerably clearer, IMHO.
It might also be of value to consider a "safe" cast (maybe safecast<string>(<expression>)), something that explicitly shows that a cast is occurring but is only valid if the compiler can verify the cast is legal. Its use would be minimal, basically only allowing explicit conversions and base class/interface casts to occur, but I think it could help self-documentation and casting enforcement considerably.
I would dearly love to see C++ stream operators implemented!
Something like:
Console << "hello world" ;
or
myObject << value ;
EnC
Enough said.
The delegation of implementation is a very cool idea. It will make writing wrappers (to dynamically add behaviour to an existing objects) a piece of cake too.
public class MyDecorator : IInterface
{
public IInterface m_Wrappee : IInterface;
public MyDecorator(IInterface wrappee)
{
m_Wrappee = wrappee;
}
}
Inline creation of value types:
struct Money {
decimal amount;
char symbol;
}
static void Main() {
Money wallet={amount:=1.5M, symbol:=’£’};
}
The *, ?, + syntax for lists (streams) in COmega. Especially preventing a reference type from ever being null and having that statically analysed.
In fact, more static analysis would be great – especially static analysis for exceptions in a better way than FxCop.
Some static analysis to tell me when I’ve ignored the return value to a non-void method. I forget this regularly in the following scenario:
DateTime date=DateTime.Now;
date.AddDays(1);
(By posting this example here, I’m trying to drill it into my head that DateTime is a valuetype and therefore is immutable – so AddDays() obviously won’t modify the data in the struct…)
arha, Anonymous methods are not quite closures, although they appear similar.
For example, try running this piece of code, and tell me if gives you the results you would expect.
class Program
{
static void Main(string[] args)
{
Test t = new Test();
t.Do();
}
}
delegate void Thing();
class Test
{
List<Thing> things = new List<Thing>();
int _index = 0;
public void Do()
{
for (_index = 0; _index < 10; ++_index)
{
things.Add(delegate { Console.WriteLine(this._index); });
}
things.ForEach(delegate(Thing t) { t(); });
}
}
Make operators available to generics – something like that:
T Add<T>( T a, T b) where T: operator+(T, T) {
return a + b;
}
Kevin: i don’t actually have access to a compiler that can compile that, currently (haven’t got round to rebooting into windows since the whidbey betas were released, and mono doesn’t do anonymous methods yet), but assuming it does what i think you’re referring to (print ’10’ ten times), then that’s exactly what i’d expect — the anonymous method is capturing a reference to the object it’s defined in (explicitly, in your code, since you used this._index, rather than relying on the implicit this for member access (or is that an anonymous method limitation? i assume not, but it’s worth asking…)), and then when it’s called later, it uses the value of the member it gets through that reference, which is obviously the last value it was set to in the for loop.
I assume that if you changed _index to be a local variable, then it would properly close over the value it held at the time the anonymous method was created (the interim C# v2 standard that was recently released certainly gives that impression, since it uses the standard ‘counter’ example, that gets used in just about every explanation of closures ever written).
In fact, i just tried it out in Nemerle, and got exactly that result (ten ’10’s). So, as long as C#’s anonymous methods are as good as Nemerle’s anonymous functions, i’ll be happy 🙂
The equivalent Nemerle, btw, is:
using Nemerle.Collections;
module M {
Main(): void {
def t = Test();
t.Do();
}
}
class Test {
public this(){things = []}
mutable things : list<(void -> void)>;
mutable i : int;
public Do(): void {
for (i = 0; i < 10; ++i) {
things = things + [fun() { Console.WriteLine(this.i)}];
}
List.Iter(things, fun(t: void -> void) { t() })
}
}
Every now and then I come across cases where I would like covariant return types to be possible.
Ooh yeah. Ability to add methods to the class library.
Make it so partial classes can be added after a class has been compiled (Unlike now, where all partial parts must be compiled together)
ala:
partial class System.Object
{
public virtual XmlDocument asXml()
{
return new XmlDocument(ToString());
}
}
and then, wala,
int i = 0;
XmlDocument doc = i.asXml();
Totally useless example, but that would be kind of nice. Sure, people could blow shit up, but isn’t that what programming is all about?
oh and virtual by default. Just because its a slight performance boost (to make C# look like its faster than java…..) VIRTUAL BY DEFAULT.
Type safe databinding:
I would definitely like having language support for databinding. Now, we have to do things like:
statusBar1.DataBindings.Add("Text", game1, "Score");
Quite ugly, I would prefer
having to write something like:
statusBar1.DataBindings.Add(new XXX(statusBar1.Text), new YYY(game1.Score));
It would be a kind of delegate for properties, less complex as the signature is simpler.
This would allow compile time checking and use of Intellisense. When databinding with expressions the expression would need to be in a real delegate.
Two related ideas:
Partial generics. One thing I’d like to be able to do is to compose code of lumps of generics. I’ve wanted to do this recently.
In particular I wanted it for use in a database browsing type scenario so I could have an interface called IBrowsable which I could implement just by sticking in a generic (passing in an instance of my OR mapping object).
There still isn’t a replacement for C style macros either which would be nice, sometimes I’d like to be able to write code like this:
DatabaseStringProperty(DataSet,FieldName1);
DatabaseDecimalProperty(DataSet,FieldName2);
DatabaseStringProperty(DataSet,FieldName3);
DatabaseDecimalProperty(DataSet,FieldName4);
And have the compiler generate a pile of properties with get/set and names that match the fieldname. A partial generic could allow that kind of macro type stuff (You’d probably need the ability to turn a parameter into a string ala the C macro engine # construct).
Perhaps this is just my 1984 C hacker heritage showing through.
Great work so far Eric, I really enjoy using C# and I can’t wait to use the new features in 2.0 in a living environment.
1. Independent Property Accessors (i.e. public getter, private setter.)
2. Deep language support for something like ObjectSpaces (seems v popular…)
3. Attributes that call functions, which opens up Aspect oriented programming (like AspectJ)
4. Attributes to hold UML diagram information so that the entire diagram is in the source. (not a language feature, i know)
5. Comments as meta-data, comments for Namespaces
6. Standard "test" keyword so any testing framework can be used to run tests. Test code can be easily stripped from the finished assembly. Tests can be run as a compiler step; since the compiler knows which methods yielded different IL from last time, it can trim the number of tests run on a typical pass.
7. Similarly a "tests" keyword so…
test class TestClass{
…
public void TestMethod tests ns.ns.class.method(param one,param two)
{
setup{…}
test
{
if(condition)
pass;
else
fail;
}
teardown{…}
}
}
since the method signature is in the test method declaration it's easier to detect code coverage. Also allows intellisense support for classes which implement (bad word, maybe fulfill is better) a given test; giving support for test-first design (if not test-driven design)
Well that’s my 2 cents.
Jan
hmmm. I meant "live" environment. Not some sort of C# Cybernetics….
Jan
Is there something like a ‘typedef’ keyword as in C++?
– Covariant return types
– Simplified property syntax with automatic backing store (public property int Foo;)
– Operator constraints for generics
– More flexible interface implementation support. I find the current choice of
either public implicit or private explicit implementation much too limiting. I want to
be able to use any accessibility level I want, and handle multiple interface
methods with the same implementing method.
– Built-in IntPtr arithmetic support. When I brought this up over in Cyrus blog he said it
was more of a BCL feature, but I don’t agree. The IL arithmetic opcodes support native int
so I see no reason not to emit them directly, which would require compiler support. To
increment an IntPtr I currently have to write something like
IntPtr p = something;
p = (IntPtr)((long)p + 1);
and it’s really ugly ;-), plus it produces unnecessarily bloated IL. It would be much nicer with just
p++;
resulting in a simple add instruction.
– A generic constraint restricting me to unmanaged-types (as defined in A.2 in the spec), so I could use T* in unsafe code.
Syntactic support for composition and delegation so that it's as "easy" to use as inheritance (see)
As Mattias mentioned, Cyrus took a survey for the Whidbey timeframe. He posted the results. Since I'm certain most (if not all) of this wouldn't be implemented for Whidbey, perhaps the results would serve as a good base for further discussion (though some of it *isn't* related to the compiler, much of it is).
By making the format of a CS file be some XML dialect, you could open up all kinds of (pre-)processing opportunities which could free you from having to make so many enhancements to the base compiler itself.
Guys, please give us at least COmega's Stream, anonymous structs and its cool member access.
If possible give us Spec#’s requires, modifies and ensure (and Boogie)
I would love to see typedefs.
COmega is a really good start for 3.0. I am going to assume you will get a lot of feedback from the preview – it is very cool stuff, and take what you learn, but better Xml serialization, better DB stuff (bake in everything that you know about how to properly do DB work), and concurrency would be huge.
tools support for this would be crucial, XSDs are still too difficult to create and there should be some easy way to find out the best way to encapsulate concepts in XSD (is your goal object performance?, is it document-centric?, etc.) – whatever we learn from the models that come out of the preview. Concurrency needs some patterns and some wizard support, but perhaps I am moving too much into BCL, Tools. +1 on the concepts in COmega.
Unless I’m missing something, static variables can currently only be declared at the class level.
Although I rarely use static variables, when i do use one I often wish I could declare it within the one (and only) method that actually uses it (since no other methods use it, care about it, or should even know it exists).
I’m talking about visibility scoping, not lifetime scoping (a static is a static).
1)
how about replacing
public class MyReallLongAndProperClassName{
public MyReallLongAndProperClassName(){
}
}
with
public class MyReallLongAndProperClassName{
public ctor(){
}
}
or something similar.. this is mainly prompted by classes with overloaded constructors… we have a lot.
2)
this is something you will probably never do but i’d love to see overloaded return types…
Static analysis of a "this parameter can’t be null" rule. Ideally it would be the compiler doing the analysis, rather than FxCop. Make it a compiler error, just like using an uninitialized value type.
Delegates on properties.
Type aliases that aren’t local to one file, and that ideally make it into the assembly (like Delphi’s "type X = Y;" syntax).
Ability to ask the compiler for a parse tree, manipulate that parse tree, and emit it again as a .cs file. Maybe through a superset of CodeDOM.
If I understand mixins correctly, they would be awesome. In particular, for unit testing: I want a test fixture that imports this group of methods for testing a database, and this group of methods for testing a data stream, and this group of methods for testing a socket — without requiring me to keep those all together through inheritance. Another good use would be for putting helper methods on an interface, and providing a default implementation that anyone can mix in without needing to use my base class; for example, it would be easy to provide a default implementation of IList.Contains that simply calls IList.IndexOf (but that you don’t have to mix in if you don’t want it). Right now, if I want more complex helper methods, I have to put them on a static class.
Value types with auto-destruct semantics. Such a type would probably be non-boxable, and non-copiable for that matter (no way to pass it to another method, no way to assign it to another variable), since that way the destruction semantics would be unambiguous. Example usages:
private void Foo_Click(object sender, EventArgs e)
{
Hourglass hourglass = new Hourglass(); // or maybe just Hourglass hourglass;
// … do lengthy processing here …
// cursor automatically reverts when the procedure exits
}
private void Bar_Click(object sender, EventArgs e)
{
SafeFile file = new SafeFile(@"c:\temp\xyzzy.txt");
// SafeFile can’t be passed around, but has a Stream property that can
StreamReader reader = new StreamReader(file.Stream);
// …
// file automatically closed when the procedure exits
}
I already suggested this in the ‘MSDN Feedback Center’, (search for the suggestion ID: FDBK11721 ), but I’ll repeat it here:
Please allow the ‘new()’ constraint of generics to take arguments. For instance,
public class MyThing<T> where T: new(string) {…}
would restrict T to types having a constructor that takes a single string as argument. The current solution only allows using a public no-argument constructor with the ‘new()’ constraint.
The ‘new()’ constraint seems definitely useful, and people will be lured into adding a ‘fake’ no-argument constructor to their classes, just to be able to use this C#2.0 feature, adding an ‘Initialize(…)’ method that passes the arguments that should have been passed to the constructor…
In my experience, such classes tend to cause heavy headaches, because that no-argument constructor creates a partially constructed object, which is an attractive breeding ground for bugs.
C-Omega definitely has several nice features. It is funny to see how different people in their response to it pick out completely different aspects of it as ‘the’ feature of C-Omega.
For me, the main feature of C-Omega is not so much in its Xen/X# lineage, but in its Polyphonic C# ancestry:
A feature of C-Omega that hasn’t been listed above yet, but which I would like to see in C# 3.0 are chords and asynchronous methods. I have encountered many situations in the past where these would have come in handy.
I second Mattias on this one:
– Simplified property syntax with automatic backing store (public property int Foo;)
C++ 8 gets this and I think C# should get it too. Especially since the C# compiler does exactly this same thing for "event" fields.
Also, I want more compile-time type safety WRT method attributes. That is, I want to be able to specify an additional property on most method attributes called TargetSignature that takes typeof(SomeDelegate). For more info on this see:
Of course, the C# compiler would need to enforce the signature check.
Stuff that we talked about recently:
1) Support for multiple iterators (or indexers) in a foreach loop. Today I need to write something like:
int i = 0;
foreach(e in employees)
{
e.address = addresses[i];
i++;
}
2) A language construct for automatic delegation of interface implementation to another object. I.e.
class A : ISomeInterface
{
SomeClass objectThatImplementsISomeInterface = new SomeClass();
InterfaceImplementation<ISomeInterface, objectThatImplementsISomeInterface>
}
Arha,
You’re right, I would expect my example above to print 10 10 times. However, if I replace index with a local, it still prints 10 10 times. That’s what I meant to do last night, but I was too tired to think straight.
actually, i think that /is/ the correct behaviour, in the presence of mutable variables… (nemerle prints 10 ten times if you use a mutable local instead of a class member, too…)
i tried firing up DrScheme and seeing what it would do in a similar situation (using (set!) to mutate _index), but i’ll be damned if i can remember enough scheme to be able to do anything useful with it… Overall, i think the lesson is that mixing functional programming and mutable state can be confusing.
(and going back on topic, that reminds me – a ‘const’ keyword would be quite nice to have… oh, and tuples. tuples are good, too. especially when you can do pattern-matching assignment).
(btw, there are two Rs in my name 🙂 [or you could just call me mike])
(argh (too many (parentheses) (damn (scheme)))).
Traits would be nice.
Sorry Mike (arhra). I somehow didn’t notice the second r…
There are a few things I would like to see, though they’re not all specific c# language features.
1) Deep marshalling for p/invoke. If you attempt to marshal a structure or class that contains a member that is an array of another structure or class, the marshaller fails. For example, assume you have some structure like Polygon that has a member of type Point[]. This structure will not be marshalled because the member is an array of a non-primitive type.
2) Allow default constructors on structs.
3) Allow pinning of types that contain arrays as members. If I have a type that contains an array of anything as a member, then that type cannot be pinned ( fixed ). Very troublesome in some interop scenarios.
4) Allow multiple, disparate type declarations within a fixed clause. If I have to pin 3 or 4 different types, then all of the nested code is indented *way* too far.
5) Allow for custom aggregate functions or delegates within the System.Data namespace. Today, DataTable supports aggregate functions through Select and Compute methods; however these aggregate functions are "built-in" and only operate on known primitive types. I would like the ability to bind a data column to some custom type, like Real or Fraction or something, and have a delegate be called when I want to compute the Avg, Sum, Prod, etc of values contained in that column. Also, I would like the ability to write my own aggregate functions as well.
that’s all I can think of for now.
I have a small one regarding the C# compiler. I would like to be able to change the IntermediatePath that the compiler uses. I find it quite annoying for the compiler to generate files in the source tree by default (and *not* be able to change it). <note>this may already be fixed, I don't know</note>
Don’t get me wrong. Visual Studio works wonderfully working under the assumption that you wish to use the exact directory structure that Visual Studio enforces. But as soon as you step out of that directory structure, you’re screwed.
The reason why I want to do this is for project management reasons. On other projects, I find the easiest structure to work with is for the project to have several directories at the root level "docs", "bin", "test", "obj", and "src". This way, when using non-VSS source control, you can dump everything in the src tree into source control and not worry about implementing a hack to delete the obj directory or move it, or anything else.
Just a pet peeve of mine ;).
I understand that you can change the output directory, but that’s only one part of the equation which makes it even more frustrating to me.
I’d like to second the earlier request for an easier way of writing straightforward properties. I’m forever writing simple wrappers around private fields because I need (or at least I want) a property. (Often ‘need’ not merely ‘want’, because I’m building a component that has to be editable in VS.NET.)
I like the way the existing ‘event’ syntax generates a field and a pair of accessors for me. I’d like something similar for properties, because I write a whole lot more of those!
It would be nice of course if there were some way of also generating property change notification events for the property automatically.
Great thread Eric (and fellow readers!).
My 2c worth:
1. Smart inline array creation. Instead of writing this:
myStringArray.Split(new Char[] {‘a’, ‘b’})
I’d like to be able to write this:
myStringArray.Split({‘a’, ‘b’})
(the compiler would need to check the types of all the elements in the "in-line" array, to see that they match (one of) the array types accepted by the method.
2. I'd like to be able to override static methods. For instance, I wanted to set up a number of static classes to cache things. (The first time you call the "Get" method on the cache it retrieves the object, and then caches it). The "natural" solution was an ancestor class that defined the basic, thread safe, load-on-demand behaviour, and then derived classes to override the method that actually does the one-off retrieval of whatever it is that you want to cache. Not possible at present tho, since static methods cannot be virtual.
3. A class library request rather than a language one: a "Magic.Empty" class. Instead of writing code like this:
if (myString is null || myString.Length == 0)
OR
if (myArray is null || myArray.Length == 0)
I could write
if (myString == Magic.Empty)
OR
if (myArray == Magic.Empty)
Where Magic.Empty overloads the == operator in such a way that it responds correctly to null arrays and null strings, and to zero length arrays and strings.
4. Speaking of closer integration of C# with databases, I think that there’s a lot of handy stuff that can be done with operator overloading in the existing language. In particular, I’d like to be able to write SQL statements in C# syntax. Like this:
Table cust = new CustomerTable();
Table order = new OrderTable();
SqlStatement s = new SqlStatement();
s.Select (cust.ID, order.ID, order.Total);
s.From (cust, order);
s.Where(cust.Class = "Premium " &
order.CustID = cust.ID);
Notice that the where clause is valid C#. Only two tricks are required to make this work: (a) generated classes that define the database structure (my "…Table" classes above) and (b) clever use of operator overloading on those classes, so that when you combine their "field" properties (which are objects too) the result of the expression is an object tree that describes the expression.
That object tree can then be converted to actual SQL as and when required, possibly tailoring the resulting SQL differently depending on what the target database is at runtime. Advantages include compile time checking, intellisense and an all round cleaner approach to SQL. Useful in application server C# coding and in Yukon C# stored procs.
5. Re the request for compiler checking of strings for data binding, I have a suggestion about that on my website. (See URL for this message.) However, what we’ve found on my current project is that it works almost as well to just generate constants from our datasets, with a little tool we wrote. There’s one constant for each column.
Re this:
myStringArray.Split({‘a’, ‘b’})
What I’m suggesting is just like Delphi’s
myStringArray.Split([‘a’, ‘b’])
(not that Delphi actually has a split function on string arrays, but it does have the functionality that I’m suggesting).
John
I would like to see XPath or OPath support for filtering an object graph or list.
Instead of:
List<int> myInts = new List<int>();
myInts.Add(1);
myInts.Add(2);
int i = 0;
myInts.ForEach(delegate(int a){i+=a;});
Console.WriteLine(i.ToString());
How ’bout:
List<int> myInts = new List<int>();
myInts.Add(1);
myInts.Add(2);
int i = myInts[sum(.)];
Console.WriteLine(i.ToString());
I know this is a really simple example. For a good reference on what I am referring to, check out ObjectXPathNavigator.
Add and Delete property
Example:
///a is an object
///add:
a.addproperty("name");
///use:
a.name="firstfire";
///delete:
a.delproperty("name");
Re:
if (myString is null || myString.Length == 0)
Whidbey does get this:
if (String.IsNullOrEmpty(myString))
I asked for the same static method to be put on Array but no dice. Not quite as nice as what you suggest but it is better than what we have today (at least in terms of CTS).
Computers can solve many problems. Any problem can be divided into many very small problems, and any problem can be divided in different ways. For example, "1+2+3=?" is a problem; it can be divided into two problems:
1. 1+2=?(1+2=3)
2. 3+3=?(3+3=6)
This is the first way.
and the second way to divide is:
1. 2+3=?(2+3=5)
2. 1+5=?(1+5=6)
and many other ways.
(One more:
2=1+1,3=1+1+1,1=1,1+2+3=1+1+1+1+1+1)……
C# can solve tens of thousands of basic problems, but it can't solve some basic problems. I think there exists a "basic problem set" which can solve all the problems "the computer can solve". What basic problems can't C# solve? That's a good problem. Another good problem is how to organise problems: "divide, unite… speed…".
Allow delegate for properties. This would allow type safe reflection and databinding.
Change the assembly search order of the compiler.
This maybe already done in version 2 don’t know, sorry.
1. Current working directory
2. The common language runtime system directory.
3. Directories specified by /lib.
4. Directories specified by the LIB environment variable.
This makes it a real pain to compile in compact framework assemblies
as the runtime directory always gets preference. Having the runtime dir
searched last would make it a lot easier.
Thanks
Bart
I would like to suggest a new custom Attribute type that would be targeted at instrumenting methods, classes, etc.
For example, we can imagine defining a NonNullableAttribute which is used to tag non-nullable parameters of methods:
[AttributeUsage(AttributeTargets.Parameter,AllowMultiple=false)]
public class NonNullableAttribute : Attribute
{}
and a simple usage of this attribute would be:
public class MyClass
{
public void Method([NonNullable]Object o)
{
// o is non-null;
}
}
Here’s a possible implementation: suppose that .NET provides the following custom attribute:
[AttributeUsage(AttributeTargets.Method,AllowMultiple=false,Inherited=true)]
public class ParameterInstrumentorAttribute : Attribute
{}
Our custom attribute class is then modified with it:
[AttributeUsage(AttributeTargets.Parameter,AllowMultiple=false)]
public class NonNullableAttribute : Attribute
{
[ParameterInstrumentor]
public static void Instrument(Object value)
{
if (value==null)
throw new ArgumentNullException(pi.Name);
}
}
The static method Instrument would then be injected at the beginning of MyClass.Method 🙂
This would mean that we could instrument natively .NET code…
Eric Gunnerson asks for suggestions about what should be added to the version of C# after Whidbey. My only suggestion was for covariant return types. I don’t come across the need for this feature every day but every now and…
I really like macros __FILE__ & __LINE__ from
C++ and I really miss it in c# 🙂
-bda-
Hi Guys
I really like C# as a language; I believe I'm 20% more productive in it than Java. I'd like to see the language keep the things that make it so nice to use, i.e. consistency, clarity of vision, simplicity (small tight language core, first class primitives, two tier learning curve as you can achieve simple things quickly then move on to the more advanced stuff like interop.), and a well balanced tradeoff between language purity and practical considerations. I see a danger in C# evolving into another sprawling C++ monstrosity if it tries to be all things to all people.
With this in mind I thought I could contribute some thoughts on criteria which may be useful when selecting new features for the language:
1)Should not modify existing behaviour except when
2)existing behaviour is broken badly (e.g. volatile behaviour was changed in Whideby with the IsVolatile attribute)
3)should not add to the standard library unless unavoidable (these are language features not library features). String and the other dual primitives are a special case, but even so should not be changed unless absolutely necessary. A good degree of decoupling between library and language features is particularly important in the embedded world.
4)should remain pure to the semantics of the language (C# is an strong object model OO language not a scripting language, weak object model language, or a functional language)
5)should simplify the grammar where possible, not add additional complexity
6)should improve code clarity not muddy it – a lot of common ‘syntactical sugar’ abbreviations which save a few keystrokes make code much harder to read. The emerging generations of smart editors (e.g. EMACS;-) will make this sort of thing less important
7)should not detract from machine readability of the source code
8)should not be vendor specific
9)should not be data-store paradigm specific (e.g. no support for in-line SQL which is not capable of also supporting OQL or XQL)
10)should not be hardware platform specific in either bit lengths, pointer assumptions or atomicity assumptions (e.g. operations on ‘long’ are not atomic on x86 platforms – this is badly broken and exposes underlying hardware semantics to a VM based language!)
11)should not use attributes gratuitously – attributes are annotations, not core language elements
12)should be consistent – the same language construct should behave the same in all cases
I think the Generics changes did a very good job of changing the language in a well thought out and consistent fashion, with one slight issue being the impact on System.* interfaces which break existing code compatibility (IComparable anyone?).
I suggest examining (or writing a program to) a large volume of existing code looking for repeated patterns which can be implemented as language features (e.g. events, delegates, trivial properties)
My personal preference is to leave the language alone as much as possible for the next release and work on tightening up the optimiser and the semantic checks (as several other people have suggested above). Some other minor suggestions: make every tool support piping stdin/out as having to use temp files for ILasm is a pain, support a dump of the internal compiler parse tree a la GCC's intermediate format, improve consistency between C# reflection and ilasm syntax, implement a DFA pass in ILasm to calculate maxstack, properly optimise jump length in ILasm. Generally make it easier to integrate the tools.
Hope this is helpful
Cydergoth
—
The cyder is strong in this one Lord!
This isn't a language suggestion per se, it's more a .NET framework thing, but I think it would be a REALLY good idea.
A managed HTML/CSS/CSS2/XHTML/XML DOM compliant renderer with full editing capabilities built into the framework that isn’t a wrapper around mshtml.dll or shdocvw.dll
A couple of things would be accomplished:
1. Managed code = very few security problems.
2. Fixes IE’s horrible standards compliance because IE 7 could be based on the managed renderer.
3. Finally gives us rich text editing that means something instead of RTF support that isn't used for anything nowadays.
The shdocvw.dll wrapper that has been put in is cute and all, but largely pointless and doesn't provide editing. I'm not exactly sure why this functionality was added to the framework when you can put it in yourself by importing the dll. If it had been based on mshtml.dll and given edit capabilities (even rudimentary) then it might have been useful, but then you can always get Tim Anderson's and you're as far along.
Add a language construct to facilitate caching. Caching seems to be one of those areas where the divide between expert and non-expert implementations is severe.
ASP.NET does a very nice job of this, and this is very possibly a BCL request.
On a totally different tack, how about adding some constraint like behaviors to a foreach loop:
foreach( Object o in Objects [where o.ToString() == "MyObject"] ){}
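For comparison, here is a minimal sketch of how that filtering can be approximated with a C# 2.0 iterator and the BCL’s Predicate<T> delegate; the Filtered helper is just an illustration, not an existing API:
using System;
using System.Collections.Generic;

static class Sequence
{
    // Yield only the items the caller's predicate accepts.
    public static IEnumerable<T> Filtered<T>(IEnumerable<T> source, Predicate<T> test)
    {
        foreach (T item in source)
            if (test(item))
                yield return item;
    }
}

// usage, roughly what the suggested syntax would mean:
// foreach (object o in Sequence.Filtered(objects, delegate(object x) { return x.ToString() == "MyObject"; })) { ... }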
How about a mechanism for checking parameters (something that is often left out)?
Constraints could be added in a declaration (and collapsed in the editor), throwing according to guidelines. I am not knowledgeable enough to know whether or not being a full-blown language construct would translate into better static checks or optimizations, but it might.
So, to summarize:
COmega stuff, along with generics (VS2005 makes some huge improvements and it is hard to go back to 2003 to use COmega. I think some of the advances in 2005 tools would make COmega a lot easier.)
Extend the use of [where] to add constraints to foreach constructs, and to add parameter checks to function declarations.
Just throwing stuff out there for you to consider.
I’d like casts from integral to enum types to do an implicit Enum.IsDefined check and throw an InvalidCastException on failure.
This check would not apply to [Flags] enums.
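Until then, a minimal sketch of the check written out by hand (the Days enum and the helper are illustrative only):
using System;

enum Days { Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday }

static class CheckedEnum
{
    // The manual equivalent of the requested checked cast; [Flags] enums
    // would be exempt from such a check.
    public static Days ToDays(int value)
    {
        if (!Enum.IsDefined(typeof(Days), value))
            throw new InvalidCastException("No Days member has the value " + value);
        return (Days)value;
    }
}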
Add-on to the automatic-property discussion: support for the ‘readonly’ keyword would be great.
And for non-readonly automatic properties, I second the suggestion of firing change-notification events. Maybe a "public string Foo event FooChanged;" syntax, or "… event OnFooChanged()" to call a method (which you would then have to define manually) rather than firing the event directly. Also support for calling a method that takes a single parameter of the same type as the property.
Oh, and virtual constructors and virtual class methods (cf. Delphi) would be really nice.
I really liked the post about making it easier to translate between object world and rdb’s. That’s probably 25% of a standard business ap, minimum.
I’d also REALLY like to be able to have arrays implement operator overloading so you can write code Matlab-style. I do about 20% scientific computing and you really make code a lot more readable when it looks like what’s on the chalkboard. I know you can do it with dynamic code generation and/or creating your own classes, etc, but that’s pretty ugly.
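For context, a rough sketch of the wrapper-class workaround being referred to (the Vector type here is hypothetical); the request is essentially to get this expressiveness on plain arrays without the extra class:
struct Vector
{
    private readonly double[] data;
    public Vector(double[] data) { this.data = data; }

    // Element-wise addition, chalkboard style: c = a + b.
    public static Vector operator +(Vector a, Vector b)
    {
        double[] sum = new double[a.data.Length];   // assumes equal lengths
        for (int i = 0; i < sum.Length; i++)
            sum[i] = a.data[i] + b.data[i];
        return new Vector(sum);
    }
}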
I sent this a while ago to a microsoftie, but I’ll post it again.
I have been playing around with factory patterns lately and I have come
to the conclusion that in general factory patterns are way overrated and
just add another level of complexity to your code – that is, until you
need them, but you would have no way of knowing when you need them, so
you’ve got to write an object builder method for each object you create.
And of course this will break your old code that uses direct
instantiation.
Here is a suggestion that I have been toying with. Perhaps it is
possible to add a static method called "Factorize" or something like that which
returns an object. Then you overload the "new" operator to first call
this static method – if this method returns an object then you just pass
that object back to the caller; if the method returns null then you
continue with the normal object creation and instantiation process.
The advantages would be twofold. Simpler object definition and creation
without having to cross-build the factory class, and you can selectively
enhance specific objects when needed. Another advantage is that now all
code written until now has automatically been factory-ized, and if changes
in the object manufacturing process are required you just have to
implement the static method. This would make singletoning especially simple.
One possible problem is that the code might be reentrant if the Factory
method tries to actually create a new object.
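For comparison, a minimal sketch of the conventional workaround today – a static factory method the caller must know to call, which is exactly what the suggestion would let "new" invoke implicitly (Widget is a hypothetical type):
class Widget
{
    private static Widget cached;
    private Widget() { }

    // Callers write Widget.Create() instead of new Widget(); the suggestion
    // would make 'new Widget()' route through a method like this automatically.
    public static Widget Create()
    {
        if (cached == null)
            cached = new Widget();
        return cached;
    }
}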
Perhaps this is a really stupid suggestion but I would love to see native code access to a database schema. Yes, it has been mentioned before that the correlation between databases and the languages really needs improvement, but here is my idea:
You could do something like
using System.Data.SqlClient.Schema
and then have the programmer access the schema of the database by creating a new object for that database that would access the underlying XML metadata information contained within the specified database. This would really speed up production of classes, as you could then generate classes that would analyze the schema dynamically and generate the corresponding classes for those tables, views, stored procs, etc. Perhaps this is a really stupid idea or has been mentioned before, but I thought I would give my 2 cents worth.
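For what it’s worth, ADO.NET 2.0 gets part of the way there with DbConnection.GetSchema; a hedged sketch (the connection string is a placeholder, and the exact schema collections vary by provider):
using System;
using System.Data;
using System.Data.SqlClient;

class SchemaDump
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection("Server=.;Database=Northwind;Integrated Security=true"))
        {
            conn.Open();
            // "Tables" is one of the standard schema collections for SqlClient.
            DataTable tables = conn.GetSchema("Tables");
            foreach (DataRow row in tables.Rows)
                Console.WriteLine(row["TABLE_NAME"]);
        }
    }
}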
Please can we have a switch in the ide that ignores case sensitivity and allows me to write CLS compliant case insensitive C# source code …
comega stuff; tuples and streams
I love the capability to do this:
{1,2,3,4,5}.{ it.Foo(); }
I am not too sure if someone else mentioned this; hence, I do not want to be redundant. However, multiple inheritance is something that I think serves a very useful purpose.
I’d like to implement an interface, or override a virtual or abstract base class with a method or property that returns a subclass of the class defined as the return value in the interface or base class.
For example:
abstract class BaseClass
{
public abstract object SomeMethod();
}
class SubClass : BaseClass
{
public override SomeClass SomeMethod()
{
…
}
}
I’ve run into several places where I’ve wanted to do this, and I usually have to add in some ugly hacks to get around the language limitation.
Sometimes it would be great if one could use
decimal x = null;
if( … )
x = 4;
if( x!=null )
…
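For reference, C# 2.0 (Whidbey) nullable value types provide exactly this pattern; a minimal sketch (someCondition is a placeholder):
decimal? x = null;          // System.Nullable<decimal>
if (someCondition)
    x = 4;
if (x != null)              // or x.HasValue
    Console.WriteLine(x.Value);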
It would be great if MSDataSetGenerator would use a virtual function to convert a stored value to another one, like
instead of
public System.Decimal HLD_ID {
get {
return ((System.Decimal)(this[this.tableV_HOLIDAYS.HLD_IDColumn]));
}
set {
this[this.tableV_HOLIDAYS.HLD_IDColumn] = value;
}
}
better
public System.Decimal HLD_ID {
get {
return ToDecimal(this[this.tableV_HOLIDAYS.HLD_IDColumn]);
}
set {
this[this.tableV_HOLIDAYS.HLD_IDColumn] = value;
}
}
public virtual decimal ToDecimal(object obj)
{
if( obj==Convert.DBNull || obj==null )
return 0;
else return Convert.ToDecimal(obj);
}
public virtual string ToString(object obj)
{
if( obj==Convert.DBNull || obj==null )
return "";
else return Convert.ToString(obj);
}
deterministic destruction of object(-references) that go out of scope is not available, like in C++.
The caller can use the "using" statement instead, but it still is up to the caller to do so.
My suggestion is to "notify" objects when their reference(s) go out of scope.
E.g.
{
MyInstance i;
…
} // <– Here, a notification is sent/called for ‘i’, just right before it goes out of scope.
Maybe through some special interface ("IDisposableNotify") and/or events.
Hi,
I know the below is not a C# request. But this will be the ultimate one which developers need in today’s competitive world.
1. Voice recognition software attached to .NET to spell code through VS.NET (no keyboard/mouse required)
2. Spectacles with a virtual monitor (no monitor required / viewing a big monitor virtually through the spectacles)
With Regards,
Sankar.B
HCL
India
I’d like to be able to use primitives in Generics _with_ support for operators, as in something like
class NumericAlg<T> where T: primitive {
public T MyAlgorithm(T num1, T num2){
return num1+num2;
}
}
This would help in implementing support for numeric algorithms (no need to duplicate the same algorithms for different precision types), and make C# a great language for numeric algorithms. Of course, I’d also expect the solution would not lose out on any of the performance benefit compared to writing a specific implementation for each precision.
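In the meantime, a hedged sketch of the usual workaround – routing the arithmetic through a helper interface – which is precisely the wrapping overhead the request would remove (ICalculator and DoubleCalculator are hypothetical types):
interface ICalculator<T>
{
    T Add(T a, T b);
}

struct DoubleCalculator : ICalculator<double>
{
    public double Add(double a, double b) { return a + b; }
}

class NumericAlg<T, TCalc> where TCalc : ICalculator<T>, new()
{
    public T MyAlgorithm(T num1, T num2)
    {
        // The calculator struct stands in for the missing operator constraint.
        return new TCalc().Add(num1, num2);
    }
}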
Eiffel-like constraints, for example:
class C
{
int a, b;
object o;
where
{
a > b;
o != null;
}
public int M(Foo param)
where param != null
return >= 0
{
…
}
}
and also a keyword to force a method to call base when overridden, e.g.:
base class :
public virtual callbase void A();
derived class:
public override callbase void A()
{
base.A(); // Error if not present
}
Regards,
Cyp
I also would like to be able to do serious, ML-like functional programming on .NET, but I realize that features like closures, type inference etc. would change C# too dramatically.
my suggestion is: Microsoft leaves C# as an elegant but fairly conservative imperative language but finally commits itself to supporting ONE functional language. SML.NET is dead, F# seems to be dying, Scheme.net and Haskell.net never got very far despite Microsoft support, Nemerle and Mercury live in small academic ghettos. PLEASE, Microsoft, give us at least one commercially useful functional language! We will use it side-by-side with C# and VB and stop asking for functions-as-first-class-values in these languages! 🙂
Cheers, Wendelin
I think the ‘using’ semantics are just fine, but I hate the fact that you have to create a new block for each using statement. I think it would be cleaner if the Dispose method would be called automatically at the end of the enclosing block, like this:
public void foo(string file)
{
using StreamReader reader = new StreamReader("bar.txt");
// … read the file
// the file is closed automatically
}
I would like to be able to use yield inside an anonymous method.
By order of preference:
* Extensible enums like in the MBF slides from PDC
* IL inside C#
* I also like the array-of-DefaultHandler idea
by Sean Malloy – a kind of delegated inheritance like in VB6
* oops, s/implementedBy/providedBy : good also,
but rather a management contract tool than a language feature (a la project &| projectItem attribute)
* Hide methods, RenameTo methods on inheritance like in Eiffel, e.g.
class A { F(); G() }
class B : A where hide F, rename G to H { }
sometimes useful
* VS projects should handle .netmodule input/output
Good luck with this swarming haystack
PS: Of course Cw will be cool, one day …
PS: Of course Msh, Bts and functional language interop too
I forgot typedef (rather at the top of the list)
ouf
Option to force floating-point divide-by-zero to throw an exception (instead of returning PositiveInfinity, NegativeInfinity, or NaN as it does now). In most cases, divide-by-zero *is* an error condition.
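Until such an option exists, the check has to be written at each call site; a trivial sketch (names are illustrative):
using System;

static class Math2
{
    // Opt in to integer-style divide-by-zero behaviour by hand.
    public static double SafeDivide(double numerator, double denominator)
    {
        if (denominator == 0.0)
            throw new DivideByZeroException();
        return numerator / denominator;
    }
}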
You know, we really should have a forum for these feature requests, where each suggestion could be a new thread, and anyone who’s interested could respond to individual suggestions. I’ve seen several of these that I wanted to comment on (e.g., suggestions that I think could be done with generics), but any response I posted here would be lost in the noise. Maybe you guys need something like Borland’s QualityCentral.
The ability to specify operator constraints on generics. So I can write generic math functions that work on doubles, ints, and maybe even TimeStamps.
Someone else mentioned it above: Multiple Inheritance. One example of where I think multiple inheritance would help (in ASP.NET) is a class that contains stuff designed to improve upon Control, but when I go to actually implement the specific control, I may need to inherit from WebControl, or I may want this functionality on a UserControl.
As it is I have an interface that is implemented in my subclass of WebControl, and my subclass of UserControl with the functions just copied between them since everything the function uses is present on Control itself.
This might be solvable only with single inheritance if I had control over the entire hierarchy, but as I am dealing with a set of base classes that I can’t modify or insert a class in between that doesn’t work in this case.
As an old Delphi coder, there are a couple of things I really miss. Typesafe enumerations and sets would be swell. For example, declare this enum:
enum Days {Sunday, Monday, etc… };
In C#, you can assign any integer value at all to a variable of this type:
Days day = (Days)748;
This is meaningless, but valid C# code. (Yes, I can test it with Enum.IsDefined, but I don’t want to.) In Delphi, not only can you not assign an invalid int to a variable of this type, you can’t even assign a ‘valid’ int – you *must* use one of the defined enumeration member names, or do an explicit cast (and the assignment above would still fail with an out-of-range error).
You can also declare a set in Delphi, and use the set to easily test a variable (I am using invented C# syntax):
set of Days WeekendDays = (Saturday, Sunday);
Days day;
// code which assigns a value to ‘day’
if (day in WeekendDays) {
// do something…
}
Finally, with an enum in Delphi, you can create arrays with associated values:
string(Days) DayNames = {"Sunday", "Monday", etc…};
string(Days) DayAbbreviations = {"Sun", "Mon", etc…};
You can use these arrays to do simple lookups for text display without case statements (obviously these aren’t limited to display text strings, but can be any data type). The array elements cannot be accessed in any way except with the defined enumeration members or an explicit cast (as with the variable assignments described above).
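A rough C# approximation of the array-indexed-by-enum idea uses an ordinary array indexed by the enum’s underlying value; unlike Delphi there is no compile-time guarantee that the array and the enum stay in sync (names here are illustrative):
enum Days { Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday }

static class DayText
{
    static readonly string[] DayNames =
        { "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday" };

    // Lookup without a switch statement, at the cost of Delphi's compile-time safety.
    public static string NameOf(Days day)
    {
        return DayNames[(int)day];
    }
}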
Enough of enums – I have one more thing. Delphi has a "property" keyword which allows you to explicitly define properties and their access methods in a single line of code. For example (again in invented C# syntax):
private int _totalSales;
public property int totalSales get _totalSales set setTotalSales;
This code defines a private member variable, then defines a public property for it. The ‘get’ accesses the member directly. The ‘set’ calls the ‘setTotalSales’ method. This method must be defined separately, and accepts a ‘value’ parameter of the appropriate type (just like the current C# setter methods). In this scheme, the ‘get’ and ‘set’ can each be defined to access the member directly, or call a method. This is really just syntactic sugar, but it seems to me a cleaner, prettier, and easier-to-type way to do it than the current scheme.
I really want to see non-enforced checked exceptions. That is, I want a tool (like the C# compiler) to be able to tell me all the exceptions a particular method throws. I don’t want a "throws" clause or compiler enforced exception handling. I just want 100% bullet proof documentation on what exceptions my code needs to consider catching and what it will leak out.
1) I like the idea of "inline assembly". It would be VERY useful when calling OpenGL extensions (look at the Tao OGL wrapper for example). For example:
class CMyClass
{
public void DoSomething ()
{
int i = 0;
i++;
__ilasm
{
blah blah blah….
}
}
}
2) Allow simple "try" blocks. Sometimes you simply put:
class CMyClass
{
public void DoSomething ()
{
try
{
DoOtherThing();
}
catch
{
}
}
}
so it would be nice to allow the next expression:
class CMyClass
{
public void DoSomething ()
{
try
{
DoOtherThing();
}
}
}
thx
80% to 90% of development time is spent fixing problems – so I would like changes to the language that address this.
1) A debugging runtime that allows un-running of code. At each instruction in the code there is a “state” that would need to be saved so that when you get to a problem section you could back up to any previous “state”. Then allow for the code following that “state” to be changed and re-run.
2) A debugging tool that will allow permanent information to be attached to the code. In this fashion a customer in the field can turn-on the debugging and send back a trace file that tells of every method/function that is called and every parameter that gets called in the order that it is called.
3) A decent graphical view of data structures. If it is an array with 10 items in it draw a rectangle with 10 little boxes in it. If the array then gets reallocated and grows to 20 – redraw it as a new rectangle with 20 boxes.
4) A tool in VS.NET that shows what sections of code have been reached during testing. Combine this with the capabilities of NUnit.
5) The ability to look at the code in different “views”. Sometimes I want the code with all the XML comments in it – sometimes I don’t want to see all the XML comments. Sometimes I want to see a flow-chart of a routine. Sometimes I want to see the code itself. Sometimes I want to see the testing status of the code.
6) A spelling checker. Word is phenomenal at this. A spelling checker for the comments and for the code. If I typo a variable – why is it that the VS.NET environment can’t tell me right then with a little red underline – or if I just transpose a character or two – why not just auto-correct it.
I don’t know how involved the compiler is in the creation of assembly metadata (I’m assuming fairly), but a scenario I come across when making class libraries is maintaining consistent and up-to-date documentation.
Consider the class A:
public class A
{
    int x = 0;
    ///<summary>The X position of A</summary>
    public int X { get { return x; } }
    ///<summary>My class A</summary>
    ///<param name="aX">The X position of A</param>
    public A(int aX)
    { x = aX; }
    ///<param name="newX">The X position of A</param>
    public void DoSomething(int newX, int other)
    { /* … */ }
}
You can see that I’ve repeated the comment for variable X throughout the class definition, and for more complex classes this situation usually gets worse.
This is a real issue when trying to maintain consistent class documentation, especially when refactoring (thankfully the actual param ids are already refactored along with code in 2005). What I’d imagine is the ability to set up a comment chain through which comment summaries, params, etc. can be inherited. Perhaps something like:
///<summary>The X value for A</summary>
public int X {get {return x;} }
///<param name="newX" comment="this.X" />
public void Foo(int newX)
Now when the class is compiled the two comments would be identical (I imagine this would use a local copy of the comment for performance).
This could then allow wrapper classes to use internal classes’ comments for consistency, and with a little modification also be used for method and constructor overloading.
A way to emit LdToken instructions from C# would be highly desirable.
A lot of testing and logging code could benefit from having access to the MethodInfo (or equivalently, RuntimeMethodHandle) of the current method.
Presently, the best way I am aware of to access this information is via System.Diagnostics.StackTrace / StackFrame. This is inefficient, and I believe it can be incorrect in the face of inlining.
If C# code like this:
class SomeClass {
public void SomeMethod(int x) {
Logger.LogCall(thisMethod);
// do stuff
}
}
Could emit IL like this:
// method header
ldtoken method instance void SomeClass::SomeMethod(int)
call Logger Logger::LogCall(RuntimeMethodHandle)
// IL to do stuff
// method footer
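For comparison, the closest purely managed workaround today is System.Reflection.MethodBase.GetCurrentMethod(), which avoids building a StackTrace but still pays a reflection cost and, like StackFrame, can be confused by inlining (Logger here is the poster’s hypothetical type):
using System.Reflection;

class SomeClass
{
    public void SomeMethod(int x)
    {
        MethodBase current = MethodBase.GetCurrentMethod();
        Logger.LogCall(current.MethodHandle);   // RuntimeMethodHandle of this method
        // do stuff
    }
}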
I’d like to see a raise accessor for events, especially since you guys have already added multiple visibilities for each property accessor. I’d like to write events like this:
public event EventArgs ValueChanged
{
    add
    {
        this.Events.AddHandler(EventKeys.ValueChanged, value);
    }
    remove
    {
        this.Events.RemoveHandler(EventKeys.ValueChanged, value);
    }
    protected raise(object sender, EventArgs e)
    {
        EventHandler handler = this.Events[EventKeys.ValueChanged] as EventHandler;
        if(handler != null) handler(sender, e)
    }
}
I think that this has been considered, and I think that somebody added the ability to Mono’s compiler at one time (but I’ll have to double-check that in a bit). Seeing as IL supports raise accessors and Managed C++ has it, I don’t see why C# shouldn’t have it. The trickiest part is the syntax for the definition…do you put the argument list again or rely on an implied argument list based on the delegate type? Either way I’d be happy.
Having this feature partially solves the cannot-call-a-null-delegate problem, but more importantly keeps me from having to clutter my class with protected or internal methods for raising events. It’s bad enough having a public ValueChanged event and a protected OnValueChanged method, without having to add a protected/internal/whatever RaiseValueChanged method. It’d be more clear to call the exposed event itself when you want to raise it.
When reading my above post, ignore my idiocy and change EventArgs to EventHandler and put a semicolon after handler(sender, e)
While you’re at it, fix any other problems that I didn’t catch before posting. 🙂
hrmm, damn webapp failed and appears to have lost my response.
Anyway, I did add raise support to mono’s C# compiler experimentally some time ago. I blogged about it at [1]. The source is both outdated, incomplete, and unreleased, however. I ran into a syntax stumbling block (explained in the blog) that I never finished. There was little interest in it and I had a number of other things to do at the time. At some point in the future I might finish it, if anyone is interested.
1.
Hmm.. I didn’t really think about the case where you don’t provide an add or remove accessor, because I never use them anymore. I suppose if a class only had a few events I could justify directly exposing the event member, but it always seems like a bit of a waste to store null delegates for every event when the client code may never hook up any of them.
Perhaps the raise accessor simply couldn’t be used without an add and remove accessor. So an event could have add/remove or add/remove/raise. If you really wanted a custom raise without doing much work in add or remove, I guess this would work:
class MyClass {
private EventHandler _valueChanged = null;
public event EventHandler ValueChanged {
add { _valueChanged += value; }
remove { _valueChanged -= value; }
raise { if(_valueChanged != null) _valueChanged(sender, e); }
}
}
Since you brought up return values, I now think that leaving off the parameter list for the raise accessor may be a better solution, and make them implied just as "value" is on add/remove and set accessors.
Of course, the more I think about the problem the more I realize why raise accessors will never be added to C#. Still, I believe that they help readability by encapsulating the entire event, instead of just part of it, in a single construct.
I agree they certainly do encapsulate things better into a single construct, however I also realize, as you do, that there are issues that make them difficult in C#.
Over time I prefer the OnXxx method, *HOWEVER*, I don’t think that it really deals with all possible situations and I think a combined raise+OnXxx makes more sense. You override OnXxx to handle the event while you override raise to change the way the event is actually raised.
I do think it’s pretty rare someone would want to change raise semantics, but it would certainly make virtual events feasible and would help with the null check and potentially centralize locking. However I don’t think any of these issues are insurmountable with the OnXxx method, just mildly less readable and flexible.
I still stand firm that C# should at least support *calling* raise methods on events, even if it doesn’t support producing them.
Anyway, any more discussion about this is probably well off topic here. We are already probably flooding Eric with a lot more text than he actually wanted. Further responses are (probably) more correctly discussed on my blog post or via email (I’m at onyxkirx at comcast dot net if anyone has any comments to forward).
It would be nice if we could have SQL’s "in" keyword in C#.
For example:
int i=this.getValue();
int[] goodValues=this.goodValues;
if (!i in goodValues)
{
//throw exceptions
}
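The spelled-out equivalent today, for comparison (Array.IndexOf returns -1 when the value is absent; getValue and goodValues are the poster’s hypothetical members):
int i = this.getValue();
int[] goodValues = this.goodValues;
if (Array.IndexOf(goodValues, i) < 0)
{
    // throw exceptions
}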
When multiple teams are involved in development of a single assembly, it’s sometimes desired that we have another type of access modifier, using which a team can encapsulate its code and can enforce that only certain classes (facades) should be used, even internal to the assembly.
public class SomeClass { /* calls FacadesFromTeamA */}
internal class FacadesFromTeamA { /* */ }
friends {FacadesFromTeamA} class TeamAHelper {/* only friend classes can access this class*/}
I write a lot of numeric and scientific software and I think it would be better to add the following to the language:
* Like Bunnz suggested, make operators available to generics so we can define a complex or matrix class over a basic type like double:
T Add<T>( T a, T b) where T: operator+(T, T) {
return a + b;
}
The only way to overcome that now is to make sure all the data types used inherit from some interface that provides the operators we need, which forces us to wrap all the basic types.
* Make a complex type a built-in type so most numeric software uses the same basic type, instead of each one inventing its own complex type.
* Support generics of generics, something like
class Foo<test<T> >{};
* Extend the matrix class in the image to be general purpose, not limited to 4*4 only.
Regards,
Emad
One thing about Python that is very powerful are the array operators : and -. For instance, to lop off the last character of a string, the expression line = line[:-1] is superior, I think, to line = line.Substring(0, line.Length-1). Other uses of the colon/dash syntax are also very effective. Would it be appropriate to add these overloads to the string class?
Philip: I’ve responded to requests 1-4 from your comment here:
It would be nice if the compiler could emit warnings for incorrectly indented source files:
if (expression)
FuncA();
FuncB();
FuncC();
or less obvious:
if (e1)
if (e2)
{
f1();
}
else
f2();
I have found it quite common to have to read not-so-good code that has just those kinds of bugs, hard to spot when blocks get big. The problem is that the file contains unreliable info (indentation) that humans perceive and rely on but that the machine doesn’t check in any way.
Ugh, the indentation is lost in my previous entry. Let’s try again:
…if (expression)
……FuncA();
……FuncB();
…FuncC();
and:
…if (e1)
……if (e2)
……{
………f1();
……}
…else
……f2();
What about Pascal-style subranges? I HATE having to use (say) a byte when I KNOW in advance that a variable may ONLY take values in the range 0..2.
This probably won’t save space, or time but WOULD make a program a fair bit clearer (IMHO).
I am waiting for the C# and VB.NET CodeDOM parser. Why are the providers still returning "null" in Whidbey? 🙂
Greetings,
Markus
Some long lead work.
Address the things that Rocky Lhotka says VB can do that C# cannot do:
Especially focus interface related issues such as #2, #5, and #6.
And optional parameters (#8).
– support putting classes on the stack so they can be cleaned up at the end of function scope
– support true destructors (Deterministic destruction for all languages)
– support specifically deleting objects (can help performance of GC)
– Support multiple inheritance, if needed start with mixins
– Support multiple indexers for C#
– Support an option so that all finalizers can act like critical finalizers or use an attribute to indicate it on the class
rather than requiring deriving from a base class (burning our one base class).
- Fix the Visual Studio IDE so that all items can be configured per build (Debug, Release), for example binary references
- Don’t store specific file paths/locations in the .user file that are autogenerated from the project hint paths (relative paths were put there originally for a reason)
- Support aspect-oriented programming in terms of adding a prolog/epilog to a specific set of, or all, function/method calls
– Option so that shutdown of appdomains and the CLR happen more orderly (proper garbage collection, calling of finalizers, etc.)
– Make it so that Exceptions don’t take several seconds to load the first time you hit one.
– The ability to define a conditional during runtime that can conditionally compile out calls to methods during the JIT
(This would allow doing things like adding trace calls at the start of the program but they are effectively compiled out
based on a value in a config file, registry key or config information from a server. This allows for total performance
when needed and adding additional optional tests/checks when you must. An additional cool option would be to allow
swapping methods so that one method is JIT’d under one condition and another under another condition. These options
are great for high performance scenarios where the program must decide which algorithm to use at start up. Allowing
changing and re-JIT’ing at runtime would be ultimately beneficial. This would be a great feature supported by few languages.
Some might say you could use polymorphism to solve this but usually this would mean re-writing a lot of code in
a derived class, especially when we only have one base class now).
- Support dumping full stack information/core-like dumps loadable by VS.NET to debug customer-related problems
– Allow for some safe pointer manipulation without resorting to marking code unsafe
– Allow late bound object access using function names on the object without being required to resort to
Invoke calls. I want to be able to dynamically generate types at runtime and be able to access the object just like
a static assembly – (Obviously only runtime checks for inappropriate method calls would take place). This makes
the value of dynamic assemblies higher.
– The ability to add a function to a class that gets called in the JIT stage as part of building a type that
allows me to add items or change items in my class before the JIT completes.
– Support an MFC like command hierarchy for .NET so we don’t have to rebuild the basics of a GUI application each time
or build our own frameworks.
– Support hooking garbage collection getting information about when garbage collection has run and list of all objects
– Support hooking all allocations and de-allocations – sometimes you want some fine control on objects and to keep track
of some objects yourself.
– Build in support to add resources to the .resx xml files so we don’t have to resort to sample editors
– Support a default option to copy returned object references that are from a private member of a class to
avoid the problem of exposing private object internals (an encapsulation issue you could Clone if the objects supported
it but many people forget to do this especially people coming from C++)
- Support a friend-like class option (possibly with security) that allows me to securely link a class to allow access to the internals
of another class (this can be very useful for test classes. Currently it is difficult to write an NUnit test
for checking out classes that store information that you want to test but is not exposed publicly from the class, requiring
use of the internal modifier, but then that says the test class must be in the same assembly, which is not really acceptable)
– Provide additional easier support in string builder to more easily convert to and from strings so it isn’t required
as often to switch between stringbuilder and string
- Allow defining an object that will always be passed with a delegate call (this can allow differentiating which delegate
called a particular method).
– Make it easier to adjust the control ordering in a win form to fix control layout/docking issues
- Support adding attribute information to types at runtime
– Do weak references really need to be finalized when the object it was watching gets finalized/de-allocated? Wouldn’t an
interface callback be better?
- Support static virtual methods so that, for singleton classes that are derived from a base class, the base class can get
data from the derived class, so the base class implementation can make decisions based on the derived class value.
An example is a logger class that is a singleton; say you want one singleton class to map EventSource1.Log to
a specific category and another singleton derived from the base class that implements Log() to use a different category.
For example, say I want a FrameworkLog static class to log from framework code (derived from a base that implements Log) and
an ApplicationLog static class derived from the same base that logs using the application category.
Example (this won’t work since both are accessing the same base class variable), and the alternatives currently
require rewriting the base code in the derived class (not a big deal for this sample, but if you had a lot of functions
it would be):
public class BaseEventSource
{
    static BaseEventSource() {}
    protected static int _category;
    public static void Log(string s) { Event(_category, s); }
    private static void Event(int category, string s)
    {
        Console.WriteLine("Category:" + category + " – " + s);
    }
}
public class FrameworkEventSource : BaseEventSource
{
    static FrameworkEventSource() {
        _category = 1;
    }
}
public class AppEventSource : BaseEventSource
{
    static AppEventSource() {
        _category = 2;
    }
}
//would want calls to use the proper category when called
AppEventSource.Log("Hi");
FrameworkEventSource.Log("Hi");
If we could have a virtual member function for a statically derived class, we could call that function to
get the category id, which could be overridden by the derived class.
I know you’ve mentioned in a previous blog that there were no plans to support AOP in C#. However, you also mentioned that you’ve seen mostly logging and monitoring examples. While logging and monitoring are fine use cases, they sell the potential of AOP short.
Here’s the sort of example where I think AOP really shines (forgive me, this is pseudo AspectJ, not pseudo C#):
aspect AlbumPlayTriggersBilling extends ObserverPattern {
//make the players conform to roles within the pattern
//should be done with attributes…
declare parents: Album implements ISubject;
declare parents: BillingService implements IObserver;
//defines operations that trigger notification of the Observer
protected pointcut subjectChange(ISubject subject):
execution(public void Album.Play()) && this(subject);
//automatically called by super-aspect for each observer
//when an operation specified by subjectChange occurs
protected void updateObserver(ISubject subject,
IObserver observer) {
String name = ((Album)subject).getName();
BillingService svc = (BillingService)observer;
svc.generateCharge(name, 6.00);
}
}
(inspired by: )
Such an aspect cleanly encapsulates the interactions between an Album and a Billing Service, and allows for both the service and the album class to be reused (and reconfigured) independently.
I’ve also seen aspects put to good use for:
* managing client specific code
* data-driven security
* thread safety
* clear exception handling policies (e.g first failure data capture)
I dearly hope I’ll be able to leverage aspects in C# one day. For right now, it’s one of the things that keeps me hovering around the Java platform.
cheers,
ndlesiecki 9at0 yahoo com
author: Mastering AspectJ
Default parameter syntax. The question of default params has been covered before — the problem being that the default value is burned into the call site. The stated workaround is to generate overloads.
I understand the problem; it’s valid. I understand the solution; it’s valid. But don’t make me type all those overloads manually! Give me default-parameter *syntax*, but make it *generate* overloads, just as if I had typed all those overloads in longhand. That fixes the versioning problem, but makes way, way less typing (and maintenance) for me. (And lets me share the same XML doc comment between all the overloads, which is also nice.)
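For illustration, the hand-written boilerplate the suggestion would generate (the names are made up):
// What we type today:
public void Save(string path) { Save(path, false); }
public void Save(string path, bool overwrite) { /* ... */ }

// What the suggested syntax might look like, expanding into the same pair of overloads:
// public void Save(string path, bool overwrite = false) { /* ... */ }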
I look forward to Comega becoming a part of C#. I was told that Comega is not supported by either the C# or the Visual Studio teams. Why not? Comega is so cool :). I’d like C# to support AOP too. I love C# so much and think it should be the greatest modern language on the planet.
I’d like something similar to what Dan, at least a Dan, mentioned above. A cleaner using statement. Currently, if you’re using many disposable objects of different types, there are a lot of nested using statements that make the code look ugly:
using( SqlConnection conn … ){
using( SqlCommand command ….){
using( StreamWriter r …){
…
//write to a db and a file
}
}
}
It’d be cool to write one using statement for all of them at once; or as Dan said: put using in front of the declaration.
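For what it’s worth, today’s grammar already allows the using statements to be stacked without extra brace nesting, since a using statement counts as the single embedded statement of the one above it; a sketch (connectionString and path are placeholders):
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand command = conn.CreateCommand())
using (StreamWriter writer = new StreamWriter(path))
{
    // write to a db and a file
}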
I wrote this prefs document because I could not find effective prefs
documentation for the variety of technical problems I encountered.
It needs to be moved from (which is getting out of the
mozilla-related content business).
It also needs to be reviewed by an actual prefs engineer for accuracy.
Can someone declare an ideal resting place in the mozilla-org tree, so if I get
some time and do the cleanup myself, it will have a home?
iirc, chris has been working on documenting the preferences. she might be a good
reference here.
a gazillion developers touch the prefs --i'm cc'ing a couple here (brian and
seth) who might have some comments...
-->cwozniak.
>What is a preference?
Preferences are not JavaScript variables. Preferences are stored in a (more or
less) human readable form which is parsed by the JavaScript parser. Some day we
would like to remove this dependency because JavaScript is overkill for this
need (and opens us up to security issues.)
>Prefernce load order: That should be PreferEnce ;)
Again preferences are not JavaScript variables, but they will be resolved in
favor of the last entry read... unless locking (AutoConfig) is involved. Let's
not go there just now...
I don't really understand item 3 "Use default, hard-coded value for prefs".
Assuming that a value exists in *any* preference file, it will override any
value set in the code.
>Because prefs.js is loaded last... changes to a profile should made to the
prefs.js.
prefs.js is a generated file. Users should not be messing with it unless they
are absolutely certain of what they are doing.
"user.js" is actually the last file loaded. This file is only read in and never
written out, and is where the user should be installing their own personal
preferences (the default homepage is a common one). This way user can create a
new profile, drag their "user.js" file into it, and immediately be in a familiar
environment. The other advantage is that you can have comments or commented out
preferences in user.js and they won't be purged the next time the file is
written out.
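A minimal user.js sketch (the homepage URL is only a placeholder); user_pref() is the call used in both prefs.js and user.js, and commented-out lines like the second one survive here, unlike in prefs.js:
user_pref("browser.startup.homepage", "http://www.example.org/");
// user_pref("some.experimental.pref", true);   // kept even though commented out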
>Modifying preference files:
Again, users shouldn't really be mucking about in prefs.js.
>Systems administrators can modify <mozilla>/defaults/prefs/
System Administrators would generally use AutoConfig to do this sort of thing
(through CCK). Hacking individual installations is tedious.
>Care should be taken in modifying values in the "default/prefs" files...
If you (and here you means a developer) changes a value in "default/prefs", that
becomes the new default. If that value is set to the same value that is in
prefs.js, then yes, the value will be removed from prefs.js the next time it is
saved. If, however, you change the value from "0" to "3", and prefs.js has a
user value of "1", the user value will remain in effect because it is *not* the
same as the default value.
Hi I'm pulling together the Preference Reference that will help NCADM users do
customizations that they cannot do using the NCADM tool (rtm around 8/30). The
NCADM tool is a wizard that includes an Advanced Preferences page. Most of the
customizations that a majority of our customers would want to do can be
accomplished either via the wizard or the Advanced Preference Editor. CCK is a
scaled back version of NCADM (or NCADM is an enhanced version of CCK). CCK is
available for download; NCADM will be available for sale.
Also, there are appendixes in the NCADM guide (still in process, almost ready
for review) that deal with the preference architecture and remote administration.
That said, it would be good at some point to sync up the commercial doc with the
open source doc so that we are sending a uniform message.
Let's put our heads together.
Chris
I am not sure this document has anything we don't already have in our current
preferences documentation. However, if you guys/gals could get together with
Christine and put out an overall document, that would be great. If you disagree
please let me know.
keyser:
Where's the current documentation? I would have never written this if someone
had given me the info I wanted; I had to figure all this out myself and use it
to help answer a variety of networking questions I was being asked.
Believe me, I got a day job, and plenty of bugs to go with it.
*** Bug 178685 has been marked as a duplicate of this bug. ***
Is this bug still valid? We already have
btw, the packetgram.com url does not work; reporter, can you post
the doc as an attachment?
and is this bug a request for Help file content, or a request for
mozilla.org content?
The packetgram URL does work for me.
I did not understand the relationship of user.js and prefs.js until now.
I'll make a draft that has the updated changes, and attach it here, then delete
the file from the packetgram system.
I think this bug is still valid.
The prefs documentation mentioned
() was nigh impossible for me
to find. I searched the help system for prefs.js and nothing came up.
Searching mozilla brought me here, which brought me to it.
Plus, it's quite *nix-specific. (e.g. Customizing Mozilla)
(And I still can't find the pref for having MozillaMail check all IMAP folders
for new messages. Scanning about:config for it now..)
Matthew: take a look at the document in the URL field, and tell me if it is
missing anything you wanted to know.
brian: I know you are probably gone, but I finally sat down today and made a lot
of the helpful corrections you provided.
The URL is still the same.
I need to re-read your comments again, do a final re-write+spellcheck, then I'll
be punting this document into mozilla.
I'm going to remove the file and pref specific info from this file, and post it
here.
I've reopened the dupe for the per-file, per pref info.
Done.
cvs -z9 commit -m "added "A Brief Gude to Preferences"" index.html
briefprefs.html (in directory
C:\HOME\mozilla-org\html\catalog\end-user\customizing\)
Checking in index.html;
/cvsroot/mozilla-org/html/catalog/end-user/customizing/index.html,v <-- index.html
new revision: 1.6; previous revision: 1.5
done
RCS file: /cvsroot/mozilla-org/html/catalog/end-user/customizing/briefprefs.html,v
done
Checking in briefprefs.html;
/cvsroot/mozilla-org/html/catalog/end-user/customizing/briefprefs.html,v <--
briefprefs.html
initial revision: 1.1
This is only about the preferences system, not the actual prefs themselves,
right? Adding "generic" to the summary, assuming I am right. I thought this bug
was about documenting the meaning of the individual prefs.
Not really. That's the funny thing, I wrote the doc, I wrote the bug, I put the
URL in the bug. Nobody reads the doc. There used to be some pref-specific
comments, but that was because the document lived on packetgram, which was a
network-troubleshooting web site that I run.
Whatever. Ben, if you can review this doc, esp checking to see if I got bnesse's
feedback correct, I'll take ownership of this bug, and then mark it fixed.
If this document just does not do what it should, comment away.
Created attachment 121869 [details]
briefprefs.html
clarify things a little bit and add some details on the preference system.
to-do:
- example of how to change preferences by code at run-time
- naming conventions (bug 58816)
- pref-by-pref reference
Ben, are you the module owner of pref lib?
Created attachment 121908 [details]
briefprefs.html (+ pref design spec)
Nope. I wrote that document because I couldn't live without it, and needed it as
the basis of:
If you sensed the focused "get in, say some stuff, and get out" style of this
document, now you know why. My current job is to test just networking features.
You've done a great job of improving the documentation. I have not reviewed the
changes in detail, but I liked what I saw. Unfortunately, I won't be able to do
a thorough reading anytime soon. I also am not the ideal person to review a
document like this.
If you think this is ready to go, check it in, and we'll take changes from
people interactively. I've found that it is very hard to get people to review
changes before you make them.
A comment about the section "Naming conventions" in attachment 121908.
It uses several "capability.policy.default.foo" prefs as examples of how to name
preferences. In fact, those preferences are named as required by the object
names in class info and the method names in IDL. Of course, those have a naming
convention, but this is not a preference naming convention.
This suggests that caps prefs really have an option on how to name the preference,
which doesn't exist.
Created attachment 121964 [details]
briefprefs.html
Axel, those preferences existed before any naming rule existed.
What I am trying to do is to look at how existing preferences
are named, and then infer from them a naming scheme that is
compatible with current ones. The scheme needs to be useful
but not accurate as far as the history of naming is concerned.
in this version I have added much more detail on how preferences
are handled and many contextual links to LXR. I've also added
some code examples.
Created attachment 122086 [details]
600+ preference names, descriptions and valid options
This may prove helpful in creating comprehensive documentation for Mozilla
preferences.
As a further explanation to comment #22:
I currently run a project over at preferential.mozdev.org which aims to:
(1) provide a consistent GUI interface to all Mozilla prefs (this is now somewhat
superseded by about:config) and, more importantly,
(2) document all Mozilla preferences
I don't want to duplicate effort here, so if there's any way I can contribute to
the Preferences documentation effort, I'd love to help.
I have attached my project's source preferences file (which I later convert into
two RDF files using a Perl script). It doesn't document all preferences, but
has so far recorded about 600 preferences and their options. I'm happy to
massage this into a suitable format for someone if there is interest.
Daniel: this document is great! Lots of things I've wanted to know, but never
would have been able to cover myself.
a couple comments:
"arises" could be "arising"
"In Netscape product" should be "In Netscape products"
"On application exit, all user-set value" should be "values"
(See more information) has the HREF extending over the last ")"
The paragraph that is struck out about developers changing prefs is good info,
but I think it should be moved into the same area you had your sample code,
"Accessing preferences programmatically"
I've read up to the namespace section, I'll try to digest the rest when I can.
some changes checked in
I've rewritten the doc for users & administrators. reference to developers
removed (will be moved to a separate doc). The doc will be temporary (should
have been in /docs instead of catalog); I'll find a more appropriate place once
I finished my other docs on user profile.
Created attachment 125156 [details]
preference reference for developers
no idea why I bothered it, but here we go, relieving myself of this doc
Ben - in 'What is a preference?' I believe you have a typo in the spelling of
prefs.js as perfs.js - only a small thing but a potential trap for a newbie.
We have delayed this long enough -> P1
I just realised that an arbitrarily named file put in
/usr/lib/mozilla-1.2.1/defaults/pref will be read, too. That's for
mozilla-1.2.1, Red Hat release. If this is standard behaviour, it ought to be
documented.
Also, I'm trying to include default mail account settings in my site config, but
can't figure out how. The problem is setting
mail.server.<server name>.userName
This is of course user specific, but the value is always identical to the
Unix/Linux login name, so it ought to be possible to set up the value in all.js
or similar. Oh, and
mail.identity.<id>.useremail
is needed as well, but that can be derived directly from the above info.
maybe I could use the autoConfig method or .cfg file along with getenv(), but I
haven't been able to get those methods to work yet (and getenv() is not
available from .js files.)
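A hedged AutoConfig sketch of the sort of thing being attempted, assuming getenv() is exposed inside the .cfg sandbox and that defaultPref() is available there; the server and identity names are placeholders that would have to match the ones in the profile:
// site .cfg file (byte-shifted or plain, depending on the obscure-value setting)
var login = getenv("USER");
defaultPref("mail.server.server1.userName", login);
defaultPref("mail.identity.id1.useremail", login + "@example.org");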
Richard: I made a quick scan and fixed one "perf" misspelling. Thanks for the
feedback!
-> ownership to me.
I've updated the document also to remove Netscape references.
I think this document (mostly due to Daniel) is really in good shape now, so we
should start talking to other people that have less up-to-date prefs docs, and have
them link to us.
Here's a quick list I found: (no link to us).
FYI, there's a new preference being added in bug 86193.
Some comments:
Firstly the guide doesn't explain how to set the default homepage. I've tried
setting:
pref("browser.startup.homepage", "");
but firefox just dies if I put this in unix.js
Also, the guide suggests that sysadmins set things in all.js. This doesn't work
for most settings (at least on Linux). To set fonts, network proxies and paper
sizes, I had to use unix.js.
> Firstly the guide doesn't explain how to set the default homepage.
that's bug 178685
> Also, the guide suggests that sysadmins set things in all.js. This doesn't
> work for most settings (at least on Linux). To set fonts, network proxies and
> paper sizes, I had to use unix.js.
The doc says the platform-specific file (e.g. unix.js) is loaded after all.js.
As Mozilla code changes constantly, we don't want to document what prefs are
loaded in platform-specific file.
I'm not spending a lot of time w/ firefox, so if people can figure out more of
what goes on there, let's start a new document or make changes to this one.
The document in the URL is outdated. First, the directory structure for prefs
has changed. Also, there's no information about how to lock a preference in the
UI, which can be critical for admins, and I haven't found the right way to do
this: the changes are just ignored, but the preference is still reachable, with an unknown
value. Please work a little more on this, even with basic examples (I've tried
changing the cache disk size, if you want to take it as an example), together with
an additional file, in order to keep it separate from the original .js files.
The proper way to change preferences in a profile is via about:config these days.
As for the other items you mentioned, I'm not reading every single prefs bugs.
Is there a bug number you can provide, or can you be more specific? For example,
I don't know what you mean about the directory structure...
(In reply to comment #37)
> The proper way to change preferences in a profile is via about:config these days.
This contrasts with what's said in the url, since those are supposed to be
guidelines for corporation admins to provide specific default settings for their
corporations, avoiding spending a lot of time applying the same setting
repeatedly and by hand, over and over again.
> As for the other items you mentioned, I'm not reading every single prefs bugs.
> Is there a bug number you can provide, or can you be more specific? For example,
> I don't know what you mean about the directory structure...
Take a look at Mozilla 1.7.x directory structure, which is different from
previous versions, and the .js preferences files aren't in the same paths as before.
Way back when the very first version of Netscape was released it was very messy
for an administrator to set site defaults, and it has got harder and harder as
the years have gone by. The document
is good and describes a nice simple scheme, but I'm not convinced it works any
more (a previous comment hinted about new directory structures). I have created
a file defaults/pref/all.js on a Windows installation of Firefox 1.0, but it
seems to be ignored both for new and existing users. Have I done something silly
or is this other people's experience?
This stuff is important! If you want people to get a good impression of Mozilla
you need to help the guy who administers a large site make life as easy as
possible for the thousands of users under his or her control.
> The document
>
> is good and describes a nice simple scheme, but I'm not convinced it works any
> more (a previous comment hinted about new directory structures). I have created
> a file defaults/pref/all.js on a Windows installation of Firefox 1.0
Hmmm. I think it might just be a matter of updating firefox.js instead of all.js.
I fully agree with your comments, though. Actually, this may not be a
documentation issue at all; perhaps the problem is not that the mechanisms are
undocumented, but that they are unnecessarily complex. I mean, why do we have to
mess around with byteshift etc.? And why do you have to set the config file name
in the first place? Why doesn't "general.config.filename" default to a file that
is normally not there, and may thus be added by a system admin without having to
update a file from the distribution? Or even better, how about a site config
*directory* where all files are read as part of the init sequence?
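For context, a hedged sketch of the two pieces a site admin typically wires up today (general.config.filename is the pref under discussion; general.config.obscure_value is, as I recall, the byte-shift setting, and the file name and values here are illustrative):
// in a defaults/pref/*.js file shipped with the site install:
pref("general.config.filename", "mozilla.cfg");
pref("general.config.obscure_value", 0);   // 0 = plain-text .cfg; the default expects byte-shifted content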
(In reply to comment #40)
> Hmmm. I think it might just be a matter of updating firefox.js instead of all.js.
>
Sadly this doesn't seem to work either. I quite agree with your comments: the
problem is that the configuration mechanism is
(a) too complex
(b) doesn't conform with the documentation (i.e. is BUGGY!)
(c) keeps on changing from release to release.
I have been installing netscape and its successors on multi-user Unix systems
ever since Netscape first came out, and there has never been a release that
didn't force me to write a wrapper script to apply my site configurations. My
actual configuration lines have hardly changed at all, apart from when the
preference files switched to Javascript syntax, but every release I have wasted
hours fathoming out where to put them.
I've now hit a dead end trying to install it on a Windows system, because my
Windows scripting skills are pretty non-existent: if there isn't a simple
interface then I'm stuck.
I'm still here, and can do some updating, although I don't use firefox much (I
use a lot of camino and mozilla).
The best way to keep this document updated is to reference bugs that describe
prefs system changes. I rarely get updates from the developers, so I depend on
contributors to point me in the right direction.
Some feedback
1) there are also [mozilla app directory]/greprefs/*.js, which are common among
all gecko applications [except Minimo, there's a bug on that, iirc], and which are
also used to set default values.
2) in firefox, extensions can have their own default preferences set in
[extension dir]/defaults/preferences/*.js, where [extension dir] is usually
[profile]/extensions/[extension GUID], but may also be in [app
dir]/extensions/[GUID] for global installations, I think.
3) "A preferences file is a simple JavaScript file" is quite confusing. It isn't
Javascript file anymore. It used to be, that's where .js extension and some of
syntax come from. But it's not JS, it is parsed not by js but by another
(simpler) parser.
4) A link to (unless something better is put on
moz.org) and a notice about about:config
<> would be nice.
5) nit: "In the profile directory are two user pref files: prefs.js and user.js.
prefs.js is automatically generated ". A new line after the first sentence would
make the reading easier, I think. (ie. "user.js\n prefs.js") The dot separating
two sentences is not visible enough (as there are too many dots nearby :)
6) "None <a href="#filew-def-special">platform-specific</a> .js". The target
anchor does not exist in the document.
7) "Usually when the user specifically commits a preference change via user
interface such as the Preferences dialog, the application saves the change by
overwriting prefs.js". Suggesting <em>Usually</em>. For example, changing it
from about:config doesn't seem to rewrite prefs.js on current trunk firefox
build. Clicking ok in options dialog does rewrite it though. In fact the rewrite
happens *only* when nsIPrefService::savePrefFile(null) is called. Afaik,
many/most extensions don't do that.
8) "Note: <b>Note</b> preference names are case-sensitive."
9) "If you have Mozilla 1.4". Moz 1.4 or later or Firefox says
"feedback and comments here", so here is mine
I can't find out what fcc_folder_picker_mode means. I've come across it in my
thunderbird installation, while trying to debug something else. But I don't
know what it means, and there does not seem to be a canonical reference to all
these preferences anywhere in the documentation.
I'd like to see a full, maintained, list of what they all are and mean. Does
anyone know where it is?
I've recently had time to re-read this document, in the context of a new-found curiosity about Camino...
I've made some grammatical and link cleanup changes, and also moved about:config to the top, in reference to how to make changes. I think this serves the most common audience.
Here's my personal todo list:
user.js - should emphasize the advantage of being able to use comments (per #3)
I've reviewed all the comments up to #36, about the directory info being out of date. I'll look into that now.
Since the original document, I've learned enough c++ to read some of the pref loading code.
Changes:
1- added updated discussion about greprefs, application prefs.
TODO: extensions?
2- added updated list of application pref files, based on examining released versions on my Mac.
3- added further emphasis on about:config, by explaining localization features.
4- re-writing file changing sections to simply say how changing files affects the behavior. If you are a sys admin, a hacker, doing a distro, coding, etc, you get to figure out what files to hack yourself. This increases the learning curve for people who are just trying to hack *a* preference, and people hacking pref files.
(Also opens door for people interested in a specific app to write app-specific prefs docs....)
5- add discussion of hidden, default-less prefs.
I should update the file in the next few days... editing offline right now.
Please add a link to this document to the config file documentation, which seems to be at
Also, it would be nice if the documentation spoke about what config files are automatically overwritten during an automatic upgrade. I have had problems with customizing the all.js file, and then having all those changes lost the next time there was an upgrade. Using a file like AAALocal_prefs.js might work better.
If this bug is assigned to nobody@ then it shouldn't have ASSIGNED status.
app_dir is not defined anywhere in the document and is used once.
See also (probably outdated) (and what it resends to) ("this page is not complete")
(In reply to [email protected] from comment #49)
> app_dir is not defined anywhere in the document and is used once.
IIUC that value is not a preference. See also
Automatically closing all bugs that have not been updated in a while. Please reopen if this is still important to you and has not yet been corrected.
I believe that this request is not INVALID: the problem is well defined, and still exists today. OTOH fixing it might be a lot of low-priority trouble, so WONTFIX might perhaps be an acceptable resolution — one which should, however, be set (or not) by an owner or peer of the module in question, i.e. not by lowly triager me.
Reopening for review by Sheppy.
This is important and should stay open and be fixed, preferably by a volunteer.
FWIW, there exists now a "Config Descriptions" extension for Firefox and SeaMonkey (but not Thunderbird, I don't know why) which fishes the comments in the pref source files and adds those comments as an additional column in about:config. Of course it can't say anything for prefs which are undocumented in the source, but otherwise IMHO it is a must-have for the power user or developer. It might be of some help to whoever (if anyone) decides to work on this bug.
As filed, this bug is about "the pref system" not individual prefs. That is already documented, both in nsIPrefBranch.idl and on MDN. For docs on specific prefs, we should have another bug (and in many cases we shouldn't document the prefs at all).
Why I am here:
*
* "Feedback and comments to bug 158384"
Areas affected:
* "The administrator may edit the all.js† default pref file (install_directory/defaults/prefs/all.js)."
* "The administrator may add an all-companyname.js preference file (install_directory/defaults/prefs/all-companyname.js)."
Issues:
* The "install_directory/defaults/prefs/" directory mentioned does not exist by default. "install_directory/defaults/pref/" does. Is the documentation correct?
* If you copy a prefs.js from a user profile to install_directory/defaults/prefs/all-company.js or install_directory/defaults/pref/all-company.js then delete the user profile and start Thunderbird, you would expect the user preferences to be restored, but they are not.
Workaround:
* The only way I could get the preferences to be set by default is by putting prefs.js in "app_dir/defaults/profile/". This is not covered in this article.
I've updated the docs as appropriate. Despite the instruction, it's considered bad form to comment in really old bugs such as this one.
I already covered that in the "Why I am here" section in my previous comment, so perhaps that needs updating in the article too.
|
https://bugzilla.mozilla.org/show_bug.cgi?id=158384
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
Microsoft.Scripting2.Core is a namespace copy of the IP source code - based on IP 2.0 Beta 3 - just before the version that introduced an incompatibility with .NET 3.5. We are using ASP.NET with IronPython support based on the July 2007 CTP (which I think ran on IP 2.0 Alpha X). The latest drop of IP for ASP.NET was based on IP 2.0 Beta 4, which contained the .NET 3.5 incompatibility, which makes it unusable for our application. So my application has 2 copies of IP - one that runs the ASP.NET dynamic support and the other that runs the scripting support (with the 2 suffix in the namespace). It's a stupid situation, but we rely on IP for ASP.NET and that piece of technology has not been brought up to date to a working state for the .NET 3.5 framework - and our web production people rely on the software, so it's full speed ahead regardless of the technology limitation. This memory leak might be gone if we had an up-to-date IP for ASP.NET, but one never knows, so here are the last 25 lines of the heap dump. I hope this is useful.

79104368 3939 94536 System.Collections.ArrayList
7911f030 1708 95648 System.Reflection.Emit.DynamicMethod
7911f400 2999 95968 System.Reflection.Emit.SignatureHelper
109cbc9c 2516 100640 System.Linq.Expressions2.ConditionalExpression
1085c930 3181 101792 System.Scripting.SourceSpan
109c4594 3244 103808 System.Linq.Expressions2.ParameterExpression
10f63bcc 1237 110188 System.Collections.Generic.Dictionary`2+Entry[[System.Linq.Expressions2.Expression, Microsoft.Scripting2.Core],[System.Linq.Expressions2.CompilerScope+Storage, Microsoft.Scripting2.Core]][]
79135014 3 114968 System.Collections.Generic.Dictionary`2+Entry[[System.Int32, mscorlib],[System.String, mscorlib]][]
10f60a2c 2260 117520 System.Collections.Generic.Dictionary`2[[System.Linq.Expressions2.Expression, Microsoft.Scripting2.Core],[System.Object, mscorlib]]
109cfb1c 2736 120384 System.Linq.Expressions2.BinaryExpression
10f6145c 1743 125700 System.Collections.Generic.Dictionary`2+Entry[[System.Linq.Expressions2.Expression, Microsoft.Scripting2.Core],[System.Object, mscorlib]][]
109c4e4c 3164 126560 System.Linq.Expressions2.ScopeExpression
109c4274 4142 132544 System.Linq.Expressions2.VariableExpression
109caf7c 3735 134460 System.Linq.Expressions2.UnaryExpression
10f5cd24 3762 135432 System.Linq.Expressions2.StackSpiller+ChildRewriter
106fa09c 40 138784 System.Collections.Generic.Dictionary`2+Entry[[Microsoft.Scripting.SymbolId, Microsoft.Scripting],[Microsoft.Scripting.DynamicMixin+SlotInfo, Microsoft.Scripting]][]
79111038 2585 144760 System.Reflection.RuntimePropertyInfo
79110680 3086 148128 System.Signature
106f63bc 4183 150588 Microsoft.Scripting.BuiltinFunction
109cbb3c 4327 155772 System.Linq.Expressions2.MemberExpression
791336d8 37 162960 System.Collections.Generic.Dictionary`2+Entry[[System.String, mscorlib],[System.Int32, mscorlib]][]
109c320c 4726 170136 System.Linq.Expressions2.AssignmentExpression
7912f348 7171 172104 System.Collections.Generic.List`1[[System.Reflection.MethodInfo, mscorlib]]
7911f538 1708 177632 System.Reflection.Emit.DynamicILGenerator
7910ea30 15685 188220 System.RuntimeMethodHandle
109c76cc 6304 201728 System.Linq.Expressions2.Block
7912d9bc 889 204576 System.Collections.Hashtable+bucket[]
790ff734 10742 214840 System.RuntimeType
79130510 11613 223528 System.RuntimeTypeHandle[]
791208b4 16407 262512 System.Reflection.Emit.GenericFieldInfo
10f552cc 18594 297504 System.Collections.ObjectModel.ReadOnlyCollection`1[[System.Linq.Expressions2.Expression, Microsoft.Scripting2.Core]]
109c4f1c 9671 386840 System.Linq.Expressions2.MethodCallExpression
7912dd40 646 432936 System.Char[]
109c7634 16161 517152 System.Linq.Expressions2.ConstantExpression
79109778 12835 718760 System.Reflection.RuntimeMethodInfo
79135538 1494 1167624 System.Reflection.Emit.__FixupData[]
7910dd38 124487 1493844 System.RuntimeTypeHandle
7912d7c0 17031 1846316 System.Int32[]
7912dae8 11218 1877012 System.Byte[]
790fd8c4 46924 2987964 System.String
7912d8f8 134460 5074708 System.Object[]

On Wed, Feb 25, 2009 at 10:51 PM, Dino Viehland <dinov at microsoft.com> wrote:
> Your approach should be perfectly fine (it's basically what CompiledCode was designed for) and it's not clear to me what would be leaking in this scenario.
>
> One suggestion I'd have for you is to download and install windbg ().
>
> Then attach to your process which is using all the memory and type in:
>
> .loadby sos mscorwks
> !DumpHeap -stat
>
> And send back about the last 25 lines of that output. That'll give us the most common objects which are using memory in the GC heap and we might be able to track it down from there. If that's not sufficient maybe we can iterate and figure out what's going on.
>
>> -----Original Message-----
>> From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Dody Gunawinata
>> Sent: Wednesday, February 25, 2009 8:38 AM
>> To: Dino Viehland
>> Cc: Discussion of IronPython
>> Subject: Re: [IronPython] Please vote up this DLR memory leak bug
>>
>> What is the impact of caching CompiledCode on this "collectability" issue?
>>
>> I am working on a CMS that exposes functions implemented in IronPython to a template engine. So I read a directory of 40 or 50 xx.py files, load them up and compile them. This is of course a costly operation to do on every single HTTP request (and SLOW), so I compiled the source code to CompiledCode and cached the results.
>>
>> Then for every HTTP request, I retrieve these CompiledCode objects from the Cache and call Execute(scope) to turn the frozen code into real functions. So with X HTTP requests, there will be X scopes and (X * 40) functions being generated at runtime.
>>
>> Now our memory consumption is huge and keeps getting bigger until IIS gives up and recycles the process.
>>
>> Dody G.
>>
>> --
>> nomadlife.org
>> _______________________________________________
>> Users mailing list
>> Users at lists.ironpython.com
>
--
nomadlife.org
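For reference, the compile-once / execute-per-request pattern described in the quoted message looks roughly like this when driven from IronPython's own hosting API. This is only a sketch of the approach, not the poster's actual code; the file name "template_helpers.py" and the function name "render" are placeholders.

import clr
# these references are usually already loaded when running under ipy.exe
clr.AddReference("Microsoft.Scripting")
clr.AddReference("IronPython")
from IronPython.Hosting import Python

engine = Python.CreateEngine()

# compile once at application startup and cache the CompiledCode
source = engine.CreateScriptSourceFromFile("template_helpers.py")
compiled = source.Compile()

def handle_request():
    # create a fresh scope per request and re-execute the cached CompiledCode into it
    scope = engine.CreateScope()
    compiled.Execute(scope)
    render = scope.GetVariable("render")   # placeholder function defined in the script
    return render()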
|
https://mail.python.org/pipermail/ironpython-users/2009-February/009778.html
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
Riverbank Computing is pleased to announce the release of SIP v4.4 available from.

SIP is a tool for generating Python modules that wrap C or C++ libraries. It is similar to SWIG. It is used to generate PyQt and PyKDE. Full documentation is available at.

SIP is licensed under the Python License and runs on Windows, UNIX, Linux and MacOS/X. SIP requires Python v2.3 or later (SIP v3.x is available to support earlier versions of Python).

This release includes the following changes:
- support for class and mapped type templates
- support for global operators
- support for signed char, long long and unsigned long long types
- support for Python's buffer interface
- support for ellipsis in function arguments
- support for __hash__
- namespaces can now be split across Python modules.

Other features of SIP include:
- extension modules are implemented as a single binary .pyd or .so file (no Python stubs)
- support for Python new-style classes
- generated modules are quick to import, even for large libraries
- support for Qt's signal/slot mechanism
- thread support
- the ability to re-implement C++ abstract and virtual methods in Python
- the ability to define Python classes that derive from abstract C++ classes
- the ability to spread a class hierarchy across multiple Python modules
- support for C++ namespaces
- support for C++ exceptions
- support for C++ operators
- an extensible build system written in Python that supports over 50 platform/compiler combinations.

Phil
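As a rough illustration of two of the abilities listed above (deriving Python classes from abstract C++ classes and re-implementing C++ virtual methods in Python), here is a hedged sketch. It assumes a hypothetical SIP-generated module named shapes that wraps an abstract C++ class Shape with a virtual area() method; neither the module nor the class names come from the announcement.

from shapes import Shape   # hypothetical SIP-generated extension module

class Circle(Shape):
    """Python class deriving from an abstract C++ class."""
    def __init__(self, radius):
        Shape.__init__(self)
        self.radius = radius

    def area(self):
        # re-implements the C++ virtual method in Python
        return 3.14159 * self.radius * self.radius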
|
https://mail.python.org/pipermail/python-announce-list/2006-March/004814.html
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
knot 0.3.0
Knot is a simple dependency container for Python.
Knot is a small do-it-yourself (DIY) dependency container for Python.
Getting started
Unlike other existing implementations, knot does not make use of introspection. Therefore, dependencies are manually defined in a straightforward manner. The container acts as a central registry for providers and configuration settings.
Configuration settings
The container is just an ordinary dictionary with some additional methods. As a result, it is very easy to assign or retrieve data from it. Probably the most common way to assign configuration settings is passing a dict to the constructor.
from knot import Container

c = Container({'host': 'localhost', 'port': 6379})
Obviously it is also possible to add configuration settings to an existing container.
c = Container()
c['host'] = 'localhost'
c['port'] = 6379
Providers
A provider creates and returns a particular value or object. It has the ability to utilize an injected container to retrieve the necessary configuration settings and dependencies.
The container expects a provider to adhere to the following rules:
- It must be callable.
- It must accept the container as the only argument.
- It must return anything except None.
Assigning a provider to a container is easy.
def connection(c):
    from redis import Redis
    return Redis(host=c['host'], port=c['port'])

c.add_provider(connection, True)
It is also possible to use a decorator.
from knot import provider

@provider(c, True)
def connection(c):
    from redis import Redis
    return Redis(host=c['host'], port=c['port'])
The second argument in c.add_provider(connection, True) and in @provider(c, True) indicates whether or not the return value of a provider must be cached.
Retrieve what you have defined.
conn = c.provide('connection')
For convenience, you can also use the shortcut.
conn = c('connection')
Services
A service is just a provider with the cache argument set to True. Basically this means the return value is created only once.
def connection(c):
    from redis import Redis
    return Redis(host=c['host'], port=c['port'])

c.add_service(connection)
Or with a decorator.
from knot import service

@service(c)
def connection(c):
    from redis import Redis
    return Redis(host=c['host'], port=c['port'])

conn1 = c('connection')
conn2 = c('connection')
print conn1 is conn2  # True
Factories
A factory is just a provider with the cache argument set to False. Basically this means the return value is created on every call.
def urgent_job(c):
    from somewhere import Job
    connection = c('connection')
    return Job(connection=connection, queue='urgent')

c.add_factory(urgent_job)

job1 = c('urgent_job')
job1.enqueue('send_activation_mail', username='johndoe')
job2 = c('urgent_job')
job2.enqueue('send_activation_mail', username='janedoe')
print job1 is job2  # False
Or with a decorator.
from knot import factory

@factory(c)
def urgent_job(c):
    from somewhere import Job
    connection = c('connection')
    return Job(connection=connection, queue='urgent')
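Putting the pieces together, here is a small self-contained sketch of a service shared by a factory; the cache and session provider names are made up for illustration and are not part of knot itself.

from knot import Container, service, factory

c = Container({'size': 128})

@service(c)
def cache(c):
    # cached provider: created once, then shared
    return {'size': c['size'], 'data': {}}

@factory(c)
def session(c):
    # non-cached provider: created anew on every call
    return {'store': c('cache'), 'items': []}

s1 = c('session')
s2 = c('session')
print(s1 is s2)                    # False, factories are not cached
print(s1['store'] is s2['store'])  # True, the service is shared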
Installation
Install Knot with the following command:
$ pip install knot
Tests
To run the tests, install pytest first:
$ pip install pytest
Then, run the tests with the following command:
$ make test
Inspiration
Pimple ()
License
MIT, see LICENSE for more details.
- Author: Jaap Verloop
- Download URL:
- Keywords: dependency,container
- License:
The MIT License (MIT) Copyright (c) 2014 Jaap Verloop
- Intended Audience :: Developers
- License :: OSI Approved :: MIT License
- Operating System :: OS Independent
- Programming Language :: Python
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3.3
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: jaapverloop
- DOAP record: knot-0.3.0.xml
|
https://pypi.python.org/pypi/knot/0.3.0
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
Answered by:
SPCalendarView
Good afternoon everybody,
I have to customize a SharePoint calendar. I need to create a new item, and edit and delete items as well. Later, I will also work on other customizations. To solve this problem, I would like to use the SPCalendarView control, but I do not know if that control will help me get what I need.
I am working with SharePoint 2010 and Visual Studio 2010. I have never worked with that control. Since I am working on a local PC which does not have SharePoint Server installed, I cannot see SPCalendarView in the toolbox. I wonder if I need to add any DLL to my project; I have already added a reference to Microsoft.SharePoint.dll.
Please give me a clue how to start working with my problem.
Thanks for any advice or help.
Jhonny M.
Jhonny Marcelo
- Moved by Mike Walsh FIN Saturday, August 06, 2011 6:50 AM "I am working with SharePoint 2010 and Visual Studio 2010." so it's not a Pre-SP 2010; SPD 2007 question is it? (From:SharePoint - Design and Customization (pre-SharePoint 2010))
Question
Answers
Hi,
SharePoint controls are included in the Microsoft.SharePoint.WebControls namespace, which is in Microsoft.SharePoint.dll. If you have added Microsoft.SharePoint.dll successfully, one method is to create the SPCalendarView in code-behind in your SharePoint project,
like:
Using namespace:
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebControls;
protected override void CreateChildControls()
{
    base.CreateChildControls();
    AddCalendar();
}

private void AddCalendar()
{
    SPCalendarView calView = new SPCalendarView();
    calView.DataSource = GetCalendarItems();
    calView.DataBind();
    Controls.Add(calView);
}

private SPCalendarItemCollection GetCalendarItems()
{
    SPCalendarItemCollection items = new SPCalendarItemCollection();
    SPCalendarItem item = new SPCalendarItem();
    item.StartDate = DateTime.Now;
    item.EndDate = DateTime.Now.AddHours(1);
    item.hasEndDate = true;
    item.Title = "First calendar item";
    item.DisplayFormUrl = "/myurl";
    item.Description = "This is a testing item";
    item.IsAllDayEvent = false;
    item.IsRecurrence = false;
    items.Add(item);
    return items;
}
Another approach is to add the SPCalendarView control to the toolbox so that we can drag and drop it like other ASP.NET controls. But first you need to confirm that Microsoft.SharePoint.dll has been installed in the GAC. The path is "C:\WINDOWS\assembly".
If the DLL doesn't exist, you can install it by using the gacutil tool in a VS command prompt:
gacutil.exe -if "<yourfolder>/Microsoft.SharePoint.dll".
After that, you can right-click the toolbox in VS 2010 -> Choose Items; you can find the SPCalendarView control in the .NET Framework Components tab. Then click "OK", and the control will be added to the toolbox.
Hope this can help.
- Proposed as answer by SharepointDummy Tuesday, August 09, 2011 1:40 PM
- Marked as answer by Shimin Huang Monday, August 15, 2011 9:07 AM
|
https://social.msdn.microsoft.com/Forums/office/en-US/ea37b8f2-cae9-4850-a6c2-47f6c1768148/spcalendarview?forum=sharepointdevelopmentprevious
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
It was off-list but is certainly useful as a public add-on, so I reply to myself:
Hi Aurelien, don't worry. No problems.

Ironically Ken - despite all of his really very good userland Xorg and Mesa work for a decade - never tested SNA on the last official Intel Xorg ddx, which is still 2.21.5 or something (rather than 2.99). In winter I created a test ddx for Xorg running on Hipster (the Xorg ABI matches without -ignoreABI). If I find it I will gladly upload it, and then you can test and see what I mean. Then you will witness that SNA is rubbish on Solaris kernels (and it appears that even Oracle doesn't know how to make it really work on Solaris). Maybe some kernel-performance engineer could debug it with dtrace to find out what's eating up all the cycles. But UXA works fine, and the performance gain of SNA over UXA isn't that large, even on Linux where it works fine.

As for qt5.8: Something is different on your (probably more modern) Hipster installation. But despite putting all, then some, then specific ones, then none of the /usr/gnu/bin tools in front of /usr/bin (via $PATH plus permutations of lofs mounts) I always ended in and never managed to isolate what exactly causes it (even running gmake -d didn't show me why it is trapped in re-iteration loops). But the point when I finally had enough was after qt5.8 was finally built and vbox linked against it, and then, rather than a window coming up, it ended with "gtk_menu_attach_to_widget(): menu already attached to GtkMenuItem".

We can deal with qt5.8 later; as of now it doesn't have any release stability/quality whatsoever, neither for building it nor for using it. But as said: qt4.8 *is* in the repo plus is *rock-stable*. And it was braindead easy to get it working in Vbox 5 again:

/code/ALP__KMS_MATE__checkout20160911sun/oi-userland/components/virtualbox5_qt4$ cat patches/enforce_qt4.patch
gdiff -Nub VirtualBox-5.1.6/Config.kmk VirtualBox-5.1.6/Config.kmk
--- VirtualBox-5.1.6/Config.kmk 2016-09-16 11:41:22.013702279 +0000
+++ VirtualBox-5.1.6/Config.kmk 2016-09-16 12:09:50.428711465 +0000
@@ -591,11 +591,6 @@
 VBOX_WITH_WEBSERVICES_SSL = 1
 # The Qt GUI.
 VBOX_WITH_QTGUI = 1
-if1of ($(KBUILD_TARGET), linux win solaris)
- VBOX_WITH_QTGUI_V5 = 1 # r=bird: s/VBOX_WITH_QTGUI_V5/VBOX_WITH_QT5/g - our we released version 5.0 of the VirtualBox GUI last year, remember?
-else if "$(KBUILD_TARGET)" == "darwin" && $(KBUILD_HOST_VERSION_MAJOR) >= 13 # (OS X 10.9 Mavericks)
- VBOX_WITH_QTGUI_V5 = 1
-endif
 # Indicates the Qt is Cocoa based on the Mac.
 ifeq ($(KBUILD_TARGET),darwin)
 VBOX_WITH_COCOA_QT = 1
@@ -5678,12 +5673,12 @@
 ifndef VBOX_WITH_QTGUI_V5
 VBOX_PATH_QT := $(lastword $(sort $(wildcard $(KBUILD_DEVTOOLS_TRG)/qt/v4*)))
 else
- VBOX_PATH_QT := $(lastword $(sort $(wildcard $(KBUILD_DEVTOOLS_TRG)/qt/v5*)))
+ VBOX_PATH_QT := $(lastword $(sort $(wildcard $(KBUILD_DEVTOOLS_TRG)/qt/v4*)))
 endif
 ifeq ($(VBOX_PATH_QT),)
 ifneq ($(wildcard /Library/Frameworks/QtCore.framework),)
 # Using the global installation (for OSE).
- VBOX_PATH_QT ?= /usr
+ VBOX_PATH_QT ?= /usr/lib/qt/4.8
 VBOX_PATH_QT_FRAMEWORKS ?= /Library/Frameworks
 endif
 endif
@@ -5730,22 +5725,9 @@
 QtCore QtGui QtWidgets QtPrintSupport QtMacExtras \
 $(if $(VBOX_GUI_USE_QGL),QtOpenGL,)
 else
 if1of ($(KBUILD_TARGET), linux freebsd netbsd openbsd solaris win)
- VBOX_QT_MOD += \
- Qt5Core Qt5Gui Qt5Widgets Qt5PrintSupport \
- $(if $(VBOX_GUI_USE_QGL),Qt5OpenGL,)
- if1of ($(KBUILD_TARGET), linux freebsd netbsd openbsd solaris)
- VBOX_QT_MOD += \
- Qt5DBus Qt5XcbQpa Qt5X11Extras
- # legacy libraries required to be installed on EL5
- VBOX_QT_LEGACY_LIBS = \
- $(if $(VBOX_LD_HAS_LIBXCB),,libxcb.so.1 libX11.so.6 libX11-xcb.so.1)
- VBOX_QT_PLUGINS = plugins/platforms/libqxcb.so
- else ifeq ($(KBUILD_TARGET), win)
- VBOX_QT_MOD += \
- Qt5WinExtras
- endif # win
- endif # linux freebsd netbsd openbsd solaris win
- endif # VBOX_WITH_QTGUI_V5
+ VBOX_QT_MOD = QtCore QtGui $(if $(VBOX_GUI_USE_QGL),QtOpenGL,)
+endif # linux freebsd netbsd openbsd solaris win
+endif # VBOX_WITH_QTGUI_V5
 VBOX_QT_MOD_NAMES = $(foreach qtmod,$(VBOX_QT_MOD),$(qtmod)$(VBOX_QT_INFIX))
@@ -5770,13 -5752,13 @@
 TEMPLATE_VBOXQTGUIEXE_LRCTOOL = QT4
 TEMPLATE_VBOXQTGUIEXE_SDKS = QT4
 else # VBOX_WITH_QTGUI_V5
- TEMPLATE_VBOXQTGUIEXE_USES = qt5
- TEMPLATE_VBOXQTGUIEXE_QTTOOL = QT5
- TEMPLATE_VBOXQTGUIEXE_MOCTOOL = QT5
- TEMPLATE_VBOXQTGUIEXE_UICTOOL = QT5
- TEMPLATE_VBOXQTGUIEXE_RCCTOOL = QT5
- TEMPLATE_VBOXQTGUIEXE_LRCTOOL = QT5
- TEMPLATE_VBOXQTGUIEXE_SDKS = QT5
+ TEMPLATE_VBOXQTGUIEXE_USES = qt4
+ TEMPLATE_VBOXQTGUIEXE_QTTOOL = QT4
+ TEMPLATE_VBOXQTGUIEXE_MOCTOOL = QT4
+ TEMPLATE_VBOXQTGUIEXE_UICTOOL = QT4
+ TEMPLATE_VBOXQTGUIEXE_RCCTOOL = QT4
+ TEMPLATE_VBOXQTGUIEXE_LRCTOOL = QT4
+ TEMPLATE_VBOXQTGUIEXE_SDKS = QT4
 endif # VBOX_WITH_QTGUI_V5
 TEMPLATE_VBOXQTGUIEXE_QT_INFIX = $(VBOX_QT_INFIX)
 TEMPLATE_VBOXQTGUIEXE_DEFS = IN_RING3 QT_NO_DEBUG QT_THREAD_SUPPORT QT_SHARED HAVE_CONFIG_H $(ARCH_BITS_DEFS)

And in /code/ALP__KMS_MATE__checkout20160911sun/oi-userland/components/virtualbox5_qt4/Makefile:

CONFIGURE_OPTIONS = --enable-pulse
CONFIGURE_OPTIONS += --with-qt-dir=/usr/lib/qt/4.8
CONFIGURE_OPTIONS += --enable-qt4
CONFIGURE_OPTIONS += --build-libxml2

The resulting Vbox works fine and now also emulates USB3 xhci controller hardware :) The VirtualBox guest additions ISO also gets compiled. The extension pack necessary for USB does get built, too :) But I'm still messing with IPS, and after checkinstall was fixed I need to fix postinstall. Jim's one-year-old, otherwise nice script doesn't help anymore and requires modifications. Today the Vbox 5.1.x will be released as a test package and src diff for hipster-userland.

So, nothing wrong, just financial DISASTER here and all the terror of everybody sending me bills and asking for money. But before I get me a job I wanted to release the following:
Vbox 5 (plus Jim Klimov's script)
i915 on <=gen5: agpgart _enhancement_
compiz Mate integration diff
plugin-wrapper
((((qt5 64bit))))
((((xhci Sol11 bins HOWTO))))

Regards, %martin
_______________________________________________ oi-dev mailing list [email protected]
|
https://www.mail-archive.com/[email protected]/msg04689.html
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
Red Hat Bugzilla – Bug 156294
ppt exported from sxi looks better in oo1.9.96 than the sxi
Last modified: 2007-11-30 17:11:05 EST
Open the sxi file attached to bug 156255 and take a look at the text on slide
85, notice how it doesn't fit on the slide.
Now open the ppt file attached to this bug, this was exported from the sxi file
using ooo 1.1.?, and again go to slide 85; now it does fit. The same happens on a
lot of other slides.
I would expect the sxi import of oo2.0 to be flawless or at least better than the
ppt import.
Created attachment 113798 [details]
PPt file which does import correctly
Text seems to fit fine on page 85 with 1.9.112
Caolan,
You probably tried the file attached to this bug, which is a ppt exported with
ooo 1.1.x, which is attached as an example to show that a ppt exported file
works better than the original sxi file.
If you open the sxi file from which the ppt was exported it still doesn't fit,
the sxi file is attached to bug 156255. Also see my initial comment, which
states: "Open the sxi file attached to bug 156255"
Created attachment 116802 [details]
original sxw
Created attachment 116803 [details]
screenshot on opening the original sxw on 1.9.117-1
How's that? That's the original .sxw opened in 1.9.117-1 in Rawhide, which will
be updated to fc4 soon
Looks better,
When I find the time I'll do a yum update on my rawhide partition, verify and close.
I'll leave as needinfo until then.
Can we close this then?
I'll risk assuming that it's acceptable now.
I'm sorry, but with openoffice.org-impress-1.9.122-3.2.0.fc5 from Rawhide this
bug still happens (or is back again)
I noticed a couple of days ago that this bug has changed. Now the sxi / ppt
import are consistent and both don't fit the page anymore :|
Hans, please attach a screenshot too. Are you able to test with OOo 2.0 in FC4
updates?
Created attachment 121830 [details]
screen shot of the sxi opened at slide 85
Sorry, I'm running rawhide on both my systems not FC4, my current ooo version
is:
2.0.1-0.142.1.2
Created attachment 121831 [details]
screenshot of ppt, which was exported with ooo 1.x from the sxi
Notice that the ppt is back as it should be once again, some versions back the
ppt had the same problem as the sxi but now the ppt "works" again.
On a related note the original presentation was created in ppt then migrated to
ooo. All later editing was done in ooo, the ppt export was to be able to show
it on windows computers.
let's move this upstream for some insight from the impress developers as it
affects upstream version,
|
https://bugzilla.redhat.com/show_bug.cgi?id=156294
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
bling luxury black cell phone case for iphone 4 4s or 5 ipod touch 5 with crystal rhinestones and purple flower[JCZL DIY Shop]
Unit Price:
US $9.90
/
piece
Description
All the cell phone cases handmade with high-quality crystal...
|
http://www.aliexpress.com/item-img/bling-luxury-black-cell-phone-case-for-iphone-4-4s-or-5-ipod-touch-5-with/834334073.html
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
August 28 2002
Public Comment Invited For Second Level Domain Proposal
InternetNZ (The Internet Society of New Zealand Inc) has begun public consultation on a second proposal to create a new second level domain in New Zealand. If successful, the proposal from the New Zealand Bankers Association for the creation of the ".bank.nz" would only be the second new Second Level Domain created in New Zealand since 1996. The Banker's Association proposal was received as the proposal from the New Zealand Maori Internet Society to create ".maori.nz" reached a successful conclusion.
The New Zealand Banker's Association proposes that the Second Level Domain be created as a 'moderated' or restricted Second Level Domain which is open only to banks registered under the Reserve Bank of New Zealand Act 1989. If successful the new domain, ".bank.nz", would join four other moderated domains in the New Zealand namespace: ".govt.nz", ".mil.nz", ".cri.nz", and ".iwi.nz".
InternetNZ issued a formal "Request for Discussion" on Monday which invites all interested people to take part in the first round of discussion and debate. "The process involves extensive public consultation", said the Society's Executive Director Sue Leader, "and the discussion list is already running hot."
"The Banker's Association originally proposed the creation of ".bank.nz" in late 2000, and the proposal failed to meet the threshold of support in the straw poll by a narrow margin", Leader said. "The rules state that another application cannot be made for a full year, and we understand that the Banker's Association consulted their membership before putting forward the new proposal".
The public can join the list by going to. The next stage is a straw poll of the Internet community in New Zealand which will take place in 60-90 days. All New Zealanders are eligible to vote in the non-binding straw poll. The application process can be found on the Society's website at:.
The Council of InternetNZ is expected to make an announcement for the start date for the new second level domain ".maori.nz" after its August 30 meeting.
Ends
|
http://www.scoop.co.nz/stories/SC0208/S00043.htm
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
keep your points and request that this question be deleted.
.NET does not have any native classes or namespaces that allow you to access SNMP directly (the System.Management classes only use WMI). Go to microsoft.com and search for SNMP there. They have some tutorials and such, but it's all for C++.
The problem with SNMP is that it requires DCOM objects, which are basically distributed DLLs. You can try to use DllImport("snmp32.dll") and then create your own wrappers, but the SNMP DLLs have their own memory management and whatnot, so even the most basic queries (get IP address) require a lot of effort.
In short, if you really want to do SNMP, use C++ or hope that MS includes SNMP in .NET 2.0.
|
https://www.experts-exchange.com/questions/21191193/c-WMI-SNMP-retrieve-network-printers-switches-unix.html
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
# # + allow. + Added initial DIME support + SOAP Packager package/unpackage now take in a context variable as input because DIME needs to know what version of SOAP you are using. + Refactored and rewrote README + Renamed SOAP::Schema->stub() to SOAP::Schema->generate_stub. This populates a private variable called C<stub> with the autogenerate. 0.65-beta1 Mon Oct 18 15:55:00 2004 + Made SOAP::Serializer->readable actually produce readable XML! ! Fixed bug 747312 - Made XML Schema 2001 the default schema + Changed typelookup of base64 to base64Binary to conform with 2001 schema + Added support for [almost] all XML Schema 2001 built-in datatypes + Added register_ns subroutine to SOAP::Serializer + Added find_prefix subroutine to SOAP::Serializer + Added use_prefix subroutine to SOAP::Serializer + Added a necessary set of initial. + To better segment SOAP::Lite documentation, many modules have been added that contain documentation only. For those who loved the old documentation, it can be found in lib/OldDocs/SOAP/*.pm and lib/OldDocs/SOAP/Transport/*.pm ! Fix a bug in which doclit style arrays were not being deserialized properly. IOW, Repeated elements were not made into an array, only the last occurring value was recorded. + Added the ability for a SOAP Client to gain direct access to the HTTP::Request and HTTP::Response objects. + Changed default envelope namespace prefix from SOAP-ENV to soap + Changed default encoding namespace prefix from SOAP-ENC to soapenc + Reachitected MIME support to decouple MIME layer from transport layer. This only impacts the HTTP layer since that is the only transport in SOAP::Lite that supports attachments. + Creation of SOAP::Packager - decoupled in an extensible way the packaging of parts (a.k.a. attachments). This is used for encoding and decoding. This enables a more seemless addition of DIME support. Changes were made throughout SOAP::Lite to accomodate this functionality. - a call "context" was added to SOAP::Server and SOAP::Deserializer so that those classes could have access to the global SOAP::Packager instance - deprecated many function calls and packages having to do with attachment support - fixed several of the SOAP::*::new() methods so that they don't inadvertantly reinitialize themselves and destroy precious context information in the process + Gave developers direct access to MIME::Parser object so that they can optimize its use. See the following URL to optimize it for memory consumption, parsing speed, disk utilization, etc: + Added a context variable to SOAP::Serializer and SOAP::Deserializer so that they have mechanisms for getting instance data from the calling context. The value of context is either SOAP::Lite or SOAP::Server depending upon the role the service is playing in the transaction. Contexts are initialized when a call is made, and destroyed when a call is completed. + Greater than character '>' has been added to list of characters that will be automatically escaped. This is not a requirement by the XML spec, it is a MAY, but I am doing it after seeing a minority of people report a compatibility problem. - Removed deprecated methods: SOAP::Serializer::namespace and encodingspace + Added SOAP::Serializer::encodingStyle method which allows users to set the URI for default encodingStyle. + Added initial support for literal encoding style. 
EXPERIMENTAL + Added some true constant values representing various SOAP namespace URIs + Added SOAP::Constants::SUPPORTED_ENCODING_STYLES for better tracking of the set of encoding styles that the toolkit can [de]serialize ! Fixed bug 840172 - "Makefile.PL --noprompt flag broken," now fixed + Updated SOAP 1.2 URIs to the latest (TODO - maintain support for older ones) + Added HTTPS support for HTTP::Server Daemon class - thanks to Nils Sowen for this contribution ----------------------------------------------------------------------- PREVIOUS RELEASES ----------------------------------------------------------------------- 0.60 Mon Aug 18 12:10:27 2003 + Merged SOAP::MIME into SOAP::Lite's core + Cleaned up the UI for the Makefile.PL script - it now detects and indicated whether certain Perl modules have been detected. The table displaying installation preferences has been substantially cleaned up, the code is much more modular and relies on a simple data structure representing potential module dependencies. + Made the Makefile.PL script iterative - meaning, the user will be continually be presented with their preferences until they explicity choose to accept them and continue (bug 747295) + Differentiate between xsd:int and xsd:long to fix interoperability bug with Java Web services ! Fixed MIME interoperability bug with Apache Axis - Axis' MIME parser requires that MIME boundaries are terminiated by a CRLF character where the MIME::Tools package only outputs a CR character. A patch was integrated into the MIME::Tools module which allows SOAP::Lite to specify its own MIME Boundary delimiter !) ! Added item in TROUBLESHOOTING section explaining that there is a bug in Perl 5.8 that prevents +autodispatch from working properly. The workaround is to use dispatch_from instead (bug 747290) ! Fixed warning when autodispatched call has no parameters (bug 747286) ! Fixed warning when empty SOAPAction specified (bug 747278) ! Turned off HTTP keep alive patch by default, however users can now turn on the patch by setting the constant PATCH_HTTP_KEEPALIVE to 1 (bug 747281) ! Removed dependency on the URI module for non-HTTP transports (bug 747306) 0.55 Mon Apr 15 22:20:39 2002 ! fixed security vulnerability with fully qualified method names (thanks to Randal Schwartz, Ilya Martynov and many others) ! fixed problem with TCP transport and SSL (thanks to Chris Hurd) ! fixed TCP transport to specify correct length with utf8 strings (thanks to Robin Fuller) ! fixed incorrect encoding when parameters list includes undefined values (thanks to Chris Radcliff) ! updated 'xmlsoap' prefix (thanks to Pierre Denis) ! updated MIME parser to accept messages that start with 'From' (thanks to Chris Davies) + added check for TCP transport on Mac (thanks to Robin Fuller) + added check for shutdown() method on AIX (thanks to Jos Clijmans) + added check for blocking() method in TCP transport (thanks to Jos Clijmans) + optimized parsing strings with entity encoding (thanks to Mathieu Longtin) + added check for entity size for CGI transport ($SOAP::Constant::MAX_CONTENT_SIZE) (thanks to J. Klunder) + added example (google.pl) + updated tests and examples with new endpoints 0.52 Mon Oct 18 21:20:19 2001 ! fixed content_type returned under mod_perl with 500 SERVER ERROR status (thanks to Geoffrey Young and Scott Hutton) ! fixed problem with multiple bindings in WSDL file generated by MS SOAP toolkit ! fixed handling of boolean type in 1999 Schema and hexBinary type in 2001 Schema ! 
fixed warning and problem with WOULDBLOCK state in IO::SessionData (thanks to Marty Pauley) ! fixed miscalculation in position within sparse arrays ! fixed problem with URI when methods of SOAP::Data are called in certain order (thanks to Taras Shkvarchuk) ! fixed CRLF problem in CGI module on Windows platform under IIS (thanks to Werner Ackerl) ! fixed hex and hexBinary datatypes generation ! fixed content-length calculation when payload has multibyte utf8 characters ! fixed problem with XMLRPC and nested packages with more than two levels (thanks to Leonid Gernovski) ! fixed (again) memory leak in SOAP::Parser (thanks to Craig Johnston) + updated Jabber interface for new format of 'use Net::Jabber ...' does not work with Net::Jabber 1.022 and later + updated XMLRPC::Lite to not detect value as float for 'NaN' and 'INF' strings + updated XMLRPC::Lite to return 200OK on errors + updated XMLRPC do not specify charset in content-type + updated Makefile.PL to allow configuration from command line (thanks to Dana Powers) + updated publishing API tests for UDDI server to call a new server (GLUE) + changed close() to shutdown() in Daemon transport (thanks to Sean Meisner) + added support for HTTP_proxy and HTTP_proxy_* in WSDL access (thanks to Stephen Shortland) + added XMLRPC support in COM interface. XMLRPC client and server can be created using COM interface + added DO_NOT_PROCESS_XML_IN_MIME option for MIME parts with text/xml content type + modified deserialization algorithm that allows to properly deserialize SOAP1.2 messages when default is set to SOAP1.1 and vice versa + added fault in XMLRPC::Lite for incorrect datatypes specified by user (thanks to Paul Prescod) + added option to not generate XML declaration + added encoding for ']]>' (thanks to Matt Sergeant and James Amrhein) + added '\r' => '
' conversion in strings + added complaint on incorrect simple types + added byNameOrOrder and byName functions for SOAP::Server::Parameters (thanks to Matt Stum) + added handling relative locations in <import> in WSDL + added stringification of SOAP::Fault (thanks to Tim Jenness) + added documentation for SSL certificate authentication + added more examples (terraserver.pl, joke.pl, weblog.pl) + added more tests 0.51 Tue Jul 18 15:15:14 2001 ! fixed memory leak in SOAP::Parser (thanks to Ryan Adams and Michael Brown) ! fixed skipping undef elements in arrays under Perl 5.005 (thanks to Arne Georg Gleditsch) ! fixed warning from undefined type in out parameters (thanks to Jrg Ziefle) ! fixed autovivification warnings on 5.7.x (thanks to Igor Pechersky) ! fixed tests on 64bit systems (thanks to Gurusamy Sarathy) ! fixed installation problem with long filenames on MacOS (thanks to Alex Harper) ! fixed POP3 server (thanks to Kevin Hutchinson) ! number of fixes in XMLRPC::Lite o fixed <string> requirement (thanks to Matthew Krenzer and Dana Powers) o fixed empty slot skipping (thanks to Jon Udell) o fixed serialization of "0"/""/undef values (thanks to Michael E. Gage) o fixed autodispatch (thanks to Craig Kelley) + added support for SOAP 1.2 (spec is still in draft, implementation is subject to change) + added extended array support (only in deserializer) sparse arrays multidimensional arrays (deserialized as array of arrays) partially transmitted arrays + modified XML::Parser::Lite to work on Perl 5.005 (thanks to John Gotts) fixed handling empty attributes as undef fixed minors (thanks to Duncan Cameron) + modified deserializer to work with different schemas (1999/2001) + added JABBER transport + added MQ transport + added mod_xmlrpc transport (Apache::XMLRPC::Lite) + added TCP over SSL transport + added non-blocking TCP multiserver + included FastCGI transport (thanks to Marko Asplund) + added support for APOP authentication in POP3 transport + added Encoding parameter for MAILTO transport (to choose base64/binary) + added 'autoresult' option (thanks to Mathieu Longtin) + added support for import directive in WSDL + added support for short (tModel) WSDL service descriptions + added support for multiple services/ports and allowed non-SOAP bindings in WSDL + added full search example UDDI->WSDL->SOAP (fullsearch.pl) + added charset in response message for HTTP transport + modified SOAPsh/XMLRPCsh to return all parameters (thanks to Chris Davies) + modified dispatch for XMLRPC server to work exactly as for SOAP server examples included in examples/XMLRPC directory + added example with Inline::C module (inline.daemon). Dispatch to C, C++, assembler, Java, Python and Tcl :). Thanks to Brian Ingerson for his Inline module. + all transport are available for both SOAP::Lite and XMLRPC::Lite: HTTP (daemon, CGI, mod_perl), SMTP/POP3, TCP, IO, JABBER, MQ + updated INCOMPATIBILITY section in README file + tested on Perl 5.00503, 5.6.0, 5.6.1, 5.7.1 and 5.7.2 + added SOAP Cookbook () + added server scripts for MQ and JABBER transports + added roundtrip example for JABBER transport + updated documentation and added new examples + added more tests (more than 700 for now) 0.50 Wed Apr 18 11:45:14 2001 ! fixed tests on Windows platform ! fixed authInfo in UDDI publishing interface ! fixed mod_soap (Apache::SOAP) on Perl 5.005/5.004 ! fixed namespace prefix on arrays of arrays ! 
modified Content-encoding from 'compress' to 'deflate' + added XML::Parser::Lite, regexp-based XML parser used automatically when XML::Parser is not available + added examples of custom serialization and deserialization (XML::DOM) + added XMLRPC::Lite (XMLRPC client and server interface) all transports and features of SOAP::Lite should be available + added XMLRPC interactive shell (XMLRPCsh.pl) + added dispatching based on URI and SOAPAction (dispatch_with) + added dispatching to object (in addition to class/method) + added dispatch from specific class(es) (dispatch_from) + added limited support for mustUnderstand and actor attributes + added SOAP::Fault class for customization of returning Fault message + added charset in HTTP header for requests + added check for namespace and types resolving + added namespaces declarations from WSDL interface + added INCOMPATIBILITY section in README file + added live tests/examples for UDDI publishing interface + added live tests/examples for basic authentication + added XMLRPC server code that validates with Userland's validator + added more examples, tests and documentation 0.47 Wed Feb 21 17:11:12 2001 ! fixed lack of parameter in MAILTO transport ! fixed minimal version of COM interface to not require absent modules + added compression for HTTP transport + added mod_soap interface, add SOAP server functionality with couple of lines in .htaccess or httpd.conf file + added proper serialization of circular multiple references + significantly redesigned handling types and URIs ! incompatibilities with ApacheSOAP clients may occur + added handling PIPE and INT signals in Daemon server implementation + changed return from autodispatched calls: result() in scalar context and paramsall() in list context + redesigned tests and split on core and optional for smooth CPAN installation + added examples for cookie-based authorization + added examples in C# and PerlScript for COM interface + added more documentation for COM interface + updated documentation and added new examples 0.46 Wed Jan 31 16:30:24 2001 ! fixed SOAP:: prefix with SOAP::Lite objects ! fixed documentation installation on Unix ! changed interface of schema() method. Use service() instead + added COM interface single dll (standalone or minimal version, downloadable separately) doesn't require ROPE.dll, MSXML.dll or listener.asp tested on Windows 98/2K, and should work on Windows 9x/Me/NT/2K ASP and daemon server implementations examples in VB/VBS, Excel/VBA, JavaScript, Perl and ASP + added parsing multipart/form-data SOAP server can accept SOAP requests directly from web form examples are provided (examples/forms/*) + added Map type for hash encoding. Tested with ApacheSOAP + added function that maps classes to URI (maptype) + allowed multiple ports in WSDL + tested object interoperability with Apache SOAP + optimized internal functions 0.45 Tue Jan 16 00:38:04 2001 ! 
fixed interoperability problem with incorrect Array prefix for Apache SOAP + added interoperability tests for Apache SOAP + added interoperability tests with MS SOAP, 4s4c and Lucin implementations + added attachment parsing (singlepart/multipart MIME) Content-ID and Content-Location are supported text/xml fragments are supported and parsed all implementations support MIME encoded messages + added IO server implementation (for pipes, mail handlers, FTP and file processing) + added FTP client implementation + added global settings, shareable between objects + allowed empty URI and non-prefixed method (documentation included) + added tests for xml, xml with headers, single and multipart MIME + updated documentation and added examples + more that 300 tests in test suite 0.44 Tue Dec 12 23:52:12 2000 ! fixed mod_perl server to return '500 Server Error' in case of error ! fixed CGI server to work under PerlIS and PerlEx (thanks to Murray Nesbitt) + tested publishing API for UDDI::Lite, examples provided (thanks to Petr Janata for access to UDDI server and provided help) + added bi-directional TCP client/server, examples and tests provided + enabled de/serializer overloading on server side (in addition to client) + added optimization for objects-by-reference + added ForkingDaemon server implementation (thanks to Peter Fraenkel) + added SOAP::Custom::XML for XML processing, examples and tests provided + added SOAP::Test as simple test framework + added documentation for UDDI publishing API + redesigned examples and tests (~240 tests for now) 0.43 Tue Nov 28 01:47:02 2000 ! fixed bug in UDDI interface that made UDDI client almost useless ! fixed Makefile.PL ! tests confirmed that memory leak is gone + changed syntax for UDDI client to more flexible/convenient + added limited support for WSDL schemas. Dynamic and stub access supported + added script for stub generation (stubmaker.pl) + optimized code on server side + object interface for SOAP, UDDI and schemas are supported consistently + allowed manipulation of method's attributes and namespaces + added attributes encoding ('&', '<' and '"' are encoded) + updated documentation (thanks to Robert Barta who basically did this work) + added more examples and tests (154 for now) 0.42 Tue Nov 14 23:14:18 2000 + added UDDI client (UDDI::Lite) with documentation + added M-POST functionality in HTTP::Client + added redirect (3??) functionality in HTTP::Client + added session cache for M-POSTs and redirects + added conversion of all objects to o-b-r in parameters + changed passing envelope into method + allowed \x0d and \x0a in strings (will not do base64 encode) + added die with object that allows to specify complex Fault detail + optimized XML encoding + allowed function call with autodispatch + improved syntax for 'use SOAP::Lite' + added soap.tcp example for TCP server implementation + added tests with Microsoft implementation + added documentation and tests (145 for now) 0.41 Tue Oct 31 01:24:51 2000 ! fixed memory leak on server side ! fixed die on absence of HTTP::* modules on server side ! 
fixed working with keep-alive connections (added test with Xmethods) + changed autotyping from double to float + added support for proxy authorization (thanks to Murray Nesbitt) + added TCP client/server implementation + added benchmark for all implementations except smtp/pop3 + added SOAP::Trace for detail logging on client/server side + added examples/tests for Apache::Registry implementations + added more examples, documentation and tests (127 for now) 0.40 Sun Oct 15 18:20:55 2000 ! fixed die in mailto: protocol if you don't have URI::URL installed ! fixed misbehavior on Mac platform (thanks to Carl K. Cunningham) + added default namespace processing [xmlns] (thanks to Petr Janata) + added objects-by-reference, simple garbage collection and activation + added full access to envelope on server side + added versionMismatch reaction + added local: protocol for local binding without any transport + added examples for objects-by-reference: persistent/session iterators and chat (40 lines on server and 25 lines on client side) 0.39 Sun Oct 8 22:55:20 2000 ! fixed incompatibility with Perl 5.005 + added interactive Makefile.PL for CPAN installation 0.38 Thu Oct 5 22:06:20 2000 ! fixed namespace for base64 encoding ! fixed security problem on server side, upgrade is highly recommended + added HTTPS/SSL support + added SMTP client implementation + added POP3 server implementation + added support for Basic/Digest server authentication + added support for header(s) on client/server side with SOAP::Header + added Array and SOAPStruct for interoperability with ApacheSOAP + added generic class for server support SOAP::Server + added Actor attribute + added more examples, documentation and tests (88 for now) 0.36 Sun Sep 24 20:12:10 2000 ! fixed output parameters autobinding + added mod_perl server implementation + added recognizing all simple types mentioned in specification + added support for 'hex' type + added more documentation (twice as much as before) + added more tests (74 for now) 0.35 Sun Sep 17 23:57:10 2000 ! fixed minors (Response instead of Respond, server will map client's URI) + cleaned HTTP::Server internals (will go to SOAP::Server in the future) + test.pl won't abort on transport errors. Failed test will be skipped + added daemon server implementation + added cgi/daemon server implementation examples + added deserialization into blessed reference + added dynamic/static class/method binding + added output parameters matching based on signature (name/type) + added real object transferring back and forth (see example of Chatbot::Eliza, fixed for CODE references) + added more interoperability with on_action on client and server side + added new events (on_action, on_fault, on_nonserialized) + added global class settings with 'use SOAP::Lite ...' + added code for returning application errors on server + added autodispatch + added SOAP prefix to method calls + added more documentation + added more tests (54 for now) + added more examples (Chatbot::Eliza, My::PingPong) 0.32 Sun Sep 10 23:27:10 2000 ! fixed warnings with -w ! fixed blessed reference serialization. 
Assigned type has top priority + added working with current node in SOAP::SOM + SOAP::SOM::valueof returns nodeset + SOAP::SOM::match returns boolean in boolean context + added raw xml accepting and output + added UserAgent parameters to SOAP::Transport (understands timeout) + added better diagnostic on transport errors in test.pl + added 'method', 'fault', 'freeform' types of Envelope + added server implementation + added CGI interface to server implementation + added My::Examples.pm as example of loaded class for SOAP server + added more tests (47 for now) 0.31 Wed Sep 6 00:36:15 2000 + added expressions to SOAP::SOM->match method + added deserialization of circular references + added tests for deserialization + added documentation 0.3 Mon Sep 4 00:59:04 2000 + first public beta version + added live SOAP calls + added test suite (25 tests) + added documentation + added interactive shell (SOAPsh.pl) 0.2 Mon Aug 24 19:34:24 2000 - next stable version; works with public test servers 0.1 Mon Aug 11 23:12:02 2000 - first version; serialization part only
|
http://opensource.apple.com/source/CPANInternal/CPANInternal-62/SOAP-Lite_new/Changes
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
package org.apache.shiro.env;

/**
 * Exception thrown when attempting to acquire an object of a required type and that object does not equal, extend, or
 * implement a specified {@code Class}.
 *
 * @since 1.2
 */
public class RequiredTypeException extends EnvironmentException {

    public RequiredTypeException(String message) {
        super(message);
    }

    public RequiredTypeException(String message, Throwable cause) {
        super(message, cause);
    }
}
|
http://shiro.apache.org/static/1.2.2/xref/org/apache/shiro/env/RequiredTypeException.html
|
CC-MAIN-2016-40
|
en
|
refinedweb
|
, "Quick Start for Using Semantic Data"
Section 1.10, "Semantic Data Examples (PL/SQL and Java)"
Section 1.11, .
Figure 1-2 Inferencing

SEM_M_ID NUMBER,  -- Model ID
RDF_S_ID NUMBER,  -- Subject value ID
RDF_P_ID NUMBER,  -- Property value ID
RDF_O_ID NUMBER)  -- Object value ID
The SDO_RDF_TRIPLE_S type has the following methods that retrieve a triple or a part (subject, property, or object) of a triple:
GET_TRIPLE()   RETURNS SDO_RDF_TRIPLE
GET_SUBJECT()  RETURNS VARCHAR2
GET_PROPERTY() RETURNS VARCHAR2
GET_OBJECT()   RETURNS CLOB
Example 1-5 shows the SDO_RDF_TRIPLE_S methods.
Example 1-5 SDO_RDF_TRIPLE_S Methods

Example 1-6 uses the first constructor format to insert a triple.
The following constructor formats are available for inserting triples referring to blank nodes into a model table. The only difference is that in the second format the data type for the fourth attribute is CLOB, to accommodate very long literals.
SDO_RDF_TRIPLE_S (
  model_name VARCHAR2, -- Model name
  sub_or_bn  VARCHAR2, -- Subject or blank node
  property   VARCHAR2, -- Property
  obj_or_bn  VARCHAR2, -- Object or blank node
  bn_m_id    NUMBER)   -- ID of the model from which to reuse the blank node
  RETURN SELF;

SDO_RDF_TRIPLE_S (
  model_name VARCHAR2, -- Model name
  sub_or_bn  VARCHAR2, -- Subject or blank node
  property   VARCHAR2, -- Property
  object     CLOB,     -- Object
  bn_m_id    NUMBER)   -- ID of the model from which to reuse the blank node
  RETURN SELF;
If the value of
bn_m_id is positive, it must be the same as the model ID of the target model.
Example 1-7 uses the first constructor format to insert a triple that reuses a blank node for the subject.

The following default namespaces (namespace_id and namespace_val attributes) are used by the SEM_MATCH table function:

Hints used for influencing the results of queries include GET_CANON_VALUE(<set of aliases for variables>), which ensures that the values returned for the referenced variables will be their canonical lexical forms. These hints have the same format and basic meaning as hints in SQL statements, which are explained in Oracle Database SQL Language Reference.
Example 1-9 shows the HINT0 option used in a SEM_MATCH query.
INF_ONLY=T queries only the entailed graph for the specified models and rulebases.
Example 1-8 selects all grandfathers (grandparents who are male) and their grandchildren from the family model, using inferencing from both the RDFS and family_rb rulebases. (This example is an excerpt from Example 1-21 in Section 1.10.2.)
Example 1-8 SEM_MATCH Table Function
SELECT x, y
  FROM TABLE(SEM_MATCH(
    '(?x :grandParentOf ?y) (?x rdf:type :Male)',
    SEM_Models('family'),
    SEM_Rulebases('RDFS','family_rb'),
    SEM_ALIASES(SEM_ALIAS('','')),
    null));
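If you are calling SEM_MATCH from an application rather than from SQL*Plus, the query in Example 1-8 can be issued like any other SELECT. The following JDBC sketch is illustrative only; the connection URL and credentials are placeholders, and the query text is taken from Example 1-8 as printed above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SemMatchQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust for your environment.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orcl", "user", "password");

        // The SEM_MATCH query from Example 1-8, embedded as a plain SQL string.
        String sql =
            "SELECT x, y FROM TABLE(SEM_MATCH(" +
            "'(?x :grandParentOf ?y) (?x rdf:type :Male)', " +
            "SEM_Models('family'), " +
            "SEM_Rulebases('RDFS','family_rb'), " +
            "SEM_ALIASES(SEM_ALIAS('','')), " +
            "null))";

        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " is a grandfather of " + rs.getString(2));
            }
        } finally {
            conn.close();
        }
    }
}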
Example 1-9 is functionally the same as Example 1-8, but it adds the HINT0 option.
Example 1-10 uses the Pathway/Genome BioPax ontology to get all chemical compound types that belong to both Proteins and Complexes:
Example 1-10 SEM_MATCH Table Function
SELECT t.r
  FROM TABLE (SEM_MATCH (
    '(?r rdfs:subClassOf :Proteins) (?r rdfs:subClassOf :Complexes)',
    SEM_Models ('BioPax'),
    SEM_Rulebases ('rdfs'),
    SEM_Aliases (SEM_ALIAS('', '')),
    NULL)) t;
As shown in Example 1-10, multiple graph patterns can be combined in a single SEM_MATCH query; the OPTIONAL, FILTER, and UNION constructs described below can also be used.
Example 1-11 is functionally the same as Example 1-8, but it uses the syntax with curly braces and a period to express a graph pattern in the SEM_MATCH table function.
Example 1-11 Curly Brace Syntax
SELECT x, y
  FROM TABLE(SEM_MATCH(
    '{?x :grandParentOf ?y . ?x rdf:type :Male}',
    SEM_Models('family'),
    SEM_Rulebases('RDFS','family_rb'),
    SEM_ALIASES(SEM_ALIAS('','')),
    null));
Example 1-12 uses the OPTIONAL construct to modify Example 1-11, so that it also returns, for each grandfather, the names of the games that he plays or null if he does not play any games.
Example 1-13 modifies Example 1-12 so that it returns, for each grandfather, the names of the games both he and his grandchildren play, or null if he and his grandchildren have no such games in common.
A single query can contain multiple OPTIONAL graph patterns, which can be nested or parallel. Example 1-14 modifies Example 1-13 with a nested OPTIONAL graph pattern. In Example 1-14 a value is returned for ?game even if the nested OPTIONAL graph pattern ?y :plays ?game . ?y :age ?age is not matched.
Example 1-15 modifies Example 1-13 with a parallel OPTIONAL graph pattern. This example returns, for each grandfather, (1) the games he plays or null if he plays no games and (2) his e-mail address or null if he has no e-mail address. Note that, unlike nested OPTIONAL graph patterns, parallel OPTIONAL graph patterns are treated independently. That is, if an e-mail address is found, it will be returned regardless of whether or not a game was found; and if a game was found, it will be returned regardless of whether an e-mail address was found.
Example 1-16 uses the FILTER construct to modify Example 1-11, so that it returns grandchildren information for only those grandfathers who are residents of either NY or CA.
Example 1-17 uses the REGEX built-in function to select all grandfathers that have an Oracle e-mail address. Note that backslash (\) characters in the regular expression pattern must be escaped in the query string; for example, \\. produces the pattern \.
Example 1-18 uses the UNION construct to modify Example 1-16, so that grandfathers are returned only if they are residents of NY or CA or own property in NY or CA, or if both conditions are true (they reside in and own property in NY or CA).
Hints, specified using HINT0={<hint-string>} (explained in Section 1.6), should be constructed only on the basis of the portion of the graph pattern outside the OPTIONAL construct. For example, the only valid aliases for use in a hint specification for the query in Example 1-12 are t0, t1, ?x, and ?y.
Hints are not supported for queries involving UNION.
The FILTER construct is not supported for variables bound to long literals.
This section describes some recommended practices for using the SEM_MATCH table function to query semantic data. It includes the following subsections:
Section 1.6.3.1, "FILTER Constructs Involving xsd:dateTime, xsd:date, and xsd:time"
Section 1.6.3.2, "Function-Based Indexes for FILTER Constructs Involving xsd Data Types"
Section 1.6.3.3, "FILTER Constructs Involving Relational Expressions"
Section 1.6.3.4, "Virtual Models and Semantic Network Indexes"

The following index may speed up evaluation of the filter (?x < "1929-11-16Z"^^xsd:date):
CREATE INDEX v_date_idx ON MDSYS.RDF_VALUE$ (
  SEM_APIS.getV$DateTZVal(value_type, vname_prefix, vname_suffix, literal_type, language_type));
You can use the HINT0 hint to ensure that the created index is used during query evaluation, as shown in Example 1-19, which finds all grandfathers who were born before November 16, 1929.
Example 1-19 Using HINT0 to Ensure Use of Function-Based Index
SELECT x, y
  FROM TABLE(SEM_MATCH(
    '{?x :grandParentOf ?y . ?x rdf:type :Male . ?x :birthDate ?bd
      FILTER (?bd < "1929-11-16Z"^^xsd:date) }',
    SEM_Models('family'),
    SEM_Rulebases('RDFS','family_rb'),
    SEM_ALIASES(SEM_ALIAS('','')),
    null,
    null,
    'HINT0={ LEADING(?bd) INDEX(?bd v_date_idx) } FAST_DATE_FILTER=T' ));
For optimal query performance, you may want to create function-based indexes on the following functions, which are described in Chapter 9:

SEM_APIS.GETV$DATETIMETZVAL
If your workload contains many queries against virtual models, you can probably improve query performance if you create a semantic network index (see Section 1.8) with the index_code 'PSCM' and drop the default index with index_code 'PSCF'.
To load semantic data into a model, use one or more of the following options:
Bulk load using a SQL*Loader direct-path load to get data from an N-Triple format into a staging table, and then use a PL/SQL procedure to load or append data into the semantic data store.
Load into tables using SQL INSERT statements that call the SDO_RDF_TRIPLE_S constructor, as explained in Section 1.7.3.
To export semantic data, use the Java API, as described in Section 1.7.4.
You can load semantic data (and optionally associated non-semantic data) in bulk using a staging table. The data must first be parsed to check for syntax correctness and then loaded into the staging table. Then, you can call the SEM_APIS.BULK_LOAD_FROM_STAGING_TABLE procedure (described in Chapter 9).
The following example shows the format for the staging table, including all required columns and the required names for these columns:
CREATE TABLE stage_table (
  RDF$STC_sub  varchar2(4000) not null,
  RDF$STC_pred varchar2(4000) not null,
  RDF$STC_obj  varchar2(4000) not null
);
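As a rough illustration of populating such a staging table from an application (the connection details are placeholders, and the sample triple values are invented for the sketch; in practice the rows would come from a parsed N-Triple source):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class StagingTableInsert {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust for your environment.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orcl", "user", "password");
        conn.setAutoCommit(false);

        String insert =
            "INSERT INTO stage_table (RDF$STC_sub, RDF$STC_pred, RDF$STC_obj) VALUES (?, ?, ?)";

        // Invented sample triple, already parsed into subject/predicate/object strings.
        String[][] triples = {
            { "<http://example.org/John>", "<http://example.org/fatherOf>", "<http://example.org/Mary>" }
        };

        try (PreparedStatement ps = conn.prepareStatement(insert)) {
            for (String[] t : triples) {
                ps.setString(1, t[0]);
                ps.setString(2, t[1]);
                ps.setString(3, t[2]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
        conn.commit();
        conn.close();
    }
}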
Objects longer than 4000 bytes cannot be loaded. If you use the sample SQL*Loader control file, triples (rows) containing such long values will be automatically rejected and stored in a SQL*Loader "bad" file.
However, triples containing object values longer than 4000 bytes can be loaded using the following approach:
Use the SEM_APIS.BULK_LOAD_FROM_STAGING_TABLE procedure to load all rows that can be stored in the staging table.
Load the remaining rows (that is, the rejected rows when using SQL*Loader with the sample control file) from an N-Triple format file, as described in Section 1.7.2.
You can perform a batch load operation.
To load an N-Triple file with a character set different from the default, specify the JVM property -Dcharset=<charsetName>. For example, -Dcharset="UTF-8" will recognize UTF-8 encoding. However, for UTF-8 characters to be stored properly in the N-Triple file, the Oracle database must be configured to use a corresponding universal character set, such as AL32UTF8.
The BatchLoader class supports loading an N-Triple file in compressed format. If the <N-TripleFile> has a file extension of .zip or .jar, the file will be uncompressed and loaded at the same time.
Batch loading is faster than loading semantic data using INSERT statements (described in Section 1.7.3). However, bulk loading (described in Section 1.7.1) is much faster than batch loading for large amounts of data. Batch loading is typically a good option when the following conditions are true:
The data to be loaded is less than a few million triples.
The data contains a significant amount of long literals (longer than 4000 bytes).

To output semantic data to a file in N-Triple format, use the NTripleConverter Java class. The NDM2NTriple(String, int) method exports all the triples stored for the specified model.
For information about using the NTriple converter class, see the README.txt file in the sdordf_converter.zip file, which you can download from the Oracle Technology Network.
A semantic network index is identified by an index_code made up of letters such as P, S, C, M, and O. These letters used in the index_code correspond to the following columns in the SEMM_* and SEMI_* views: P_VALUE_ID, CANON_END_NODE_ID, START_NODE_ID, MODEL_ID, and END_NODE_ID. For example, the following statement creates a PSCM index with the following key: <P_VALUE_ID, START_NODE_ID, CANON_END_NODE_ID, MODEL_ID>.
EXECUTE SEM_APIS.ADD_SEM_INDEX('PSCM');

The following statement makes the PSCM index usable for the FAMILY model:
EXECUTE SEM_APIS.ALTER_SEM_INDEX_ON_MODEL('FAMILY','PSCM','REBUILD');
Also note the following:
Independent of any semantic network indexes that you create, when a semantic network is created, one of the indexes that is automatically created is an index that you can manage by referring to the index_code as 'PSCF' when you call the subprograms mentioned in this section. (The "F" in PSCF stands for "function": a predefined function that returns NULL when end_node_id is equal to canon_end_node_id, and otherwise returns end_node_id. PSCF is very close to PSCO. However, because most triples have the same canon_end_node_id and end_node_id, using NULL for those cases can save resources when the index is built.)
This section contains the following PL/SQL examples:
Section 1.10.1, "Example: Journal Article Information"
Section 1.10.2, "Example: Family Information"
In addition to the examples in this guide, see the "Semantic Technologies Code Samples and Examples" page on the Oracle Technology Network. That page includes several Java examples.
This section presents a simplified PL/SQL example of a model for statements about journal articles. Example 1-20 contains descriptive comments, refers to concepts that are explained in this chapter, and uses functions and procedures documented in Chapter 9.
Example 1-20 Using a Model for Journal Article Information
-- Basic steps:
-- After you have connected as a privileged user and called
-- SEM_APIS.CREATE_SEM_NETWORK to enable constructors:
-- Create the table to hold data for the model.
CREATE TABLE articles_rdf_data (id NUMBER, triple SDO_RDF_TRIPLE_S);
-- Create the model.
EXECUTE SEM_APIS.CREATE_RDF...

Example 1-21 Using a Model for Family Information
-- Basic steps:
-- After you have connected as a privileged user and called
-- SEM_APIS.CREATE_SEM_NETWORK to enable constructors:
-- Create the table to hold data for the model.
CREATE TABLE family_rdf_data (id NUMBER, triple SDO_RDF_TRIPLE_S);
-- Create the model.
execute SEM_APIS.create_rdf...
05-26-2009 03:53 PM - edited 05-26-2009 03:53 PM
I've been learning BlackBerry development and I'm in the middle of writing my first major app. On my app's main menu (or on any screen extended from MainScreen, in fact) when I click on a part of the screen that does not contain a control a menu appears with the options Switch Application, Show Keyboard, Close, and Full Menu.
I do not want to remove these from the regular menu that appears when you press the BB key, I just want to stop that contextmenu from popping up when you click the screen. I've tried setting NO_SYSTEM_MENU_ITEMS in the screen's constructor but that does two things I don't want: the default menu items are removed from the BBKey menu, and the context menu still pops up with only the item "Full Menu".
Is there any way I can just disable the contextmenu from appearing? I've tried to override makeContextMenu but that doesn't seem to have any effect.
Thanks
05-26-2009 04:45 PM
Just override makeMenu.
protected void makeMenu( Menu menu, int instance ) {
    // will show the default menu
    super.makeMenu(menu, instance);
}
Also see KB article:
Regards
Bikas
05-26-2009 04:58 PM - edited 05-26-2009 05:00 PM
Thanks for the help, anyway.
05-26-2009 06:28 PM
I think there might be an easier way to remove this "<Empty Menu>".
I can suggest you an alternative by implementing TrackwheelListener.
import net.rim.device.api.system.TrackwheelListener;
class TestScreen extends MainScreen implements TrackwheelListener {

    TestScreen() {
    }

    protected void makeMenu( Menu menu, int instance ) {
        //super.makeMenu(menu, instance);
    }

    public boolean trackwheelClick( int status, int time ) {
        return true;
    }

    public boolean trackwheelUnclick( int status, int time ) {
        return true;
    }

    public boolean trackwheelRoll(int amount, int status, int time) {
        return true;
    }
}
Regards
Bikas
05-26-2009 08:30 PM
Just want to clarify a few things, because I'm worried we are not understanding your problem properly.
When one says contextmenu, I usually take that to mean the menu that is associated with the context, i.e. the thing in focus. So for example, when an Edit Field is in focus, a contextmenu will include 'Clear Field'.
However in this case, this is not what you seem to mean. The context menu that you are talking about here appears to be the menu that is invoked when you press the trackball. This is not a Field-specific context menu.
I think you can suppress this menu in two ways:
1) Override navigationClick, but I'm not actually sure where you want to do this.
2) Have your makeMenu check the instance (the second parameter). The trackball click invocation has a different value, sorry I can't remember what it is. But the menu key invocation has a 0. So for a non-0 instance, don't call super.makeMenu, and this will suppress the default menu items.
I suspect you want to use navigationClick, but I'm really not sure how to implement this and don't have time to test. But stick one of these in your MainScreen and see when it gets invoked and what gets suppressed if you return true (which means that you have consumed the click).
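A minimal sketch combining both suggestions (the instance check follows the "0 for the menu key" observation above; this is illustrative rather than tested):

import net.rim.device.api.ui.component.Menu;
import net.rim.device.api.ui.container.MainScreen;

public class MyScreen extends MainScreen {

    // Consume the trackball/touch click so the popup menu is not shown.
    protected boolean navigationClick(int status, int time) {
        return true;
    }

    // Only build the default menu when it was invoked from the menu key
    // (instance 0, as noted above); suppress it for other invocations.
    protected void makeMenu(Menu menu, int instance) {
        if (instance == 0) {
            super.makeMenu(menu, instance);
        }
    }
}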
Hope this helps and/or clarifies a little.
05-26-2009 09:25 PM
I apologize for not being more clear; you are correct, the menu I was referring to was the menu that appears when the trackball (or in my case, touchscreen) is clicked, with nothing in focus. I called it a ContextMenu in my post simply because that's the object I figured it was. Sorry for the confusion.
At the moment I have pretty much solved the problem by overriding navigationClick; it turns out I needed to do it anyway for something else unrelated. Your makeMenu solution makes sense, I figured the instance argument would have something to do with how the menu was invoked but I never actually bothered to check the different values for BBKey/screen click.
Thanks to both of you for taking the time to help answer my question, I appreciate it.
10-22-2009 03:03 AM
Hi, peter_strange,
I did what you said, I mean I overrode navigationClick in my MainScreen to disable the FullMenu option, but for me all the field events get disabled. No event is firing on any field on the screen.
this is my code
protected boolean navigationClick(int status,int time){
return true;
}
Note: I didn't test with return false.
Hi Andiamo, how did you disable FullMenu? Can you post your code snippet here?
I tried with trackWheelListener also but had no luck. When I'm using trackWheelListener I didn't find any difference.
10-24-2009 10:43 PM
I was never able to really get the menu to go away via code. This was my first ever BlackBerry app, and so eventually I figured out that the contextMenu does not appear if the user clicks on a part of your screen that contains a field, be it a labelfield, bitmapfield, etc.
Most of my screens are pretty full so this ended up not being a problem for me. I know this isn't really going to answer your question though, sorry. :/
10-27-2009 09:55 AM
Hi Andiamo,
As you said, I tried to fill my screen with fields; for that I chose a bitmapfield. Previously I used the paint method to set the background image, but according to your suggestion I used a bitmapfield to fill my screen with a field.
But no luck!
That FullMenu is still haunting me... anybody there to help me get rid of this?
As XML becomes ubiquitous and mature, the problem of versioning is increasingly significant. While XML itself has a clear versioning scheme through the version attribute in the prolog, XML-based languages such as Atom do not have a standardized versioning mechanism. We propose namespace documents as the logical solution.
It could be claimed that such an approach is not needed by current markup practice in XML. However, as shown by the recent confusion caused by the introduction of the xml:id name, it seems that at least some clarification of the gap between what W3C Recommendations actually define about namespaces and what many people think they define (as well as W3C good practice) is in order [Disposition of Names]. Once we understand namespaces, XMLVS shows that the namespace document is both an effective and practical solution for maintaining XML languages. The XML-based language XMLVS (XML Versioning System) makes the maintaining, documenting, and versioning of XML languages easier by automatically producing best-practice human- and machine-readable namespace documents for XML languages.
In common parlance, an element or attribute name is "in a namespace" so that the creation of a new name, like xml:id, is "adding a name" to a namespace [xml:id Version 1.0]. In these discussions, the terms "language", "namespaces", and even "versions" are separate terms with distinct meanings, although in particular cases they may be functionally the same in a given instance. Languages are in general created to serve as an "application" of XML, from displaying web-site updates to creating technical documents. What gives those names their semantics or "meaning" is their application. While various standards may give definitions to some names in XML such as xml:id, the vast majority of names in languages themselves have no semantics outside a given application. Sometimes names from other namespaces can be imported into other languages, such as the import of many RDF constructs that are used in OWL DL with additional constraints [OWL Guide].
To illustrate the complexities of languages and versioning, XHTML is a language whose application is the display of documents on the Web for human consumption. The semantics of each name is given to some extent in the XHTML standard, and is concretely defined in particular by applications that embody the standard [XHTML]. Using namespaces, one may mix XHTML 1.0 with other languages like MathML. XHTML 1.0 is the first language in the XHTML family of languages, and other related languages such as XHTML-Print are already appearing. The XHTML 1.0 language has a single namespace URI. However, three variants (Strict, Transitional, Frameset) all use the same namespace although they have distinct names, such that not all names valid in XHTML 1.0 Transitional are valid in XHTML 1.0 Strict. One could have easily imagined the case where each of the three variants had its own namespace URI as well. There is a new version of XHTML called XHTML 2.0 that has its own (perhaps temporary) separate namespace URI.
To systematize common parlance, a language is a set of names for things. XML by itself just defines a notation and structure for data in terms of the Infoset, and does not give any semantics above and beyond the very basics [Infoset]. An application gives particular XML documents their application semantics by defining the preferred use of a language. A language may be given a namespace, which provides a syntactic mechanism used to disambiguate the names of things within a document. These names are often elements and attributes in XML, but do not have to be: namespaces can disambiguate names of classes and roles given in the non-XML Semantic Web N3 notation. A namespace is given a unique identifier by its namespace URI. A namespace document "is a place for the language publisher to keep definitive material about a namespace" that can be accessed by dereferencing the namespace URI, and the definitive material can consist of multiple resources [Berners-Lee, 1998]. In other words, a namespace document may just consist of links to other resources, with their own URIs. The sum total of all these resources, ranging from the namespace document to others such as standards and APIs, defines the application semantics. According to common use, a namespace should "not be a URN" (such as the namespace used by Microsoft Office) since you cannot retrieve a namespace document from a URN [Namespace Theses].
A final example should be informative. XSLT is a language whose application is to transform XML documents. Unlike XHTML that gives different versions different namespaces, XSLT has two versions (1.0 and 2.0) that share the same namespace (). This namespace document can be dereferenced to produce a single XML document that says coyly "Someday a schema for XSL Transforms will live here." The non-normative W3C Schema for XSLT 2.0 exists somewhere completely different () and is not linked to the namespace URI. The application semantics for XSLT are given by the W3C Recommendation for XSLT 1.0 [XSLT 1.0] and XSLT 2.0 [XSLT 2.0], respectively. Note that often the application semantics are dependent on different languages. While XSLT 1.0 is rather self-contained in its use of namespaces, the application semantics of XSLT 2.0 allows the import of semantics from other applications, such as XML Schema. The same is true for XQuery, which can be given an XML notation (XQueryX) that also uses XML Schema [XQueryX]. In the case of XQuery, the application semantics are formally defined, while most applications have informally defined semantics.
In conclusion, an application should not be confused with an XML language, since an application may use more than one XML language. A given application defines the semantics of an XML language. An XML language may or may not be given a namespace URI, and its namespace URI may or may not be different depending on different versions or variants. In a hopelessly ideal world, one would try to keep things simple so that a single application uses a single language that has only one version with one namespace, but in the wild world of the Web things are not always that simple.
There is also confusion, especially with people new to XML, as regards how namespaces in XML function. Namespaces exist so that names from multiple languages can be combined, even if they have the same name. In order to do this, every name must be qualified with a unique namespace so names from different languages can be disambiguated. For example, the name "class" is used for different purposes in RDF and HTML. This ambiguity results in "namespace collisions" that can be avoided through the use of namespaces as given by the "Namespaces in XML" specification [Namespaces]. This is done by adding to the front of the original name a namespace prefix followed by a colon, and the originally non-disambiguated name is now called the local name. The combination of the namespace prefix and the local name is called the Qualified Name (QName). Since they are separated by a colon, both the namespace prefix and the local name should be an "NCName" (a "No-Colon Name" is a string not containing any colons). The namespace prefix is associated with a namespace URI in the "namespace declaration" using an attribute with the namespace prefix xmlns. This attribute's local name is the namespace prefix associated with the namespace URI, which is given by the value of the xmlns attribute. For example, the namespace prefix xsl is associated with a namespace URI by an xmlns:xsl attribute whose value is that URI. So, the QName xsl:template has xsl as its namespace prefix, which maps to the namespace URI, with template being the local name. All QNames set out to do is solve the name disambiguation problem.
Rather surprisingly and contrary to popular belief, QNames are not a shorthand notation for URIs. In other words, given a QName one gets two things, a namespace URI and a local name, not a single URI. By saying that xsl:template is in the xsl namespace, what we are saying is that its name is equal to the tuple of the namespace URI and the local name template. This expanded name is created when the namespace prefix is replaced by the namespace URI. There is no default construction rule for creating a single URI out of an expanded name, and furthermore there is not even a standardized mechanism to map the two parts of an expanded name to a single string for processing purposes. For example, a processor might simply concatenate the namespace URI and the local name together, but it could just as well construct some other URI out of the expanded name. The specification is simply silent on this matter. Indeed, this subtle point has been often missed, as in the case where the CURIE specification states that a CURIE is "a Compact URI, and QNames are a subset of this" [CURIE Syntax]. This is a problem, as CURIEs wish to use the same syntactic colon as QNames. Like many, they seem to have mistakenly assumed QNames are just URIs in disguise, while in fact, since QNames and CURIEs are for different things (qualifying or scoping a name with a URI versus abbreviating a URI), it is logically impossible as the specifications now stand for QNames to be a subset of CURIEs. Also contrary to popular belief, there is no "empty" or "blank" namespace, as an element not given a namespace through one of the two methods simply does not have a namespace and so does not have a QName or an expanded name.
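To make the tuple nature of expanded names concrete, the standard javax.xml.namespace.QName class in Java keeps the namespace URI, local name, and prefix as separate parts rather than collapsing them into one URI; the following sketch is purely illustrative.

import javax.xml.namespace.QName;

public class ExpandedNameDemo {
    public static void main(String[] args) {
        // xsl:template, with the XSLT namespace bound to the xsl prefix.
        QName template = new QName("http://www.w3.org/1999/XSL/Transform", "template", "xsl");

        // The expanded name is the (namespace URI, local name) pair;
        // the prefix is only a surface-level abbreviation.
        System.out.println(template.getNamespaceURI()); // http://www.w3.org/1999/XSL/Transform
        System.out.println(template.getLocalPart());    // template
        System.out.println(template.getPrefix());       // xsl

        // Note: there is no standard method that turns this pair into a single URI.
    }
}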
Still, the default behavior of some processors is simply to concatenate the namespace and qualified name. This has led to many using the "hash" convention (particularly in RDF and OWL), which is to append a hash to the end of their namespace declaration, as in declaring the rdf prefix with a hash-terminated namespace URI so that rdf:about resolves to a URI ending in #about. In this manner it is still possible to concatenate the local name and namespace URI together and retrieve the name of a valid URI of a namespace document through the use of fragment identifiers. Many XML applications like XML Schema follow the HTML convention that the value of an attribute serves automatically as a fragment identifier reference of a URI, so that xs:int resolves to the namespace URI followed by #int rather than to the plain concatenation of the two. This led Jonathan Borden to suggest that "When the namespace URI ends in an alphanumeric character treat the local name as a fragment identifier, i.e. insert a '#' between the URI and localname". One might suppose that with Borden's rule, every "name in the namespace" maps to a distinct URI, but this can fail in two ways, since as pointed out by Henry Thompson, "Not all namespaces guarantee uniqueness for their identifiers" as given by the following production: URI(identifier in context of a namespace) = URI(nsid) #? identifier [Thompson, www-tag]. This is exemplified by the case of attributes and elements sharing the same name, which we will consider later. Worse, the namespace prefix could lose its namespace URI, as we demonstrate below.
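As a rough illustration only, Borden's suggested rule could be written as the following helper; it is not part of any W3C Recommendation, and, as discussed next, it still cannot distinguish an element from an attribute with the same name.

public final class BordenRule {

    // Jonathan Borden's suggested QName-to-URI rule: if the namespace URI ends in an
    // alphanumeric character, treat the local name as a fragment identifier.
    public static String toUri(String namespaceUri, String localName) {
        char last = namespaceUri.charAt(namespaceUri.length() - 1);
        if (Character.isLetterOrDigit(last)) {
            return namespaceUri + "#" + localName;
        }
        // Otherwise (for example, the URI already ends in '#' or '/'), concatenate directly.
        return namespaceUri + localName;
    }

    public static void main(String[] args) {
        System.out.println(toUri("http://www.w3.org/1999/02/22-rdf-syntax-ns#", "about"));
        System.out.println(toUri("http://www.w3.org/1999/XSL/Transform", "template"));
    }
}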
As stated by the W3C Technical Architecture Group (TAG), since "there is no single, accepted way to convert a QName into a URI...the use of QNames to identify Web resources without providing a mapping to URIs is inconsistent with Web architecture," and as such QNames should not be used in attribute values in the place of a URI [WebArch]. Stated more directly by the W3C TAG, "Do not allow both QNames and URIs in attribute values or element content where they are indistinguishable" [WebArch]. Indeed, it is precisely in this context that some sort of compact URI notation like CURIE could be of use in order to avoid confusion with QNames [CURIE Syntax]. While this usage of QNames in attribute values may seem to be infrequent, it can be quite frequent in XSL transforms and XML Schemas. The reason to beware of this practice without explicit guidance from a standard is that XML processors only resolve QNames to expanded names when they are found as element and attribute names, and QNames are not resolved when found in element content or attribute values. So, an XML processor will not resolve a prefix such as xsl appearing in an attribute value to its namespace URI, while it would resolve the rdf prefix appearing in element and attribute names such as those of <rdf:Description ...>. If a namespace prefix isn't resolved and there's a document transformation where its declaration is lost, the namespace URI could be lost such that the next XML processor may discover a QName such as xsl and not be able to find its namespace URI.
Even assuming that the namespace declaration is preserved, there is still the chance of attributes and elements sharing the same name. A default namespace can be specified by using the ubiquitous xmlns attribute without any local name, so that its value is the namespace of elements in the document, but that default namespace does not apply to attributes. This includes attributes of any element with the default namespace attribute! This has led to a gradual evolution in XML use, so while specifications like XSLT do not declare their attributes in a namespace, the RDF XML syntax specification mandates explicitly giving all attributes it uses a namespace [RDFXML]. Many find having to give every attribute a namespace explicitly to be counter-intuitive, since it would seem more natural that the namespace of an element by default qualifies its attributes. The following example is instructive in this regard:
<!-- namespace URIs and attribute values here are placeholders -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:ex="http://example.org/ex"
                xmlns="http://example.org/default">
  <xsl:template match="/" ex:template="ex:test">
    <document myattribute="value">
      <ex:about ex:about="value">
        <xsl:apply-templates />
      </ex:about>
    </document>
  </xsl:template>
</xsl:stylesheet>
With this intuition, each attribute would be given, as its default namespace, the namespace of its containing element. Under this mistaken reading, in the preceding example, match would automatically be given the xsl namespace. This is not the case; it would only be given that namespace if it were declared xsl:match, regardless of its use in the XSLT application. Only attributes explicitly given namespace prefixes like the attribute ex:template are given namespaces. A name in an attribute value, like ex:test, does not have a namespace because the prefix is used in an attribute value, not as an attribute name. Likewise, in the example, match simply does not have a namespace. Furthermore, the default namespace applies to the document element name but does not apply to the myattribute attribute name, which also has no namespace and so does not have an expanded name. The reasoning behind this state of affairs, in which an attribute is not by default given the namespace of its element, is that such behavior would prevent the same attribute from being used on multiple elements.
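A namespace-aware DOM parse makes this behavior observable. The small inline document below is illustrative only; it shows that the element picks up the default namespace while its unprefixed attribute reports no namespace at all.

import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Attr;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DefaultNamespaceDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<doc xmlns='http://example.org/default' myattribute='value'/>";

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // without this, no namespace processing happens
        Document document = factory.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

        Element root = document.getDocumentElement();
        Attr attr = root.getAttributeNode("myattribute");

        System.out.println(root.getNamespaceURI()); // http://example.org/default
        System.out.println(attr.getNamespaceURI()); // null -- the default namespace does not apply
    }
}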
So, the insidious problem that truly dooms QName to URI mappings is that attributes and elements may have the same expanded name, and therefore they would have a URI collision despite being two different kinds of things. This happens with the ex:about element and attribute names in our previous example. One suggestion would be to consider that each attribute be naturally qualified by its element, especially if they share the same namespace. Therefore an attribute could have a unique URI minted for it by concatenating it to its element expanded name in a standardized manner, such that its URI would be namespace URI + element name + attribute name; in our example the attribute ex:about could then be thought of as being identified by such a URI. This thinking is in error, since it prevents the use of the same attribute in multiple elements by assuming each attribute belongs to a specific named element. The other option would be to preface attributes by some sort of constant in the creation of the URI, such as namespace URI + "attr" + attribute name. This is a clunky solution at best. Although the argument flies in the face of the standards, one argument for creating URIs out of expanded names is that it allows resources to be associated with particular local names in the namespace in a principled manner. So one could do things like make an RDF statement about only a particular expanded name. This could be very useful in managing different versions of a language. Yet in the final analysis a QName is just not a shorthand for a URI, and treating it as such is problematic at best, so the practice is best avoided unless explicitly licensed by the standard or the namespace document. Is there a way to explicitly license this mapping from QNames to URIs in a namespace document? Unfortunately there is no current best practice for namespace documents to license QName to URI mappings in either a human or machine-readable form.
As exemplified by the coy XSLT namespace document, early thinking about namespaces held schemas, as given by W3C XML Schema or DTDs, to be the one and only namespace document. While schemas are appropriate to link from a namespace document, in general a namespace document should be richer than just a schema, especially as there are multiple schema languages that one can use for often different purposes. Outside this, there is little agreement on what to actually dereference from a namespace URI. First, a human-readable description of the language should be there, as well as something machine-readable. Second, it would seem the obvious place to put schemas, transforms, and other resources associated with the language. The informal standard RDDL 1.0 (Resource Directory Description Language) fulfills this role [RDDL]. RDDL is descended from the XML Catalog Format ([XNCF]) and the XML Namespace Related-resource Language ([XNRL]) proposals. RDDL 1.0 is an XHTML format that could be considered an early use of "microformats," since it combines both human and machine-readable data by embedding machine-readable semantics in XLink. As regards the semantics of XLink, Tim Bray notes that RDDL 1.0 was "arguably abusing them pretty severely" [Bray, 2004]. RDDL 1.0 introduces a single new identifier, rddl:resource, that identifies the related resources by the xlink:role and xlink:arcrole attributes. Other XLink attributes, like xlink:title, are allowed, although xlink:type is restricted to "simple." The rddl:resource is the parent of an XHTML href element linking to the resource. The xlink:role attribute is used to describe the nature of a related resource, which is usually the URI of the standard that defines the type of the related resource, although many things from mailboxes to software have well-known natures as well. W3C XML Schemas for a namespace should be linked from a RDDL 1.0 document with the appropriate xlink:role value. As given by the value of the xlink:arcrole attribute, an optional purpose is "designed to convey the intended usage of the related resource" and a list of purposes is given by [RDDL]. W3C XML Schema, RELAX NG, and Schematron are all used for the "purpose" of validation, as given by the corresponding xlink:arcrole value.
Calls for a more minimal syntax and less arbitrary use of role and arcrole, as well as a standard mapping to RDF, have led to the RDDL 2.0 standard, which is as yet incomplete [RDDL2]. Its main feature is a RDF serialization that features rddl:nature and rddl:purpose as RDF alternatives to XLink. It is still a work in progress and not adopted in practice. Current fashion for Semantic Web namespace URIs is to serve the RDF Schemas, often with no connection to human-readable documentation. Microformats, by taking advantage of XHTML attribute values (such as XFN's use of the rel attribute), interestingly enough seem to leave themselves out of the versioning and namespace document story, although it is conceivable that future versions will attempt to implement some use of versioning and namespaces [XFN]. Despite the need for a standard, no standards body has approved a namespace document policy.
Versioning is important: Different versions of a language may specify different application semantics. In practice, there are two general ways to do versioning in XML languages in a given document. The first is to mimic XML itself and use a version attribute on root or arbitrary elements, and the other is to provide a more rich mechanism with links to specify the previous versions. This rich approach is exemplified by the mechanism provided by OWL ontologies to specify prior versions (using the priorVersion predicate) and can specify backwards compatibility, incompatibility, and deprecated classes and properties as well. Note that in OWL, "if owl:backwardCompatibleWith is not declared, then compatibility should not be assumed" [OWL Guide]. While RDDL provides a prior-version purpose, it does not let one specify versions in detail. For example, the nature URI for XML Schema () does not distinguish if version 1.0 or 1.1 of XML Schema is being used. In fact, neither does the namespace document of XML Schema, as it has as related resources only 1.0 2nd Edition normative references.
The approach of using the value of the version attribute in the root element can become problematic. What about the case in which one wants to use names from two versions of a language that use the same namespace? Should one qualify both elements with differing version attributes? One could specify that every version has its own URI, but this is often not the case, and often minor revisions may want to use the same namespace, and only use namespaces for major revisions [Van der Vlist, 2001]. As a case example that has attracted attention in the Web Services community, applications may want to revert to a previous version of a language if they do not have a relevant schema or other resource to process the newest version of the language, even if the document specifies that the processor should use the latest version, in order to "scrape some information out" [Thompson, 2004].
One would hope you can just put the version number in the URI, perhaps by writing the year of the specification in the namespace URI. This is done by the W3C in the namespace of XHTML. Regardless, the approach of trying to throw all the relevant versioning information in the URI does not solve the problem cleanly. This approach violates the rule of URI Opacity given by the W3C TAG: "Agents making use of URIs SHOULD NOT attempt to infer properties of the referenced resource" [WebArch]. The problem of figuring out more information about a language can be solved more easily by letting the URI dereference a namespace document that provides such information.
Before any suggestions are made as regards the shape of what should be in a namespace document, there should be some observation of what people are doing on the ground. Since there are no (even decentralized) namespace directories, in the following table we survey a number of namespace documents used by well-known languages:
RDDL 1.0 is used in new XML standards (and many Web Service standards). Semantic Web standards routinely deliver RDF Schemas. Some namespaces just serve plain XHTML or absolutely nothing.
The minimalist reading of namespaces states that anyone can mint a new name by just adding a local name in a namespace in a document they produce. The power of defining the "names in a namespace" is in the hands of the user, not the owner of the namespace URI. Against the intuitions of many people unfamiliar with XML, any namespace sets absolutely no constraints on the number and kinds of names in a language. XML parsers do not attempt to retrieve anything at all from a namespace URI. The number of distinct local names that may be attached to a single namespace is infinite. This is the reading sanctioned by the XML Namespaces specification [Namespaces]. As noted by Henry Thompson, "The minimalist reading is the only one consistent with actual usage -- people mint new namespaces by simply using them in an expanded name or namespace declaration, without thereby incurring any obligation to define the boundaries of some set" [Thompson, 2005]. While there has been plenty of vigorous debate about namespaces, the minimalist interpretation is widespread precisely because there is no alternative. This interpretation does not manage the versioning of XML languages or take advantage of the use of namespace documents to check the correctness of the name use in a document.
A maximalist reading of namespaces would state that there is some finite number of names in a language and local names in a namespace with some standard usage. The number of names in a namespace is defined by the owner of the namespace URI as opposed to any user. Furthermore, a true maximalist would prefer that each expanded name in a namespace expand to a unique URI that denotes a secondary resource using some construction rule. Since a URI has a distinct owner, the owner would be the final arbiter of the language, and as such any non-standard usage of the language, such as minting a new expanded name that wasn't given in the namespace document (or that is not a valid secondary resource of the namespace URI), would be wrong. While this is obviously very restrictive, it is more or less how names of core constructs work in programming languages. However, this is too stringent and incompatible with most existing work.
A balance can be struck between the maximalist and minimalist readings, creating a pragmatic reading of namespaces that gives the owner of XML languages a way of expressing more information about their language in the namespace document. This would give the user more options by allowing them to discover exactly how the owner of the language wants the language to be used. The owner should choose if they prefer a maximalist, minimalist, or some moderate reading of their namespace. The user is not compelled to follow the owner's guidelines, but can at least be aware of them if they so wish. So namespace documents should state whether or not the space of possible names is delimited and whether or not every name in the namespace has a unique expanded name that maps to a URI. A namespace document should state the version of a language, and keep track of version changes over time. For a particular name, it should state what version ranges it can be used in, and attach a human readable description to each name. This would give the user some advantages in exchange for letting them restrict themselves. For example, instead of having to worry about documenting their usage of particular names, by sticking to the namespace document given by the owner of the namespace URI, a user could pass around XML documents to other applications and know that if those applications were not sure how to process a given name, they could get a namespace document that would tell them how.
Henry Thompson proposed a language, which he tentatively called LDVL (Language Definition and Versioning Language), for describing names in a language [Thompson, 2005]. While previous work has tried to map all kinds of information into the namespace URI, the work presented in this paper places versioning and more information in both machine- and human-readable namespace documents. Putting the language description in a namespace document as opposed to the URI not only upholds the principle of URI opacity, but allows current namespaces to keep being used without any modification. It also allows the information stated about a namespace to be extended arbitrarily, without resulting in ridiculously long namespace URIs. It allows "approved" names to be both added to and subtracted from a namespace. Finally, we allow elements and attributes to be listed as finite sets without mistakenly forcing one to interpret a QName as a URI, and so breaking the standards.
In LDVL, each name can also be considered to have at least four properties: a language, a version, a kind, and a definition. Since there can be more than one name per kind, it is only the combination of kind and name that makes sense when retrieving version ranges and definitions. So, given a language version, a kind, and a name, one can get a definition. In more detail, each name in a language can contain the following name metadata:
One further option, in line with our pragmatic reading, should be:
These can interact or be redundant in many ways. In some instances the language and its version can be considered the same if there is only one version of the language. Versions and the namespace URI can be considered the same, if for each version of a language a new namespace URI is minted. Often the kind of a name can be difficult to determine without some human-readable documentation or the context of its use in a particular instance document. As put by Henry Thompson, "Therefore, (versions of) languages tell you, for every kind they care about, what names are used for things of that kind, and, for those that have definitions, what their definition is" [Thompson, www-tag]. Every local name in a language can be annotated with these five attributes. One could imagine this being done by a modification of a Post-Schema Validation Infoset. However, it can also be done by simply making this sort of information available in the namespace document in an interoperable manner by using RDF and XHTML.
A language itself would also have related language metadata. Obviously the human readable title would be included, as would the namespace URI and a preferred abbreviation for the namespace prefix for QNames, as well as a set of URIs that defined the application.
LDVL's information can be expressed as a BNF. Note that every item is strictly optional, but items explicitly marked as optional are ones that we expect will actually be optional and not used in common usage, while the others are recommended:
<Language> ::= <Title> {<NamespaceURI>} {<NamespacePrefix>} {<Application>}*
               [<CurrentVersion>] {<PreviousVersion>}* {<Version>}* [<Unique>]
               [<Restricted>] [<Owner>] [<ChangePolicy>] {<RelatedResource>}*
               {<NameMetadata>}* {<Definition>}
<Title> ::= "xsd:string"
<NamespaceURI> ::= "xsd:anyURI" | none
<NamespacePrefix> ::= "xsd:string"
<CurrentVersion> ::= <Version>
<PreviousVersion> ::= <Version>
<Version> ::= { "xsd:decimal" | "xsd:URI" | "xsd:string" }
<Unique> ::= "xsd:boolean" | "xsd:string"
<Restricted> ::= "xsd:boolean"
<Owner> ::= "xsd:string" | "xsd:URI"
<ChangePolicy> ::= "xsd:string" | "xsd:URI"
<RelatedResource> ::= <Nature> {<Purpose>} <Location> {<Title>} {<Normative>}
                      {<CurrentVersion>} {<PreviousVersion>} {<Version>}*
<Normative> ::= "xsd:boolean"
<Nature> ::= "xsd:anyURI"
<Purpose> ::= "xsd:anyURI"
<Location> ::= "xsd:anyURI"
<NameMetadata> ::= <Localname> <Language>+ {<CurrentVersion>} {<PreviousVersion>}
                   {<Version>}* {<Kind>} {<NamespaceURI>} {<NamespacePrefix>}
                   {<Required>} {<Title>} {<ExpandedURI>} {<Definition>}
<Localname> ::= "xsd:NCName"
<Language> ::= {"xsd:anyURI" | "xsd:string"}
<Kind> ::= "xsd:anyURI" | "xsd:string"
<Required> ::= "xsd:boolean"
<ExpandedURI> ::= "xsd:anyURI"
<Definition> ::= "xsd:string" | "xsd:anyURI"
The ideas presented in LDVL can be serialized as an XML language, which we will call XMLVS, the XML Versioning System language. W3C XML Schemas for this XML language are available. A brief example of how we could handle language management for Atom is given in XML below, although this is only a subset of Atom used to show off some of the harder constructs that we can handle. First, note that we replicate in LDVL the functionality of RDDL 1.0, using constructs such as nature and purpose. We also provide versioning for related resources, which is not provided in either RDDL 1.0 or, currently, RDDL 2.0. To return to an earlier thorny point, if the unique attribute is set to true, then the namespace owner guarantees a unique URI can be constructed from each expanded name and can use the expandedUri element to give the exact URI for each name in the namespace.
First, there are two variants of the XMLVS language, the "strict" and the "lax." The division is simple: In the "strict" version, everything that one wants to make statements about (versions, kinds, even owners) must be given a URI, and the language must have a namespace (although the mapping from names to URIs does not have to be unique). This allows the "strict" version to be mapped to RDF and so be easily extensible. Also although every language may be given a URI, and this URI is usually the same as the namespace URI, it does not have to be, so that we can use different URIs for the namespace and the versions of the language. This allows different versions of the same language to use the same namespace. For example, both XSLT 1.0 and XSLT 2.0 can be given URIs and have different names in their version, but both can also state they use the same namespace URI.
The "lax" vocabulary allows aspects of LDVL given by the BNF where there is an option not to use a URI but instead use only a string or decimal for versions, and to also not specify a namespace URI. The "lax" version exists because there are many popular XML vocabularies, such as some versions of RSS and OPML, that do not use namespaces at all. Yet, they are going through version changes (RSS .93 to 2.0, and OPML 2.0 has been drafted), so it would make sense for any versioning system to be able to describe their versioning. However, note that the "strict" version allows any of the items given a URI to also have a more human-readable description as a text string or decimal. Here is a colloquial "lax" XML XMLVS document for part of Atom. Note that it is easy for humans to read and is lax because it does not give versions or kinds URIs, but only denotes them as strings or decimals.
The heart of the language information is given in the children and attributes of the currentVersion element, which fixes the namespace URI, namespace prefix, date of change, and whether the version has a shortcut identifier as either a string ("HTML Transitional") or a number ("1.0"). We allow previous versions to be referenced in the same manner. We finally also provide elements to keep track of rich information about the owner. For the sake of records, we also keep the dates of changes. Lastly, we provide a link to explicitly connect to whatever standard is being implemented through the application element, and also any explicit policy for changing the namespace, as given by the changePolicy element. After the meta-data describing the entire language is dealt with, each name is given a verbal name (title), as well as at least one kind (kind) and at least one version (version). An optional yet crucial definition gives human or machine-readable definitions of the intended use of the name.
<xvs:language xmlns: <xvs:currentVersion xvs: <xvs:languageName>Atom Syndication Format</xvs:languageName> <xvs:namespacePrefix>atom</xvs:namespacePrefix> <xvs:changePolicy>Still draft,not sure yet...</xvs:changePolicy> <xvs:owner xvs: Atom-Enabled Alliance<:previousVersion> </xvs:currentVersion> <xvs:name xvs: <xvs:title>ID</xvs:title> <xvs:kind></xvs:kind> <xvs:version xvs:1.0</xvs:version> <xvs:version>0.3</xvs:version> >0.3</xvs:version> </xvs:name> <xvs:name xvs: <xvs:kind>element</xvs:kind> <xvs:version>1.0</xvs:version> <xvs:previousVersion>modified</xvs:previousVersion> </xvs:name> <xvs:name xvs: <xvs:kind>element</xvs:kind> <xvs:version>0.3</xvs:version> <xvs:newVersion>updated</xvs:newVersion> </xvs:name> <xvs:name xvs: <xvs:kind>attribute</xvs:kind> <xvs:version>0.3</xvs:version> <xvs:version>1.0</xvs:version> </xvs:name> ... <xvs:relatedResource xvs: <xvs:version>1.0</xvs:version> </xvs:relatedResource> <xvs:relatedResource xvs: <xvs:version>1.0</xvs:version> </xvs:relatedResource> <xvs:relatedResource xvs: <xvs:version>1.0</xvs:version> </xvs:relatedResource> </xvs:language>
However, the upgrade path from "lax" to "strict" colloquial XMLVS is easy: just add URIs! However, there is one crucial difference. Since Atom does not use any namespace-qualified attributes, but only elements, we cannot talk about the commonly used attributes in Atom using RDF with the "strict" XMLVS language, since the attributes cannot be given expanded URIs (although we can of course use kind to distinguish element from attribute names). Yet we can posit the xmlvs:uri attribute of names to be fundamentally arbitrary, and then use the expandedURI element to contain the preferred URI to be constructed out of the expanded name. However, most people who follow the pragmatic reading of namespaces would prefer that the URI used to talk about a name in a namespace be the URI produced by the default concatenation of the namespace URI and the local name, and so use this URI as the xmlvs:uri (marking this with a unique attribute set to true). We follow this second convention in the example below (although we show via the atom:title name the use of the expandedUri to implement the first convention). Also, since in the "strict" vocabulary every kind must have a URI and in Atom only elements are given namespaces, every name in the example is given the kind URI xsvt:element. Ideally, one could define kinds by pointing to a URI that defines URIs for every type of information item in the Infoset. The example of this strict colloquial XMLVS language for a fragment of Atom is below:
<xvs:language xmlns: <xvs:currentVersion xvs: <xvs:languageName>Atom Syndication Format</xvs:languageName> <xvs:application xvs: <xvs:namespacePrefix>atom</xvs:namespacePrefix> <xvs:changePolicy xvs: <xvs:owner xvs: <xvs:ownerName>Tim Bray</xvs:ownerName> <xvs:ownerRole>Atompub Co-Chair</xvs:ownerRole> <xvs:ownerEmail>[email protected]</xvs:ownerEmail> <xvs:ownerOrganization>Atom-Enabled Alliance</xvs:ownerOrganization> <:application xvs: </xvs:previousVersion> </xvs:currentVersion> <xvs:name xvs: <xvs:localname>id</xvs:localname> <xvs:namespace xvs: <xvs:namespacePrefix>atom</xvs:namespacePrefix> <xvs:expandedUri xvs: <xvs:kind xvs: <xvs:version xvs: <xvs:version xvs: :localname>title</xvs:localname> <xvs:kind xvs: <xvs:version xvs: <xvs:version xvs: </xvs:name> <xvs:name xvs: <xvs:localname>author</xvs:localname> <xvs:kind xvs: <xvs:version xvs: <xvs:version xvs: </xvs:name> ... <xvs:name xvs: <xvs:localname>info</xvs:localname> <xvs:kind xvs: <xvs:version xvs: </xvs:name> <xvs:name xvs: <xvs:localname>updated</xvs:localname> <xvs:kind xvs: <xvs:version xvs: <xvs:previousVersion xvs:modified</xvs:previousVersion> </xvs:name> <xvs:name xvs: <xvs:localname>modified</xvs:localname> <xvs:kind xvs: <xvs:version xvs: <xvs:newVersion xvs:updated</xvs:newVersion> </xvs:name> <xvs:name xvs: <xvs:localname>href</xvs:localname> <xvs:namespace>none</xvs:namespace> <xvs:kind xvs: <xvs:version xvs: <xvs:version xvs: </xvs:name> ... <xvs:name xvs: <xvs:kind xvs: <xvs:version xvs: <xvs:version xvs: </xvs:name> <xvs:relatedResource xvs: <xvs:version xvs: <xvs:definition>This Web Service validates Atom Feeds</xvs:definition> </xvs:relatedResource> <xvs:relatedResource xvs: <xvs:version xvs: </xvs:relatedResource> <xvs:relatedResource xvs: <xvs:version xvs: <xvs:definition>The IETF RFC 4287 document is the normative definition of Atom.</xvs:definition> </xvs:relatedResource> </xvs:language>
There are a few changes in this strict version. An explicit application pointing to the Atom standard is given. A title attribute no longer contains each name, as the localname element now denotes the name. A more human-readable name can still be given via the title attribute, as shown by the id name in the XMLVS example. The possibility of giving a name a unique URI (as one would get from constructing one from the namespace URI and the local name) is given by the expandedUri element. As shown by the id name in the example, the local name and namespace URI are explicitly separated (as given by the localname and namespaceUri elements). Lastly, notice how the naming conflict between elements and attributes can be easily resolved in XMLVS. Attributes and elements with the same name are distinguished first by being given unique URIs via the uri attribute, and this URI does not have to follow a simple mapping rule that combines their local name and namespace URI. Secondly, as shown by the example href attribute in Atom, attributes are given a different kind than elements. Lastly, as also shown by href, if an attribute does not have a namespace but it is used by a language, the namespace element of that name can be set to none. Names from other namespaces can be imported by including their language and names in the XMLVS file, although this is not shown in this relatively straightforward example.
XMLVS easily handles the upgrading of Atom from 0.3 to 1.0, which includes several substantial modifications, and so XMLVS handles multiple versions within a single namespace document. Names are changed: the name modified is renamed to updated. The info name is deprecated from version 1.0. Yet our XMLVS file allows one to keep track of multiple versions for a name in a single file by associating the version with the name and letting names have multiple versions as given by their version elements. We can also, via the newVersion and previousVersion elements, keep track of when a name changes. However, note that since the expandedUri, namespacePrefix, and namespace elements refer only to the current version of the language if it exists, it makes sense to separate out names from different versions under differing name elements, linking the URIs of the names with a previousVersion element.
As mentioned earlier, the problem of constructing URIs from expanded names is solved by explicitly adding a constructed URI to each name in XMLVS via the expandedUri element. If the unique attribute of the language is set to true and an expandedUri is missing from a name, a default constructed URI is given by the name's uri attribute. This is obviously applicable to every name in the Atom example except href (since it has a namespace set to none), rendering the use of an expandedUri element redundant for this particular example. This example also shows how related resources, everything from an online feed validator to the upcoming normative IETF Atom RFC, can be linked to from the language and even associated with particular versions of the language in XMLVS.
The mapping from the "strict" XMLVS language to RDF is fairly straightforward. An RDF Schema is given here. Our previous example is translated into RDF/XML below:
<?xml version="1.0" encoding="UTF-8"?> <rdf:RDF xmlns: <xvsr:Language rdf: <xvsr:languageName>Atom Syndication Format</xvsr:languageName> <xvsr:languageNamespace rdf: <xvsr:changePolicy rdf: <xvsr:owner> <rdf:Description rdf: <xvsr:ownerName>Tim Bray</xvsr:ownerName> <xvsr:ownerRole>Atompub Co-Chair</xvsr:ownerRole> <xvsr:ownerEmail>[email protected]</xvsr:ownerEmail> <xvsr:ownerOrganization>Atom-Enabled Alliance</xvsr:ownerOrganization> </rdf:Description> </xvsr:owner> <xvsr:previousVersion> <xvsr:Language rdf: <xvsr:languageName>Atom Syndication Format (Draft)</xvsr:languageName> <xvsr:languageNamespace rdf: <xvsr:dateRelease>2003-12-02T09:30:10Z</xvsr:dateRelease> <xvsr:versionId>0.3</xvsr:versionId> <xvsr:application rdf: </xvsr:Language> </xvsr:previousVersion> <xvsr:dateRelease>2005-08-17T12:15:09Z</xvsr:dateRelease> <xvsr:dateChange>2005-07-12T17:32:01Z</xvsr:dateChange> <xvsr:dateChange>2004-03-20T16:31:02Z</xvsr:dateChange> <xvsr:versionId>1.0</xvsr:versionId> <xvsr:application rdf: <xvsr:restricted>true</xvsr:restricted> <xvsr:unique>true</xvsr:unique> </xvsr:Language> <xvsr:Name rdf: <xvsr:title/> <xvsr:version rdf: <xvsr:version rdf: <xvsr:localname>id</xvsr:localname> <xvsr:namespace rdf: <xvsr:namespacePrefix>atom</xvsr:namespacePrefix> <xvsr:expandedUri rdf: <xvsr:kind rdf: <xvsr:definition> Identifies the feed using a universally unique and permanent URI. If you have a long-term, renewable lease on your Internet domain name, then you can feel free to use your website's address.</xvsr:definition> </xvsr:Name> <xvsr:Name rdf: <xvsr:version rdf: <xvsr:version rdf: <xvsr:localname>title</xvsr:localname> <xvsr:kind rdf: </xvsr:Name> <xvsr:Name rdf: <xvsr:version rdf: <xvsr:version rdf: <xvsr:localname>author</xvsr:localname> <xvsr:kind rdf: </xvsr:Name> ... <xvsr:Name rdf: <xvsr:version rdf: <xvsr:localname>info</xvsr:localname> <xvsr:kind rdf: </xvsr:Name> <xvsr:Name rdf: <xvsr:version rdf: <xvsr:localname>updated</xvsr:localname> <xvsr:kind rdf: </xvsr:Name> <xvsr:Name rdf: <xvsr:version rdf: <xvsr:localname>modified</xvsr:localname> <xvsr:kind rdf: </xvsr:Name> <xvsr:Name rdf: <xvsr:version rdf: <xvsr:version rdf: <xvsr:localname>href</xvsr:localname> <xvsr:kind rdf: </xvsr:Name> ... <rdf:Description rdf: <xlink:title>Feed Validator</xlink:title> <rddl:nature rdf: <rddl:purpose rdf: <xvsr:version rdf: <xvsr:definition>This Web Service validates Atom Feeds</xvsr:definition> </rdf:Description> <rdf:Description rdf: <xlink:title>Atom4J Atom Java API</xlink:title> <rddl:nature rdf: <rddl:purpose rdf: <xvsr:version rdf: </rdf:Description> <rdf:Description rdf: <xlink:title>IETF RFC 4287</xlink:title> <rddl:nature rdf: <rddl:purpose rdf: <xvsr:version rdf: <xvsr:definition>The IETF RFC 4287 document is the normative definition of Atom.</xvsr:definition> <xvsr:normative>true</xvsr:normative> </rdf:Description> </rdf:RDF>
An RDF Schema for the RDF XMLVS language is provided at. The RDF version of the XMLVS namespace document comes with a few subtle changes, although it is, by design, extremely similar to the "strict" colloquial XML language. The primary difference is that, since RDF is URI-based, it absolutely needs URIs to talk about names in the language and about different versions of the language. An alternative would have been to use numbers and strings, rather than URIs, as the primary way to denote versions. However, if two XMLVS RDF graphs were merged, the merged graph would assume that statements about version 1.0 of one language could be mapped to statements about version 1.0 of a different language, which is almost always incorrect. So the RDF approach using URIs is only valid if each name in the namespace has a unique URI, such as one constructed from its expanded name. There are numerous advantages conferred by the use of RDF. Among them, RDF allows Semantic Web-enabled processors to use your namespace document to automatically locate related resources like XML Schemas, and so paves the way for automated schema location, validation, and language transformation. Even on the level of just names, it allows machines to achieve partial "understanding" of documents by determining whether a document is using valid language names and by tracing the evolution of names. The RDF Schema of XMLVS also maps XMLVS to other commonly understood vocabularies such as Dublin Core, and so it correctly maps xvsr:dateChange as an rdfs:subPropertyOf of dc:modified and xvsr:version as an rdfs:subPropertyOf of dc:hasVersion.
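For illustration, the Dublin Core mapping just described amounts to a pair of subproperty statements along these lines (the xvsr schema URI is not reproduced in this paper, so a placeholder URI is used, and the DCMI Metadata Terms namespace is assumed for dc:modified and dc:hasVersion):

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <!-- Placeholder URIs: substitute the real xvsr schema URIs here. -->
  <rdf:Property rdf:about="http://example.org/xvsr#dateChange">
    <rdfs:subPropertyOf rdf:resource="http://purl.org/dc/terms/modified"/>
  </rdf:Property>
  <rdf:Property rdf:about="http://example.org/xvsr#version">
    <rdfs:subPropertyOf rdf:resource="http://purl.org/dc/terms/hasVersion"/>
  </rdf:Property>
</rdf:RDF>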
In a nutshell, the "lax" colloquial XMLVS language handles versioning for currently existing XML languages, including those without namespaces, while the RDF version requires namespaces, unique URIs for names, and preferably different URIs for different versions. We imagine users can map existing languages to the "lax" XML format, later upgrade to the "strict" format, and then easily map that to the RDF version. This allows namespaces to slowly be brought more in line with W3C best practices.
XMLVS is work over and beyond the minimal standards, and as such this work is currently unstandardized, but it offers advantages to early adopters. First, the W3C's "Architecture of the Web" states that "An XML format specification SHOULD include information about change policies for XML namespaces" [WebArch], and where better to put information about the change policy, the current valid versions, and even older deprecated versions of a specification than in its namespace document? In particular, the W3C Technical Architecture Group further states that "The owner of an XML namespace name SHOULD make available material intended for people to read and material optimized for software agents in order to meet the needs of those who will use the namespace language," and XMLVS provides both a machine- and human-readable namespace document [WebArch]. So while the minimalist reading of namespaces is standards compliant, a pragmatic interpretation of namespaces is encouraged by the W3C.
Moreover, using consistent namespace documents gives a host of advantages. When an application encounters a language with a known mapping to another language, and it wants information in the latter language, a RDDL link to an XSLT stylesheet can be used to translate automatically. If a new version of a language is given to an application that uses an older one, the application can use the namespace document to discover mappings from the newer language to the older one, allowing the document to be processed gracefully (a use scenario that has received considerable attention from the Web Services community [Thompson, 2004]). One could also use the namespace document to retrieve schemas for languages so that an instance document can be validated automatically. It could even allow the automatic upgrading of legacy XML languages to their newest version, and checks of "namespace validity" to make sure one isn't minting new names in a namespace when, for that particular language, that sort of practice is discouraged. It would also prevent controversies, since it would allow a namespace owner to state, via a simple boolean variable in the XMLVS format, whether or not the names in a namespace are restricted, since it is stated as good practice that "Specifications that define namespaces SHOULD explicitly state their policy with respect to changes in the names defined in that namespace" [Disposition of Names]. Furthermore, the exact additions and deprecations of "names from a namespace" can therefore be recorded in detail.
In order to make the management of XML languages as easy as possible, we have created a number of modular programs that allow one to manage XMLVS files. The entire package can be found at, although it is still experimental.
We first allow people to author XMLVS documents in colloquial XML or even RDF, by hand or with the automated interface of their choice. Given any XMLVS colloquial XML file, we provide an XMLVS to RDDL 1.0 XSLT transformation. Given a "strict" colloquial XML XMLVS file, we provide an XSLT transformation into a valid XMLVS RDF file as given by the XMLVS RDF Schema. This allows both human- and machine-readable namespace documents to be created using best practice standards.
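Either transformation can be run with any standard XSLT 1.0 processor; below is a minimal sketch using the JAXP API (the stylesheet and file names are hypothetical, since the actual names in the XMLVS package are not given here):

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XmlvsToRdf {
    public static void main(String[] args) throws Exception {
        // Hypothetical file names: a "strict" XMLVS file for Atom and the
        // XMLVS-to-RDF stylesheet shipped with the XMLVS package.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("xmlvs-to-rdf.xsl"));
        t.transform(new StreamSource("atom-xmlvs.xml"),
                    new StreamResult("atom-xmlvs.rdf"));
    }
}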
Finally, we're working on using a versioning control system for XMLVS documents themselves, based on Simon Yuill's Social Versioning System[SVS]. This is provided for two reasons. First, the Social Versioning System, by allowing arbitrary (Python) code to be executed whenever a new version is checked in, allows a user to "check in" a new XMLVS document and then have SVS automatically invoke XSLT to create RDF and RDDL versions of the document. It can also upload those files (if the user supplies SVS the correct parameters such as username and password) to the server where the namespace document(s) is hosted. This simple functionality is provided, and users competent in Python can add other arbitrary functionality to SVS by passing SVS Python objects. Second, by providing version control, it allows all changes to a namespace to be tracked by date and version number, even if they are destroyed in the most current particular XMLVS file for a language. So if in your colloquial XMLVS XML document you decide to truncate all older versions, those older versions are maintained in the SVS archives of the XMLVS file. SVS also allows us to check in files other than colloquial XML XMLVS or RDF files, and can include (even as binaries) any related resource like JAR files and XSLT transformations, that are in turn given by the RDDL. These in turn can also be automatically uploaded to a server. In summary, SVS provides a CVS-like capability to check-in, check-out, and branch XML languages as embodied by both XMLVS files and whatever related resources exist, and also allows customizable execution of code, like XSLT transforms to RDDL and RDF, whenever a major change is checked in.
By using XMLVS, the namespace itself would be maintained as a collection of XHTML and RDF files, knitted together and kept under version control by a versioning control system like SVS. Although there may be other ways to maintain and version XML namespaces, this method allows users to scrap their boilerplate coding and get to the hard work of creating XML applications without having to worry about maintaining their namespaces any more than necessary for good practice. Furthermore, by providing easy-to-use vocabularies and tools to automate the creation of namespace documents, namespace documents become much more likely to be widespread and useful.
There are a few "good practice" lessons that people managing XML languages might want to take home. Although these are common sense among many in the markup community, the amount of variance in the Web as regards the use of namespace documents is still very large.
There are also a few questions for various standards bodies.
The list of good practices and questions is doubtless ambitious, controversial, and also conflicting. For example, if either the behavior of namespace defaulting was changed or attributes inherited the namespace of their element, one would not have to explicitly use namespaces with attributes. Regardless of the particulars of the LDVL proposal and the XMLVS application, there are advantages to consistently using namespace documents and namespace URIs that the Web markup community should investigate. As it stands, only namespace documents can save namespaces and solve the versioning problem.
[Berners-Lee, 1998] Berners-Lee, Tim. Web Architecture from 50,000 feet.
[Bray, 2004] Bray, Tim. RDDL2 Background.
[CURIE Syntax] Birbeck, Mark. CURIE Syntax 1.0
[Disposition of Names] Walsh, Norm. The Disposition of Names in an XML Namespace. TAG Finding 9 January 2006.
[Infoset] World Wide Web Consortium (W3C). XML Information Set (Second Edition). 2004. Editors J. Cowan and R. Tobin.
[Namespace Theses] Bray, Tim. Architectural Theses on Namespaces and Namespace Documents
[Namespaces] World Wide Web Consortium (W3C). Namespaces in XML 1.1. 1999. Editors T. Bray, D. Hollander, A. Layman, and R. Tobin.
[OWL Guide] World Wide Web Consortium (W3C). OWL Web Ontology Language Guide. 2004. Editors M. Smith, C. Welty, and D. McGuinness.
[RDDL] Borden, J. and Bray, T. Resource Directory Description Language (RDDL).
[RDDL2] Borden, J. and Bray, T. Resource Directory Description Language (RDDL) Version 2.0.
[RDFXML] World Wide Web Consortium (W3C). RDF/XML Syntax Specification (Revised). 2004. Editor D. Beckett.
[SVS] Yuill, Simon. The Social Versioning System. savannah.nongnu.org/projects/socversys/
[Thompson, 2004] Thompson, Henry. Versioning Made Easy with W3C XML Schema and Pipelines. XML Europe 2004, Amsterdam: The Netherlands.
[Thompson, 2005] Thompson, Henry. Names, Namespaces, XML Languages and XML Definition Languages. XML 2005, Atlanta, USA.
[Thompson, www-tag] Thompson, Henry. What is a namespace, anyways?
[Van der Vlist, 2001] van der Vlist, Eric. Best practices: Namespaces, versions and RDDL.
[WebArch] World Wide Web Consortium (W3C). Architecture of the World Wide Web, Volume One. 2004. Editors I. Jacobs and N. Walsh.
[XFN] Celik, T., Meyer, E. and Mullenweg, M. XHTML Friends Network (XFN).
[XHTML] World Wide Web Consortium (W3C). XHTML 1.0 The Extensible HyperText Markup Language (Second Edition). 2002.
[XNCF] Borden, Jonathan. XML Namespace Catalog Format (XNCF).
[XNRL] Bray, Tim. XML Namespace Related-resource Language (XNRL).
[XQueryX] World Wide Web Consortium (W3C). XML Syntax for XQuery 1.0 (XQueryX). 2005. Editors J. Melton and S. Muralidhar.
[XSLT 1.0] World Wide Web Consortium (W3C). XSL Transformations (XSLT) 1.0. 1999. Editor J. Clark.
[XSLT 2.0] World Wide Web Consortium (W3C). XSL Transformations (XSLT) 2.0. 2005. Editor M. Kay.
[xml:id Version 1.0] World Wide Web Consortium (W3C). xml:id Version 1.0. 2005. Editors J. Marsh, D. Veillard, and N. Walsh.
http://www.ibiblio.org/hhalpin/homepage/notes/xvspaper.html
The Samba-Bugzilla – Bug 3262
Samba from SVN does not store DOS attributes to EAs.
Last modified: 2005-11-21 23:04:18 UTC
Samba from SVN cannot store DOS attributes into extattr under FreeBSD.
This appeared after applying the patch from bugzilla report #3218.
3.0.20b without the patch works OK; with the patch it behaves as described below.
I made sure that "store dos attributes = yes" is set, using testparm.
...
comment = Test Share
path = /shared
read only = No
store dos attributes = Yes
...
There is a file in the share:
-rwxr--r-- 1 root wheel 0 15 ноя 14:07 1.txt
root@testbsd# getextattr user DOSATTRIB 1.txt
1.txt 0x22
(File is Archive & Hidden)
From WinXP computer:
C:\Program Files\Far>attrib V:\4\1.txt
A V:\4\1.txt
i.e., the file attributes were derived from the permission bits (not EAs).
In level 10 log:
[2005/11/15 14:12:12, 1] smbd/dosmode.c:get_ea_dos_attribute(200)
get_ea_dos_attributes: Cannot get attribute from EA on file .: Error = Result too large
Created attachment 1571 [details]
Level 10 Log of executing attrib V:\4\1.txt
It's strange but smbclient shows correct attributes:
root@testbsd# smbclient //localhost/share -U rc20
Domain=[TESTD] OS=[Unix] Server=[Samba 3.0.21pre3-SVN-build-11729]
smb: \> ls 4/1.txt
1.txt AH 0 Tue Nov 15 14:07:28 2005
33874 blocks of size 262144. 10513 blocks available
Ok - this is the code that got added in that bug report (#3218).
/*
+ * The BSD implementation has a nasty habit of silently truncating
+ * the returned value to the size of the buffer, so we have to check
+ * that the buffer is large enough to fit the returned value.
+ */
+ retval = extattr_get_file(path, attrnamespace, attrname, NULL, 0);
+ if(retval > size) {
+ errno = ERANGE;
+ return -1;
+ }
+
We are calling :
sizeret = SMB_VFS_GETXATTR(conn, path, SAMBA_XATTR_DOS_ATTRIB, attrstr, sizeof(attrstr));
Where attrstr is defined as an fstring (256 bytes). What I need from you is additional debug statements in your lib/system.c code, in the FreeBSD-specific part, that print out what retval and size are in the above call. Then we need to figure out why 256 bytes isn't enough for FreeBSD to store the string "0x22" in an EA.
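A minimal sketch of the kind of temporary instrumentation meant here, assuming the usual Samba DEBUG macro is available in lib/system.c:

/* Temporary debug (sketch): log the probed EA length and the caller's
 * buffer size just before the comparison that sets ERANGE. */
retval = extattr_get_file(path, attrnamespace, attrname, NULL, 0);
DEBUG(10, ("extattr_get_file probe: retval = %ld, size = %lu\n",
           (long)retval, (unsigned long)size));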
Jeremy.
(In reply to comment #3)
> Ok - this is the code that got added in that bug report (#3218).
> + retval = extattr_get_file(path, attrnamespace, attrname, NULL, 0);
>
> + if(retval > size) {
> + errno = ERANGE;
> + return -1;
> + }
> +
Looking at this code again, I see that it is missing a check for retval < 0, which in this case can make (retval > size) true. It's my fault; I was concentrating on the setxattr() code and missed the retval check for this part of the function....
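For illustration, a minimal sketch of the corrected guard, reusing the variable names from the patch fragment quoted above:

/* Sketch: fail early if the size probe itself errors out, before comparing
 * the probed length against the caller's buffer size. */
retval = extattr_get_file(path, attrnamespace, attrname, NULL, 0);
if (retval < 0) {
        /* errno has already been set by extattr_get_file() */
        return -1;
}
if (retval > size) {
        errno = ERANGE;
        return -1;
}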
With regards,
Timur.
Created attachment 1574 [details]
Level 10 log after applying patch
(In reply to comment #4)
>...
I've added this code.
Now attrib shows correct attributes:
C:\>attrib V:\4\1.txt
A S V:\4\1.txt
alex@testbsd$ getextattr user DOSATTRIB /shared/4/1.txt
/shared/4/1.txt 0x24
-rwxr--r-- 1 root wheel 0 15 ноя 14:07 /shared/4/1.txt
Level 10 log attached.
This problem does not appear in SVN build 11739.
Thanks!
(In reply to comment #6)
> This problem does not appear in SVN build 11739.
> Thanks!
>
Hi Alex!
Can you try the attached patch? It's supposed to do the same stuff, just a bit more sanely.
Jeremy, can you apply it later on if Alex confirms it works OK for him?
Created attachment 1586 [details]
Additional sanity checks for FreeBSD EA emulation
Ok, I'll apply it once it gets the ok.
Jeremy.
Created attachment 1587 [details]
Level 10 log after last patch
Hello Timur, Jeremy!
After the last patch from Timur, storing DOS attributes to EAs works OK.
I've attached a level 10 log to illustrate this.
Thanks again!
Applied thanks.
Jeremy.
https://bugzilla.samba.org/show_bug.cgi?id=3262
How to Display Activity Progress in Windows Phone Applications
Introduction
Certain types of operations in mobile applications take time. For a good customer experience, it is suggested that the user be informed that the activity is in progress. Providing such an indication helps prevent users from guessing whether their action was registered with the application or not.
Displaying activity progress also allows a user to visualize how long it will take for the task to complete.
The ProgressIndicator class resides in the Microsoft.Phone.Shell namespace in the Microsoft.Phone.dll assembly.
The control can be declared in XAML as shown below:
<ProgressIndicator .../>
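The indicator can also be attached to a page's system tray directly in XAML; a minimal sketch, assuming the shell namespace prefix from the default page template maps to Microsoft.Phone.Shell:

<shell:SystemTray.ProgressIndicator>
    <shell:ProgressIndicator IsVisible="True" IsIndeterminate="True" Text="Loading..." />
</shell:SystemTray.ProgressIndicator>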
Note that the ProgressIndicator is visible in the status bar, but unlike desktop applications, the status bar is at the very top of the phone screen (next to where one would find the battery icon and the phone signal icon).
ProgressIndicator comes in two variants: determinate and indeterminate. If the IsIndeterminate property value is set to true, the progress indicator displays only a row of animated dots to indicate that the activity is in progress.
To display the progress indicator on a XAML page, we need to pass the instance of the ProgressIndicator class to the SystemTray’s SetProgressIndicator API.
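For comparison, a determinate indicator reports how far the task has progressed through the Value property; a short sketch (the values shown are illustrative):

// Sketch of a determinate indicator: Value ranges from 0.0 to 1.0.
ProgressIndicator progress = new ProgressIndicator();
progress.IsVisible = true;
progress.IsIndeterminate = false;   // show a filled bar instead of the animated dots
progress.Value = 0.25;              // the task is 25% complete
progress.Text = "Downloading... 25%";
SystemTray.SetProgressIndicator(this, progress);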
Hands-On
Let us build a simple Windows Phone application that demonstrates how to display activity progress using ProgressIndicator. We’ll call this project WPProgressIndicatorDemo.
New Project
We’ll add a TextBlock on the MainPage as well as a button. In our demo application, we will start a long-running activity on the click of the button and indicate completion of the activity using the TextBlock. While the activity is in progress, we will use the ProgressIndicator to indicate progress.
After adding the two controls, our XAML is shown below.
<phone:PhoneApplicationPage x:Class="WPProgressIndicatorDemo.MainPage" ...>
    <!-- Reconstructed sketch of the relevant markup; the container names follow the default page template. -->
    <Grid x:Name="LayoutRoot">
        <Grid x:Name="ContentPanel">
            <Button x:Name="buttonStart" Content="Start" Click="buttonStart_Click" />
            <TextBlock x:Name="textBlockStatus" Text="Task not started" />
        </Grid>
    </Grid>
</phone:PhoneApplicationPage>
Now, we will add a click event handler for the button. In this method, we will create an instance of the ProgressIndicator class and specify it as indeterminate and set the progress indicator of the system tray to the instance.
private void buttonStart_Click(object sender, RoutedEventArgs e)
{
    ProgressIndicator progressIndicator = new ProgressIndicator();
    progressIndicator.IsVisible = true;
    progressIndicator.IsIndeterminate = true;
    progressIndicator.Text = "Downloading...";
    SystemTray.SetProgressIndicator(this, progressIndicator);

    Deployment.Current.Dispatcher.BeginInvoke(() =>
    {
        textBlockStatus.Text = "Async runner started. Please wait...";
    });

    AsyncRunner(DateTime.Now);
}
Note that in the above method, we used a delegate to update the UI thread, and also called an asynchronous runner method. This async runner method will create a new background worker object, which will do the work.
public void AsyncRunner(DateTime dumpDate)
{
    BackgroundWorker worker = new BackgroundWorker();
    worker.RunWorkerCompleted += new RunWorkerCompletedEventHandler(worker_RunWorkerCompleted);
    worker.DoWork += new DoWorkEventHandler(worker_DoWork);
    worker.RunWorkerAsync(dumpDate);
}
The above method introduces two new methods we need to implement: one that will be called when the worker completes the activity (we will use this to signal MainPage.xaml to update the TextBlock to say "Done"), and a second that will actually do the hard work (in our case, looping over all the possible values from 0 to int.MaxValue).
private void worker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    BackgroundWorker worker = sender as BackgroundWorker;
    worker.RunWorkerCompleted -= new RunWorkerCompletedEventHandler(worker_RunWorkerCompleted);
    worker.DoWork -= new DoWorkEventHandler(worker_DoWork);

    Deployment.Current.Dispatcher.BeginInvoke(() =>
    {
        textBlockStatus.Text = "Done.";
    });

    SystemTray.ProgressIndicator.IsVisible = false;
}
Note that when the worker has completed the work, we set the progress indicator to become invisible.
private void worker_DoWork(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = new BackgroundWorker();
    //Thread.Sleep(1000);
    for (int i = 0; i < int.MaxValue; i++) ;
    //Thread.Sleep(1000);
}
The above code snippet is representative of a time-consuming task (for simplicity's sake, we are just looping here).
Our demo is complete. If you run the application now, you will see the screens shown in the following screenshots.
Task not started
Task in progress
Task complete
You can see how simple it is to display activity progress using the ProgressIndicator class.
You can download the sample code used in this walkthrough below.
Summary
In this article, we learned how to display activity progress using the ProgressIndicator class.
http://www.codeguru.com/win_mobile/phone_apps/how-to-display-activity-progress-in-windows-phone-applications.htm
FYI, this is done.
I just reimported the documentation draft into the GDOCv1 space.
I also renamed MAIN to GERONIMO, and DEV to GDEV, to support a more
flexible namespace to allow sub-projects a space if they want.
I will see about getting email notifications enabled for this instance.
--jason
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200512.mbox/%[email protected]%3E
Introduction
Technologies that are relevant to Web 2.0, such as Asynchronous JavaScript and XML (Ajax), Web remoting, Web messaging, and others, have become increasingly prevalent in today’s Web applications. Compared with traditional Web applications, Ajax-based applications make it possible to provide much more responsiveness and interactivity. In those Web applications incorporating Ajax architecture, users don’t need to wait for the entire Web page to be reloaded before seeing new results from the server, and they can often complete tasks with fewer steps on a single Web page that is presented in an incremental or on-demand fashion.
To address the growing need for rapid development and delivery of Ajax-enabled solutions, the IBM WebSphere Application Server Feature Pack for Web 2.0 provides a rich set of components that enables you to build Ajax-based applications easily and efficiently. It also provides an open standards-based framework for integrating existing services or solution assets into rich Internet applications.
The major components of the feature pack include:
- Ajax client/proxy runtime
- RPC (Remote Procedure Call) adapter
- Web messaging service
- JSON4J (JavaScript Object Notation for Java) library
- IBM SOAP library
- IBM Atom library
- IBM OpenSearch library
- IBM Gauge widget.
This article describes the steps for building an Ajax-based chart application using the Web 2.0 feature pack. By following this example, you will be able to see how the components included in the feature pack can be used together to construct a complete Web 2.0 solution with a rich user experience.
Prerequisites
This exercise assumes a fundamental knowledge of Web application development and familiarity with Eclipse or IBM Rational® Application Developer. To follow these steps, you will need to have the WebSphere Application Server Feature Pack for Web 2.0 installed successfully in a properly operating WebSphere Application Server (V6.0, 6.1 or 7.0) environment.
About the sample dynamic application
The sample application included with this article is intended to demonstrate possible ways you can use the major components of the Web 2.0 feature pack to build an Ajax-based application while still being able to address changing business requirements. This sample application uses dynamic charts to report the sales quantities of automobile brands within a given time period (in a bar chart), and also lets users select a specific brand to view the sales distribution by area (in a pie chart). Further, when the back-end data changes, the updated data is automatically presented to users in these charts.
The sample application, DynamicCharts, has these features:
- Provides a chart view for car sales of multiple automobile brands.
- Provides a chart view for drilling down to sales distribution by region for a specific brand.
- Updates charts displayed in a Web browser automatically at a configurable interval (initially 15 seconds).
- Provides a flexible layout that enables users to adjust the size of both the master view and the detail view.
The DynamicCharts application is built with these feature pack facilities:
- Ajax client runtime
- RPC adapter
- Web messaging service
- JSON4J.
Figure 1 illustrates the major functions of the DynamicCharts application, and Figure 2 depicts the application's overall structure and flow.
Figure 1. Functions of DynamicCharts example
Figure 2. Overall structure and flow of DynamicCharts example
In Figure 2, you can see that the client logic implemented by the Ajax client runtime:
- Retrieves the data from the facade interface of the chart data service -- which could be an existing service or solution asset -- and is exposed by the RPC adapter.
- Creates or updates the charts.
- Handles the events triggered by users and by the Web messaging service.
The invocations from client to RPC adapter are all based on JSON RPC, and the returned data is encapsulated in JSON objects, which can be parsed on the client side very quickly. The chart data updater simulates other applications or services that will update the chart data under certain circumstances, publishing the data change event to clients through the WebSphere Application Server Service Integration Bus (SIBus) and the Web messaging service. Both the chart data service and the chart data updater depend on the chart data accessor to load data from or store data to a data store. (For the purpose of this article, the chart data accessor in the sample application does not connect to an actual data store.)
The next sections will walk you through the process to create the DynamicCharts application. The major steps involved will be to:
- Import essential components
- Create the chart application using the Dojo Toolkit
- Expose ChartService to JavaScript clients via the RPC adapter
- Publish data updates to the Web browser with Web messaging
The entire sample application, including WAR and source files, is included with this article for you to download and deploy.
Import essential components
- Create a Dynamic Web Project and EAR named "DynamicCharts" in your development environment (meaning Eclipse or Rational Application Developer). Figure 3 shows the project directory structure.
Figure 3. DynamicCharts project directory structure
- If you are using Eclipse, import the Ajax client runtime and utility JARs to the WebContent folder.
- Implement a set of Java™ classes in the dataservice package.
- Create the dynamic-chars.html page.
If you’re using Rational Application Developer V7.5 or later, then you don’t have to explicitly import the Ajax client runtime and Web 2.0 utility JARs, because they will be available in your project when you enable Web 2.0 in Project Facets.
If you’re using Eclipse to create this sample application, a few more steps will be needed:
- Copy the ajax-rt_1.X folder from the Web 2.0 feature pack installation root directory (usually <app_server_root>/web2fep) to the WebContent folder of your newly created project (Figure 4).
Figure 4. Ajax client runtime
- Locate these five JAR files under the optionalLibraries folder in the Web 2.0 feature pack installation root directory and copy them to the WebContent/WEB-INF/lib folder of your project (Figure 5):
- commons-logging-1.0.4.jar
- RPCAdapter.jar
- RPCAdapter-annotation.jar
- retroweaver-rt-2.0.jar
- webmsg_applib.jar
Figure 5. Web 2.0 utility JARs
When the required libraries have been imported into your project, you can begin with your application code.
Create the chart application using the Dojo Toolkit
The Dojo Toolkit is a powerful, flexible, modular, open source Ajax software development kit that enables you to easily build dynamic capability into Web pages. The Web 2.0 feature pack incorporates Dojo Toolkit 1.1 and IBM extensions, such as the Ajax client runtime. In this section, you will see how easy it is to use the Dojo Toolkit to implement a chart page that has user interactions and back-end services.
- Create an HTML page and load the base Dojo script (ajax-rt_1.X/dojo/dojo.js) with the <script/> tag in the head section. This provides the core functions of Dojo as well as access to all other Dojo facilities.
- Import the Dojo styles within the <style/> tag and declare the Dojo packages that will be referred to on this Web page with dojo.require(…), as shown in Listing 1.
Listing 1. Declaring Dojo components and themes
<style type="text/css">
    @import "ajax-rt_1.X/dojo/resource/dojo.css";
    @import "ajax-rt_1.X/dijit/themes/tundra/tundra.css";
    @import "dynamic-charts.css";
</style>
<script type="text/javascript" src="ajax-rt_1.X/dojo/dojo.js"
    djConfig="isDebug: false, parseOnLoad: true, usePlainJson: true"></script>
<script type="text/javascript">
    dojo.require("dojox.charting.Chart2D");
    dojo.require("dijit.layout.BorderContainer");
    dojo.require("dijit.layout.ContentPane");
    dojo.require("dojo.rpc.JsonService");
    dojo.require("dojox.cometd");
    dojo.require("dojox.charting.themes.PlotKit.orange");
</script>
- After that, define the page layout in the body section of the HTML file (Listing 2):
- The Dojo layout widget dijit.layout.BorderContainer is used here for the page layout, split into two dijit.layout.ContentPanes, one for the car sale chart and the other for the distribution chart.
- The splitter attribute of the second dijit.layout.ContentPane is set to true, which enables the user to change and adjust the widths of those two regions.
- In each region, one or two <div> tags are embedded as the nodes where you will create a chart or radio button group.
Be aware that the original dijit.layout.SplitContainer has been deprecated in Dojo 1.1 and that dijit.layout.BorderContainer is introduced as a replacement.
Listing 2. Defining the chart page layout
<div dojoType="dijit.layout.BorderContainer" design="sidebar" id="main">
    <div dojoType="dijit.layout.ContentPane" region="center" id="sale_pane">
        <p>Car Sale</p>
        <div id="car_sale_chart"></div>
    </div>
    <div dojoType="dijit.layout.ContentPane" region="trailing" splitter="true" id="distribution_pane">
        <p>Choose a brand to view sale distribution by area</p>
        <div id="brand_picker"></div>
        <div id="distribution_by_area"></div>
    </div>
</div>
- You are now ready to work out the JavaScript code. The JavaScript functions to be implemented are summarized in this table:
Next, you need to load and parse the car sales data from the back-end service. The JavaScript fragment in Listing 3 illustrates how to invoke the chart data service through the JSON RPC call and RPC adapter. You just need to create a Dojo JSON service instance with the given URL exposed by RPC adapter.
The first parameter:
/DynamicCharts/RPCAdapter/jsonrpc/ChartDataService/getCarSale
means to call the getCarSale() method of the ChartDataService class (this will be explained more later). The callback function showCarSale(...) is then registered with the JSON service instance; there the returned JSON-formatted data is parsed, and the brand and quantity data are put into arrays respectively.
There is another option to perform a JSON RPC call -- using the Dojo XMLHTTPRequest wrapper (dojo.xhr) -- but that requires more coding and is not as straightforward as what is shown in Listing 3.
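For reference, a rough sketch of that alternative for the same data, using the HTTP RPC URL shown later for testing (the exact response envelope returned by the RPC adapter may differ, so treat the callback body as an assumption):

// Sketch: fetching the car sale data through the Dojo XHR wrapper instead
// of dojo.rpc.JsonService.
dojo.xhrGet({
    url: "/DynamicCharts/RPCAdapter/httprpc/ChartDataService/getCarSale",
    handleAs: "json",
    load: function(response) {
        // Depending on the RPC adapter envelope, the sale array may be the
        // response itself or a property of it; unwrap it before use.
        showCarSale(response);
    },
    error: function(err) {
        // Handle the failure (for example, report it to the user).
    }
});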
Listing 3. Loading and parsing of the car sale data
// Retrieve the sale data via JSON RPC invocation and make // the charts. function getSaleData() { // Call the json service to retrieve the car sale data var json_svc = new dojo.rpc.JsonService( "/DynamicCharts/RPCAdapter/jsonrpc/ChartDataService/getCarSale"); json_svc.getCarSale().addCallback(showCarSale); } // Parse the sale data and create the sale chart and the brand picker // function showChart(data, ioArgs) { function showCarSale(sale_data) { // Parse the data returned from the server var sale_brands = new Array(); var sale_quantities = new Array(); for (var i=0;i<sale_data.length;i++) { sale_brands[i] = {value:(i+1), text:sale_data[i].brand}; sale_quantities[i] = sale_data[i].quantity; } // Make the auto sale chart makeSaleChart(sale_brands, sale_quantities); // Make the brand picker makeBrandPicker(sale_brands); }
- Once the chart data is loaded and parsed, the makeSaleChart function will be invoked to create the sales chart (Listing 4). The Dojo Chart2D object usually consists of four parts: theme, plot, axis and series (data). The addPlot call determines what types of charts you’re going to produce. There are a variety of plot types available. The most commonly used are lines, bars, columns, areas, grid, markers, pie, stacked, and so on. Here, "Columns" is used as the vertical bar chart for the car sales. Both the addPlot and addAxis functions take two parameters: a name and an argument array. The addSeries function accepts an additional parameter: an array of data series.
Listing 4. Creating the car sales chart
// Create the sale chart function makeSaleChart(brands, quantities) { if (sale_chart == undefined) { // Make the sale chart for the first request of a given user sale_chart = new dojox.charting.Chart2D("car_sale_chart"); sale_chart.setTheme(dojox.charting.themes.PlotKit.orange); sale_chart.addPlot("default", {type:"Columns", gap:2}); sale_chart.addAxis("x", {labels:brands}); sale_chart.addAxis("y", {vertical:true, includeZero:true, max:50}); sale_chart.addSeries("Auto Sale", quantities, {stroke:{color:"black"}, fill:"lightblue"}); sale_chart.render(); } else { // Update the sale chart sale_chart.updateSeries("Auto Sale", quantities); sale_chart.render(); } }
The brand picker radio group is created in Listing 5, based on the returned brands while making the sales chart above.
Listing 5. Creating the brand picker
// Create the brand picker function makeBrandPicker(brands) { var picker = dojo.byId("brand_picker"); var pickerNode = dojo.byId("picker_node"); if (pickerNode == undefined) { pickerNode = document.createElement("div"); dojo.attr(pickerNode, "id", "picker_node"); for (var i=0;i<brands.length;i++) { var option; // Different code for IE and FF respectively if (dojo.isIE) { option = document.createElement("<input name='auto_brand'>"); } else { option = document.createElement("input"); dojo.attr(option, "name", "auto_brand"); } // Create the radio option dojo.attr(option, "type", "radio"); dojo.attr(option, "id", brands[i].text); dojo.attr(option, "value", brands[i].text); dojo.attr(option, "dojoType", "dijit.form.RadioButton"); // connect onclick to the picker handler dojo.connect(option, "onclick", brandPickerHandler); pickerNode.appendChild(option); // Create the label var lbl = document.createElement("label"); dojo.attr(lbl, "for", brands[i].text); var txt = document.createTextNode(brands[i].text); lbl.appendChild(txt); pickerNode.appendChild(lbl) var nl = document.createElement("br"); pickerNode.appendChild(nl); } picker.appendChild(pickerNode); } else { // if the picker node exists, just refresh the selection brandPickerHandler(); } }
There are only general HTML DOM (Document Object Model) operations in the makeBrandPicker function. The only thing to be aware of is that the brandPickerHandler function is registered as the onclick handler using dojo.connect; it will locate the user-selected brand and trigger the creation or update of the distribution chart, as shown in Listing 6.
Listing 6. Processing brand selection event
// Brand picker event handler function brandPickerHandler() { // Find out the selected brand var pickerNode = dojo.byId("picker_node"); var children = pickerNode.childNodes; var selected_brand; for (var i=0;i<children.length;i++) { if (children[i].checked != undefined && children[i].checked == true) { selected_brand = children[i].value; // Retrieve the updated distribution data and refresh the distribution // chart based on the selected brand getDistributionData(selected_brand); break; } } }
- The loading of the distribution data and the creation of the distribution chart are similar to those of the sales data and chart, except that the plot type Pie is used, as shown in Listing 7.
Listing 7. Loading the distribution data and creating the distribution chart
// Retrieve the distribution data via JSON RPC invocation and // make the distribution chart. function getDistributionData(brand) { // Call the json service to retrieve the distribution data var json_svc = new dojo.rpc.JsonService( "/DynamicCharts/RPCAdapter/jsonrpc/ChartDataService/getDistributionByArea"); json_svc.getDistributionByArea(brand).addCallback(makeDistritubtionChart); } // Create the distribution chart function makeDistritubtionChart(dist_data) { // Parse the distribution data var dist_percentage = new Array(); for (var i=0;i<dist_data.length;i++) { var data = dist_data[i] dist_percentage[i] = { y:data.percent,text:data.region+"("+Math.round(data.percent)+"%)", color:data.color}; } if (dist_chart == undefined) { // Make the distribution chart for the first request of a given user dist_chart = new dojox.charting.Chart2D("distribution_by_area"); dist_chart.setTheme(dojox.charting.themes.PlotKit.orange); dist_chart.addPlot("default", {type:"Pie",font:"normal normal bold 6pt sans-serif",fontColor:"white"}); dist_chart.addSeries("Area Distribution", dist_percentage); dist_chart.render(); } else { // Update the distribution chart dist_chart.updateSeries("Area Distribution", dist_percentage); dist_chart.render(); } }
- The last part of the JavaScript implementation is to define the init function that will be called right after the page loads. This function takes care of global initialization, such as initializing the Web messaging servlet, registering the Web messaging handler, and subscribing to the /charttopic topic, as well as calling other functions to load the sales data. (More on Web messaging in the next sections.)
Listing 8. Initializing the Web messaging handler
// Web Messaging handler.
function webmsgHandler(msg) {
    if (msg.data == "UPD") {
        // Get the updated sale data from the backend
        getSaleData();
    }
}

// Perform initialization when loading the page.
function init() {
    // Use the "tundra" style
    dojo.addClass(dojo.body(), "tundra");
    // Get the sale data from the backend
    getSaleData();
    // Initialize the web messaging client
    dojox.cometd.init("webmsgServlet");
    dojox.cometd.subscribe("/charttopic", window, "webmsgHandler");
}
Expose ChartService to JavaScript clients via the RPC adapter
The RPC adapter (Web remoting) provides the ability for JavaScript or client-side code to directly invoke server-side logic via a JSON RPC call. That means that POJO methods can be easily invoked by Ajax applications without restructuring the existing implementations for lightweight clients.
Listing 9 shows the Java bean ChartData that will provide the chart data service exposed through the RPC adapter. Two methods are available in this Java class:
- getCarSale() for retrieving the sales data for all automobile brands.
- getDistributionByArea(...) for retrieving the distribution data for the selected brand through the chart data access component ChartDataAccessor.
Listing 9. Implementing the chart service
public class ChartData{ private static final Logger logger = Logger.getLogger("dataservice.ChartData"); /** * Gets the sale data of all auto brands * @return sale data for all brands */ public CarSale[] getCarSale() { // Retrieve the sale data with the chart DAO. logger.log(Level.INFO, "Retrieving car sale data for all brands."); return ChartDataAccessor.getInstance().loadSaleData(); } /** * Gets the distribution data by area for a given auto brand. * @param brand * @return distribution data for the given brand */ public AreaDistribution[] getDistributionByArea(String brand) { // Retrieve the distribution data with the chart DAO. logger.log(Level.INFO, "Retrieving distribution data for " + brand + "."); return ChartDataAccessor.getInstance().loadDistributionDataByBrand(brand); } }
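The CarSale and AreaDistribution beans returned by these methods are not shown in the article. Purely for orientation, here is a minimal sketch of what CarSale might look like, inferred from the fields the JavaScript callback reads (brand and quantity); the actual class in the sample download may differ:

package dataservice;

// Hypothetical sketch of the CarSale bean (not the article's actual source).
// The RPC adapter serializes the getters to JSON, which is why the client
// reads sale_data[i].brand and sale_data[i].quantity in Listing 3.
public class CarSale {
    private String brand;
    private int quantity;

    public CarSale(String brand, int quantity) {
        this.brand = brand;
        this.quantity = quantity;
    }

    public String getBrand() { return brand; }
    public int getQuantity() { return quantity; }
}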
To publish these methods to Ajax clients, you need to create the RpcAdapterConfig.xml configuration file in the WebContent/WEB-INF folder in the project:
- If using Rational Application Developer V7.5, navigate to Project => Services => Expose RPC Adapter Service and you won’t need to manually create the configuration file. Set <default-format /> to json because you want the returned data in JSON format so it can be processed by JavaScript directly and quickly.
- Set the service name and implementation to ChartDataService and dataservice.ChartData respectively, and expose the two methods getCarSale() and getDistributionByArea() in <methods />.
In addition to defining the exposed methods in RpcAdapterConfig.xml, there is another way to expose the RPC adapter service: have the ChartData class implement the com.ibm.websphere.rpcadapter.SelfBeanInfo interface and provide the information about exposed methods in the getBeanDescriptorInfo() method.
Listing 10. Publishing the chart service in RpcAdapterConfig.xml
<?xml version="1.0" encoding="UTF-8"?> <rpcAdapter xmlns="" xmlns: <default-format>json</default-format> <services> <pojo> <name>ChartDataService</name> <implementation>dataservice.ChartData</implementation> <description>The facade for the chart data service.</description> <methods> <method> <name>getCarSale</name> <description>Gets all the car sale data.</description> <http-method>GET</http-method> </method> <method> <name>getDistributionByArea</name> <description>Gets the distribution data.</description> <http-method>GET</http-method> <parameters> <parameter> <name>brand</name> <description>a specific brand</description> </parameter> </parameters> </method> </methods> </pojo> </services> </rpcAdapter>
- The last step for publishing the chart data service is to configure the web.xml file so that the com.ibm.websphere.rpcadapter.RPCAdapter servlet is exposed under this Web address:
http://<host>:<port>/DynamicCharts/RPCAdapter/*
To do this, add the servlet configuration in Listing 11 to the web.xml file.
Listing 11. Configuring RPC adapter Servlet in web.xml
<servlet> <display-name>RPCAdapter</display-name> <servlet-name>RPCAdapter</servlet-name> <servlet-class>com.ibm.websphere.rpcadapter.RPCAdapter</servlet-class> </servlet> <servlet-mapping> <servlet-name>RPCAdapter</servlet-name> <url-pattern>/RPCAdapter</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>RPCAdapter</servlet-name> <url-pattern>/RPCAdapter/*</url-pattern> </servlet-mapping>
- You can now test the chart service via RPC adapter. Open these URLs in your Web browser and download the results:
http://<host>:<port>/DynamicCharts/RPCAdapter/httprpc/ChartDataService/getCarSale http://<host>:<port>/DynamicCharts/RPCAdapter/httprpc/ChartDataService/ getDistributionByArea?brand=BMW
Publish data updates to Web browsers with Web messaging
The Web messaging service connects a browser to the WebSphere Application Server SIBus for server-side event push via a publish/subscribe implementation, making it easy for you to build Web applications with real-time data updates, such as stock price quotes, auction bids, and automatic news updates.
Client/server communication is achieved through an HTTP-based message routing protocol called the Bayeux protocol. Cometd is a Dojo Foundation project that provides implementations of the Bayeux protocol in JavaScript and other languages. The code in Listing 8 uses Dojo Cometd to initialize the Web messaging servlet, subscribe to the /charttopic topic, and register the Web message handler at the client side. There is little code that needs to be written to enable the Web messaging service at the server side, apart from some configuration of Web messaging and the SIBus. (The deployment instructions in the next section explain how to set the webmsgenabled parameter and create the chartbus SIBus on the application server.)
There is also a Web messaging configuration file, webmsg.json, in the WebContent/WEB-INF folder of the DynamicCharts project (Listing 12). In this file, busName is set to chartbus. For the purpose of this example, clientCanPublish is set to false because server-side publishing is sufficient here.
Listing 12. Configuring the Web messaging service in webmsg.json
{
    "WebMsgServlet": {
        "busName": "chartbus",
        "destination": "Default.Topic.Space",
        "clientCanPublish": false,
        "longPollTimeout": 30
    }
}
Let’s take a close look at the implementation of the chart data updater and how to publish a server event to clients with the JMS topic connection factory.
The ChartDataUpdater class implements the TimerListener interface from CommonJ Timer (see Resources). When the timer expires, the timerExpired method of the ChartDataUpdater object is run to simulate chart data updates by other solutions or services and then publish a message to Web clients. To assist in publishing to Web messaging clients, a publishing API is provided in the Web messaging application utility library. Listing 13 demonstrates usage of the publishing API; that is, publishing the UPD message through the Bayeux channel /charttopic, encapsulated in a BayeuxJmsTextMsg.
Listing 13. Implementing the timerExpired interface for the chart data updater
/** * Perform the scheduled task at the given interval, implementing * the interface of TimerListener. */ public void timerExpired(Timer timer) { logger.log(Level.INFO, "Entering timer listener."); try { // Update the data logger.log(Level.INFO, "Updating chart data."); ChartDataAccessor cda = ChartDataAccessor.getInstance(); cda.storeSaleData(simulateSaleData()); cda.storeDistributionData(simulateDistributionData()); // Notify the clients of the chart data updates logger.log(Level.INFO, "Notifying Web browser clients of data updates"); publisher.publish(new BayeuxJmsTextMsg("/charttopic", "UPD")); } catch (PublisherException e) { e.printStackTrace(); logger.log(Level.SEVERE, e.getMessage()); } logger.log(Level.INFO, "Exiting timer listener."); }
The start, schedule, and stop of the chart data updater are done by calling TimerManager of CommonJ Timer (Listing 14).
Listing 14. Implementing the start/stop methods of the chart data updater
/** * Starts the chart data updater. * */ public void startUpdater() { try { if (!timerRunning) { InitialContext ctx = new InitialContext(); tmgr = (TimerManager)ctx.lookup("java:comp/env/tm/default"); tmgr.schedule(this, 5, UPDATE_INTERVAL); timerRunning = true; logger.log(Level.INFO, "The chart data updater is started."); } } catch (IllegalArgumentException e) { e.printStackTrace(); logger.log(Level.SEVERE, e.getMessage()); } catch (IllegalStateException e) { e.printStackTrace(); logger.log(Level.SEVERE, e.getMessage()); } catch (NamingException e) { e.printStackTrace(); logger.log(Level.SEVERE, e.getMessage()); } } /** * Stops the chart data updater. * */ public void stopUpdater() { if (timerRunning) { if (tmgr != null) { tmgr.stop(); logger.log(Level.INFO, "The chart data updater is stopped."); } } }
Additionally, you need to get the Publisher instance, initialize ChartDataUpdater with the publisher, and start ChartDataUpdater in the init method of the startup servlet. Furthermore, you need to shut down ChartDataUpdater in the servlet context listener when the servlet context is destroyed (Listing 15).
Listing 15. Implementing lifecycle management of the chart data updater
/* (non-Javadoc) * @see javax.servlet.GenericServlet#init() */ public void init() throws ServletException { // Call the init() of the super class super.init(); // Get and initialize the instance of ChartDataUpdater (pass in the publisher) ServletContext servletContext = getServletConfig().getServletContext(); Publisher publisher = (Publisher) servletContext.getAttribute( JmsPublisherServlet.PUBLISHER_SERVLET_CONTEXT_KEY); ChartDataUpdater cdu = new ChartDataUpdater(publisher); cdu.startUpdater(); // Keep the chart data updater, and clean it up when the context is destroyed servletContext.setAttribute(ChartDataUpdater.UPDATER_KEY, cdu); } /* (non-Java-doc) * @see javax.servlet.ServletContextListener#contextDestroyed(ServletContextEvent arg0) */ public void contextDestroyed(ServletContextEvent arg0) { // Stop the timer when context is destroyed ChartDataUpdater cdu = (ChartDataUpdater) arg0.getServletContext() .getAttribute(ChartDataUpdater.UPDATER_KEY); if (cdu != null) { cdu.stopUpdater(); } }
Finally, put in the servlet configurations for JMS Publisher and Web messaging, and the resource references for CommonJ Timer and the JMS Topic Connection Factory (Listing 16).
Listing 16. Configuring Web messaging in web.xml
<servlet> <description></description> <display-name>Publisher</display-name> <servlet-name>Publisher</servlet-name> <servlet-class> com.ibm.websphere.webmsg.publisher.jndijms.JmsPublisherServlet </servlet-class> <init-param> <description></description> <param-name>CONNECTION_FACTORY_JNDI_NAME</param-name> <param-value>java:comp/env/jms/ChartPublish</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet> <description/> <display-name>WebMsgServlet</display-name> <servlet-name>WebMsgServlet</servlet-name> <servlet-class>com.ibm.websphere.webmsg.servlet.WebMsgServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>WebMsgServlet</servlet-name> <url-pattern>/webmsgServlet</url-pattern> </servlet-mapping> <resource-ref <description></description> <res-ref-name>tm/default</res-ref-name> <res-type>commonj.timers.TimerManager</res-type> <res-auth>Container</res-auth> <res-sharing-scope>Unshareable</res-sharing-scope> </resource-ref> <resource-ref <description></description> <res-ref-name>jms/ChartPublish</res-ref-name> <res-type>javax.jms.TopicConnectionFactory</res-type> <res-auth>Container</res-auth> <res-sharing-scope>Shareable</res-sharing-scope> </resource-ref>
You have completed the development of the DynamicCharts sample. The next section explains how you can export the entire WAR file, and how to deploy and run this example.
Download and deploy the example
The DynamicCharts WAR file, including the source code, is provided with this article for download. These files have been verified on both Internet Explorer and Firefox with WebSphere Application Server V6.1.0.13 with PK56881. You must be sure to import all the dependent JAR files and the Ajax runtime prior to deployment.
To install the DynamicCharts example:
- Verify that the WebSphere Application Server Feature Pack for Web 2.0 is installed and operating properly.
- Enable the Web messaging service:
- Log onto the WebSphere Application Server administrative console and navigate to Servers => Application servers. Select the server to which DynamicChartsEAR.ear will be deployed.
- Expand Web Container Settings, select Web container transport chains, and then select the WCInBoundDefault transport chain.
- Select Web container inbound channel and then select Custom Properties.
- Click New and enter webmsgenabled for the name property and true for the value.
- Click Apply and then Save to save the repository information.
- Restart the application server.
- Create and configure the chartbus SIBus:
- Log on to the admin console and navigate to Service integration => Buses.
- Click New and enter chartbus for the name, and accept all the remaining defaults.
- Click Next and then Finish.
- Click Bus members under Topology on the chartbus detail page.
- Click Add and then select the server where you want DynamicChartsEAR.ear installed. Click Next.
- Accept the defaults and click Next, then Next again, and then Finish.
- Save the changes to repository.
- Restart the application server.
- Create the topic connection factory for DynamicCharts:
- Log on to the admin console and navigate to Resources => JMS => Topic connection factories.
- Select a server level scope.
- Click New. Select Default messaging provider and click OK.
- Enter DynamicCharts for name, jms/ChartPublish for JNDI name, and chartbus for Bus name, and keep all the remaining defaults.
- Click Apply and then click Save to save the repository information.
- Install DynamicChartsEAR.ear:
- Log on to the admin console and navigate to Applications => Install New Application.
- Browse your file system and select DynamicChartsEAR.ear and click Next.
- Accept the defaults and click Next, then Next again, and then Finish.
- Click Save to save the master configuration.
- Start DynamicChartsEAR.
- Launch this URL:
http://<host>:<port>/DynamicCharts/dynamic-charts.html.
Conclusion
The WebSphere Application Server Feature Pack for Web 2.0 provides a full solution for the most common requirements of building Ajax-based applications. This article explained how to create dynamic charts with the Dojo Toolkit, how you can reuse an existing service with the RPC adapter, and how you can integrate other applications and publish data changes to clients with Web messaging and the SIBus. The considerations and tips presented here will help you get a better understanding of the Web 2.0 feature pack so you can quickly and successfully build or enhance your own Ajax-based applications.
Download
Resources
Learn
- A look at the WebSphere Application Server Feature Pack for Web 2.0
- WebSphere Application Server documentation for Feature Pack for Web 2.0
- Installing Feature Pack for Web 2.0 on distributed operating systems
- The Book of Dojo
- Service Data Objects, WorkManager, and Timers: IBM and BEA Joint Specifications Overview
- IBM developerWorks WebSphere
Get products and technologies
- Download IBM WebSphere Application Server Feature Pack for Web 2.0.
http://www.ibm.com/developerworks/websphere/library/techarticles/0909_chen/0909_chen.html?ca=dgr-jw22WS-DojoChartsdth-W&S_TACT=105AGY83&S_CMP=grjw22
27 July 2011 07:13 [Source: ICIS news]
By Nurluqman Suratman
SINGAPORE
The fire, the fifth such incident at the complex this year, broke out at a section of a hydrogen pipeline in the vicinity, sources said.
“There was a small fire, but it did not damage any plants,” a spokesperson from Formosa Petrochemical Corp (FPCC) said, adding that the firm’s 1.03m tonne/year No 2 and 1.2m tonne/year No 3 crackers at the complex are running at full capacity.
However, Formosa Plastics Corp (FPC) had to shut its polyethylene (PE) and ethylene vinyl acetate (EVA) units at the complex as a cautionary measure, an FPC source said.
“Plant operators are very careful nowadays because there’ve been so many fires over the past months,” the FPC source said.
The affected facilities comprise a 264,000 tonne/year linear low density PE (LLDPE) plant, a 350,000 tonne/year high density PE (HDPE) unit and a 240,000 tonne/year low density PE (LDPE)/EVA swing plant, the source added.
“We have no idea how long the plant will remain shut because the situation is unclear,” the source said.
China-based traders said there may be some impact on the EVA market if the outage at FPC’s EVA plant is prolonged beyond a week.
“A week-long outage should not have any impact on the market because demand for EVA from the downstream footwear and hot melt adhesives industries lacks force,” the source said.
Among other units at the complex, production at Formosa BP Chemicals Corp’s (FBPC) 300,000 tonne/year acetic acid plant at Mailiao was unaffected by the fire and is operating at full capacity, a company official said.
Formosa BP Chemicals (FBPC) is an equally owned joint venture between BP and Formosa Chemicals and Fibre Corporation.
Nan Ya Plastics’ four monoethylene glycol (MEG) units in Mailiao, which have a combined capacity of 1.8m tonnes/year, are also running "normally" at 85-90% following the fire, sources said.
“There has been no impact at all on our factories [from the fire], but we still need to check with FPCC if their plants will be running normally,” said David Tsou, a spokesperson at the investor relations department of Nan Ya Plastics.
FPC's ethylene dichloride (EDC), vinyl chloride monomer (VCM), polyvinyl chloride (PVC) and caustic soda plants are also unaffected by the fire and the company has no plans to shut any of the units, according to a company source.
The company’s 98,000 tonne/year methyl methacrylate (MMA) acetone cyanohydrin-based unit in Mailiao is also running at full tilt, a company source said.
FPC shut its 100,000 tonne/year ECH unit at the site immediately after the fire, but has scheduled to restart it on 27 July, according to a company source.
A pipeline fire at the complex on 12 May had forced FPCC to shut its 700,000 tonne/year No 1 cracker and downstream 109,000 tonne/year butadiene (BD) extraction unit for inspection, while the local government in Yunlin county ordered Nan Ya Plastics to shut five units at its nearby Haifung factory for safety review.
While Nan Ya Plastics has gained approval to restart its 360,000 tonne/year No 3 and 720,000 tonne/year No 4 MEG plants at Haifung, three other units remain shut pending approval from the local government, according to Tsou.
Formosa Chemicals & Fibre Corp (FCFC) was also ordered to shut its No 1 aromatics unit, which can produce 150,000 tonnes/year of benzene, 100,000 tonnes/year of isomer-grade mixed xylenes and 270,000 tonnes/year of paraxylene (PX), following the blaze on 12 May.
Earlier this week, Yunlin county officials were expected to give FCFC permission to restart the No 1 unit soon, but the 26 July fire may potentially delay the approval process, sources said.
The company’s No 2 and No 3 aromatics units at the site are unaffected by the 26 July fire.
FPCC was also originally scheduled to restart its No 1 cracker in Mailiao this week, but the incident may potentially derail the company’s restart plans, sources said.
FPCC had earlier said it planned to restart the No 1 cracker before the turnaround at its No 3 cracker to prevent feedstock shortage for its derivative facilities.
The No 3 cracker is scheduled to be shut for a 40-45 day turnaround in the middle of August, but it is not clear if this will be postponed, sources said.
Traders and end-users said it is still too early to say whether there will be any impact on BD pricing because details on the impact of the fire are still unclear.
“Maybe Formosa may delay or cancel some BD cargoes and BD prices may go up, but it is too early to say,” a Japanese trader said.
“Even if Formosa were to cancel or delay their BD shipments, there is a lot of supply from China and we don’t see any serious shortage or impact on the BD market,” an end-user said.
Asia BD prices fell to $4,100-4,150/tonne (€2,829-2,864/tonne) CFR (cost & freight) NE (northeast) Asia on 22 July, down by $150/tonne from an all-time high of $4,250-4,300/tonne CFR NE Asia on 15 July.
Additional reporting by Peh Soo Hwee, Chow Bee Lin, Feliana Widjaja, Loh Bohan, Mahua Chakravarty, Helen Lee, Helen Yan, Gabriel Yip, Judith Wang and Junie Lin
($1 = €0
http://www.icis.com/Articles/2011/07/27/9480137/taiwans-formosa-restart-plans-may-get-delayed-by-fire.html
Ruby:Tutorial
Getting Ruby
For Windows or Mac, you will want to head over to the official Download page, grab the appropriate installer, and go for it; on OS X you can alternatively install it via DarwinPorts.
Linux
(Yeah, it gets its own section, it sucks, blah blah blah.)
Easy version: Install it from the packages supplied by the distributor.
Full version:
- Many distributions should come with packages that you can easily install with the proper package management tool.
- Debian, Ubuntu, or other debian-based distributions:
sudo apt-get install ruby irb rdoc
- If you want to install gems, you may need to install separate packages such as rubygems, Ruby development packages, or other header packages for native extensions (database interfaces, etc.)
- Otherwise see the Ruby download section for instructions
How to Run Ruby
There are several methods to run ruby code. The most straightforward is, of course, to write a script file, and then just run
ruby myscript.rb some optional arguments
On *nix, you can also make the script file executable, and use the hash-bang notation, so a script might look like this:
#!/usr/bin/ruby
puts 'This could be your code!'
For quick experimentation, there is also an interactive shell-type interface, called irb (you may need to install it separately). An example session would look like this:
$ irb
irb(main):001:0> puts 'Hello, world!'
Hello, world!
=> nil
irb(main):002:0>
Basic Concepts
Hello World
puts "Hello World"
There is not much more to be said about it, so let us try a more involved example, showing some actual features of the language.
#!/usr/bin/ruby
# You guess what it does.
def bottles(n)
  if n>0 then "#{n} bottle#{n>1 ? 's' : ''} of beer" else "no more bottles of beer" end
end
number_of_bottles = 99
number_of_bottles = ARGV[0].to_i if ARGV.size > 0
number_of_bottles.downto(1) do |n|
  puts "#{bottles(n)} on the wall, #{bottles(n)};"
  puts "  take one down, pass it around, #{bottles(n-1)} on the wall."
end
This small snippet demonstrates several features:
- Function declarations, and return values. The last statement called in a function defines that function's return value.
- String interpolation,
- Command-line arguments,
- Iterators
Classes
The following example demonstrates a Person class:
class Person
attr_accessor :name, :age, :height, :weight
  def initialize(name, age, height, weight)
    @name = name
    @age = age
    @height = height
    @weight = weight
  end
  def information
    print "Name: #{name}\nAge: #{age}\nHeight: #{height}\nWeight: #{weight}\n"
  end
end
One could use the class like so:
smith = Person.new("Mr Smith", 20, 5.11, 13.5)
smith.information
Would output:
Name: Mr Smith
Age: 20
Height: 5.11
Weight: 13.5
Setters/Getters (or: Love the Assignment)
Consider again the above example of a class. Now, what if we wanted to have a way to store the weight in kg, but provide setters/getters in US pounds? Just add the following methods to the class:
def weight_in_lbs
@weight / 0.4536
end
def weight_in_lbs=(w)
@weight = w * 0.4536
end
Now you have created transparent access to Person#weight, doing unit conversion on the fly by accessing and assigning Person#weight_in_lbs as you would any "normal" attribute. (Alternatively, you could of course extend the numeric classes to provide #kg_to_lbs and other necessary methods doing the conversion for you...)
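As a rough sketch of that alternative (the method names #kg_to_lbs and #lbs_to_kg are made up for illustration and are not part of the tutorial's class), you could reopen Numeric and put the conversions there instead:

class Numeric
  def kg_to_lbs
    self / 0.4536
  end

  def lbs_to_kg
    self * 0.4536
  end
end

puts 75.kg_to_lbs    # => 165.34...
puts 150.lbs_to_kg   # => 68.04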
http://content.gpwiki.org/index.php/Ruby:Tutorial
30 November 2011 20:35 [Source: ICIS news]
HOUSTON (ICIS)--US propylene inventories rose by 6% in the last week of November, extending their latest uptrend into the eleventh week, Energy Information Administration (EIA) data showed on Wednesday.
The increase put stockpiles at 4.359m bbl, which is the highest figure on the EIA series since 4.374m bbl during the week that ended on 3 February 2006.
EIA figures refer to non-fuel refinery-sourced propylene.
The increase last week came despite a drop in US refinery operating rates, which fell to 84.6% of capacity against 85.5% a week earlier, EIA data showed.
Refinery-grade propylene (RGP) was offered on Wednesday at 44 cents/lb ($970/tonne, €728/tonne) against a 40.50 cent/lb bid, but no deals were heard.
RGP was last heard traded at 44.00-44.50 cents/lb.
http://www.icis.com/Articles/2011/11/30/9512979/us-propylene-inventories-up-6-11th-weekly-increase.html
> [larry]
> Though you do see 24*60*60 pretty often.
No argument; "doesn't buy much" isn't "doesn't buy anything".
The saving grace in Python is that typical Python users tend to package
functions in modules more than typical Perl users tend to modularize
functions in packages <wink>, and so in Python you'll _usually_ see
something like
import time
_SECONDS_PER_DAY = 24*60*60 # a module-level binding
_SECONDS_PER_WEEK = _SECONDS_PER_DAY * 7 # ditto
def full_days_in_epoch(): # a function supplied by the module
return int(time.time() / _SECONDS_PER_DAY)
The constant expressions aren't optimized away, but module-level code is
executed only once per job (when the module is first imported), so the
constant expressions here are evaluated only once per job.
It's true that a user might pay a significant price for writing
return int(time.time()/(24*60*60))
today; seems rare in Python, though.
not-to-say-that-all-possible-optimizations-shouldn't-be-done-at-
once<wink>-ly y'rs - tim
Tim Peters [email protected]
not speaking for Kendall Square Research Corp
http://www.python.org/search/hypermail/python-1994q2/0279.html
16 August 2007 14:53 [Source: ICIS news]
LONDON (ICIS news)--Shares in coatings major Akzo Nobel fell 6% on Thursday amid market fears that its partner in the proposed ICI acquisition, Henkel, would have trouble financing the deal.
German adhesives producer Henkel had agreed to pay £2.7bn ($5.6bn/€4.1bn) in cash to acquire ICI’s adhesives and electronic materials businesses as part of the proposed £8bn takeover from Netherlands-based Akzo Nobel.
Henkel said it was still considering all options to fund the deal, including a combination of equity and debt and/or the divestment of non-core assets.
“We are absolutely sure we are able to finance the acquisition,” said a Henkel spokesman.
“We are looking at all areas before determining the financial structure of the transaction,” he added.
The deal was expected to close in the first half of 2008.
At 15:18 local time
The European chemicals sector suffered a general downturn as markets continued to feel the effects of continued turmoil
http://www.icis.com/Articles/2007/08/16/9053432/akzo-nobel-shares-slide-on-ici-financing-worries.html
30 January 2012 13:16 [Source: ICIS news]
LONDON (ICIS)--Indorama Ventures expects to start up its joint venture purified terephthalic acid (PTA), polyethylene terephthalate (PET) and fibres project in
The size of the $700m (€532m) project is still under negotiation but it is expected to have the capacity to produce approximately 1m tonnes/year of PTA and 500,000 tonnes/year of PET.
It will be fully integrated with a third-party paraxylene (PX) producer, and will also produce polyester staple fibre (PSF).
Aloke Lohia, Indorama Ventures’ founder and CEO, said the plant will supply the fast-growing Indian market.
It will be a joint venture with polyester fibres company Indo Rama Synthetics (India), which owns an approximately 2% stake in Indorama Ventures and is led by Aloke Lohia’s brother, OP Lohia.
“We are working on the ownership structure,” Aloke Lohia said. The location of the project, and precise capacity details, will be revealed when negotiations have been
http://www.icis.com/Articles/2012/01/30/9527777/indorama-ventures-to-start-up-india-ptapet-joint-venture-in-2015.html
Font Handling for Visual Basic 6.0 Users
This topic compares font-handling techniques in Visual Basic 6.0 with their equivalents in Visual Basic 2005.
Conceptual Differences
Fonts in Visual Basic 6.0 are handled in two different ways: as font properties of forms and controls, or as a stdFont object.
In Visual Basic 2005, there is a single Font object: System.Drawing.Font. The Font property of a form or control takes a Font object as an argument.
Setting Font Properties
In Visual Basic 6.0, font properties can be set at run time, either by assigning a stdFont object or by setting the properties directly on the control; the two methods can be interchanged.
In Visual Basic 2005, the Font property of a control is read-only at run time—you cannot set the properties directly. You must instantiate a new Font object each time you want to set a property.
Font Inheritance
In Visual Basic 6.0, font properties have to be set individually for each control or form; using a stdFont object simplifies the process but still requires code.
In Visual Basic 2005, font properties are automatically inherited from their parent unless they are explicitly set for the child object. For example, if you have two label controls on a form and change the font property of the form to Arial, the label control's font also changes to Arial. If you subsequently change the font of one label to Times Roman, further changes to the form's font would not override the label's font.
Font Compatibility
Visual Basic 6.0 supports raster fonts for backward compatibility; Visual Basic 2005 supports only TrueType and OpenType fonts.
Enumerating Fonts
In Visual Basic 6.0, you can use the Screen.Fonts collection along with the Screen.FontCount property to enumerate the available screen fonts.
In Visual Basic 2005, the Screen object no longer exists; in order to enumerate available fonts on the system, you should use the System.Drawing.FontFamily class.
Code Changes for Fonts
The following code examples illustrate the differences in coding techniques between Visual Basic 6.0 and Visual Basic 2005.
Code Changes for Setting Font Properties
The following example demonstrates setting font properties at run time. In Visual Basic 6.0, you can set properties directly on a control; in Visual Basic 2005, you must create a new Font object and assign it to the control each time you need to set a property.
' Visual Basic 6.0
' Set font properties directly on the control.
Label1.FontBold = True
' Create a stdFont object.
Dim f As New stdFont
' Set the stdFont object to the Arial font.
f.Name = "Arial"
' Assign the stdFont to the control's font property.
Set Label1.Font = f
' You can still change properties at run time.
Label1.FontBold = True
Label1.FontItalic = True
' Visual Basic 2005
' Create a new Font object; Name and Size are required.
Dim f As New System.Drawing.Font("Arial", 10)
' Assign the font to the control.
Label1.Font = f
' To set additional properties, you must create a new Font object.
Label1.Font = New System.Drawing.Font(Label1.Font, FontStyle.Bold Or FontStyle.Italic)
Code Changes for Enumerating Fonts
The following example demonstrates filling a ListBox control with a list of the fonts installed on a computer.
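The original code listing for this example is missing here, so the following is a reconstruction based on the surrounding text rather than the official sample; the control names (List1, ListBox1) are assumptions.

' Visual Basic 6.0
' Fill a list box using the Screen.Fonts collection.
Dim i As Integer
For i = 0 To Screen.FontCount - 1
    List1.AddItem Screen.Fonts(i)
Next i

' Visual Basic 2005
' Fill a list box by enumerating the installed font families.
For Each family As System.Drawing.FontFamily In System.Drawing.FontFamily.Families
    ListBox1.Items.Add(family.Name)
Next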
Upgrade Notes
When a Visual Basic 6.0 application is upgraded to Visual Basic 2005, any font-handling code is modified to use the new Font object.
Font inheritance in Visual Basic 2005 can cause unintended changes in the appearance of your application. You should check your converted application for any code that explicitly sets a font at the form or container level and, if necessary, change the font for any child controls that should not inherit that font.
During upgrade, raster fonts are converted to the default OpenType font, Microsoft Sans Serif. Formatting such as Bold or Italic is not preserved. For more information, see Only OpenType and TrueType fonts are supported.
If your application contains code that enumerates fonts, raster fonts will not be enumerated in the upgraded application, and font families are enumerated rather than individual character-set versions.
See Also
Reference
Font Class
FontFamily.Families Property
http://msdn.microsoft.com/en-us/library/3essdeyy(v=vs.80).aspx
We don't really care what the something else is, as long as we've pulled out the series inductance L.
The H-bridge is called an H-bridge because it looks like the letter H. The DC link is called the DC link because it's supposed to be a DC voltage. The usual methodology here is to group the power switches into two half-bridges, one for each of nodes A and B, and pretend that the half-bridges act like switching multipliers.
The voltage on node A is equal to VDC for some fraction of the time DA, and is equal to 0 for the rest of the time, so we can treat node A like a voltage source VA equal to DAVDC. Similarly, node B is like a voltage source VB equal to DBVDC. And therefore the voltage across the load is just VAB=(DA - DB)VDC. From here on, proper control over the load voltage and current uses standard control systems techniques with the Laplace transform domain and Z-transform domain.
Whoa! That sidesteps a number of important issues:
We're not going to talk about the first two of these, at least not right now, but we will talk today about what happens to the load current waveform during the switching period.
First, let's have some fun creating some pulse-width modulation waveforms in Python.

def digitalplotter(t,*signals):
    '''return a plotting function that takes an axis and plots
    digital signals (or other signals in the 0-1 range)'''
    def f(ax):
        n = len(signals)
        for (i,sig) in enumerate(signals):
            ofs = (n-1-i)*1.1
            plotargs = []
            for y in sig[1:]:
                if isinstance(y,basestring):
                    plotargs += [y]
                else:
                    plotargs += [t,y+ofs]
            ax.plot(*plotargs)
        ax.set_yticks((n-1-np.arange(n))*1.1+0.55)
        ax.set_yticklabels([sig[0] for sig in signals])
        ax.set_ylim(-0.1,n*1.1)
    return f
t = np.arange(0,4,0.0005)
sigA = ('A',pwm(t,0.2,centeralign=True))
sigB = ('B',pwm(t,0.5,centeralign=False))
sigC = ('C',pwm(t,0.9,centeralign=False))
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
digitalplotter(t,
    sigA+(sawtooth(t),'k:'),
    sigB+(ramp(t),'k:'),
    sigC+(ramp(t),'k:'))(ax)
See, that wasn't so hard.
Now we need to think a bit. What voltage really appears across the load? Well, it is Va - Vb, but here we're going to use functions of time rather than DC values. For now, let's ignore the DC link voltage; just assume it's 1.0 volt, so we can look at the raw PWM signals. Also, note that our PWM period is 1.0 rather than 50 μsec or 100μsec. This is called normalizing the equations, and we'll add back these factors of DC link voltage and PWM period later.
pwmA = pwm(t,0.5,centeralign=False)
pwmB = pwm(t,0.2,centeralign=False)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
digitalplotter(t,
    ('pwmA',pwmA),
    ('pwmB',pwmB),
    ('pwmA-pwmB',pwmA-pwmB))(ax)
There, the difference between two PWM signals is just a signal that goes between either 0 and 1, or -1 and 0, or -1 and 1, depending on the timing between the signals.
How can we figure out what the current looks like, if we don't know what's in the '?' box of the inductive load?
Here's the trick: as long as the circuit's dominant electrical time constant is long compared to the PWM period, we can confine our interest to high frequency content of the load current. The electrical time constant is approximately L/R, where R is the series resistance of the load; it's exact if there are no other load impedances. If you have R = 0.2 ohm and L = 100 μH, that gives you a time constant of 500 μs, which is quite a bit longer than the usual PWM periods of 50 μs or 100 μs (corresponding to 20 kHz and 10 kHz). For typical motors, the electrical time constant is in the 200 μs - 20 ms range. Other power electronic loads (transformers, etc.) may have lower time constants.
The other part of the trick is that if the electrical time constant is not long compared to the PWM period, we are probably not switching at a high enough frequency. That's because the dynamics of the load current are so fast that we can't ignore what happens in the timescale of pulse-width modulation, and any control system will have to be more complicated.
So in many cases, the frequency content of the load current can be nicely segregated into disjoint pieces: the high-frequency content of ripple current, and the low-frequency content of whatever's in the '?' box. Stated another way:
The ripple current of the load is determined by the content of a PWM waveform at harmonics of the PWM frequency. The average-value model current of the load is determined by the content of a PWM waveform below the PWM frequency. Average-value models are commonly used to design control systems in power electronics, and they assume that the PWM frequency components of the load voltages are absent, and the voltage on node A is equal to DA(t) × VDC where DA(t) is the input to the PWM generator.
So what's the high-frequency content of a PWM waveform? It's just the PWM waveform minus its average value. We can simulate this in Python, too:

def showripple(fig,t,Va,Vb,titlestring):
    '''plot ripple current as well as phase duty cycles and load voltage'''
    axlist = []
    Iab = calcripple(t,Va-Vb)
    margin = 0.1
    ax = fig.add_subplot(3,1,1)
    digitalplotter(t,('Va',Va),('Vb',Vb))(ax)
    ax.set_ylabel('Phase duty cycles')
    axlist.append(ax)
    ax = fig.add_subplot(3,1,2)
    ax.plot(t,Va-Vb)
    ax.set_ylim(createLimits(margin,Va-Vb))
    ax.set_ylabel('Load voltage')
    axlist.append(ax)
    ax = fig.add_subplot(3,1,3)
    ax.plot(t,Iab)
    ax.set_ylim(createLimits(margin,Iab))
    ax.set_ylabel('Ripple current')
    axlist.append(ax)
    fig.suptitle(titlestring, fontsize=16)
    # annotate with peak values
    tlim = [min(t),max(t)]
    tannot0 = tlim[0] + (tlim[1]-tlim[0])*0.5
    tannot1 = tlim[0] + (tlim[1]-tlim[0])*0.6
    for y in [min(Iab),max(Iab)]:
        ax.plot(tlim,[y]*2,'k:')
        # see:
        ax.annotate('%.5f' % y, xy=(tannot0,y), xytext=(tannot1,y*0.3),
            bbox=dict(boxstyle="round", fc="0.9"),
            arrowprops=dict(arrowstyle="->",
                connectionstyle="arc,angleA=0,armA=20,angleB=%d,armB=15,rad=7"
                    % (-90 if y > 0 else 90))
            )
    return axlist

def showpwmripple(fig,t,Da,Db,centeralign=False,titlestring=''):
    return showripple(fig,t,
        pwm(t,Da,centeralign),
        pwm(t,Db,centeralign),
        titlestring='%s-aligned pwm, $D_a$=%.3f, $D_b$=%.3f'
            % ('Center' if centeralign else 'Edge', Da, Db))
fig = plt.figure(figsize=(8, 6), dpi=80)
showpwmripple(fig,t,0.4,0.2,centeralign=False);

fig = plt.figure(figsize=(8, 6), dpi=80)
showpwmripple(fig,t,0.6,0.2,centeralign=True);
If we want to know what the ripple looks like in general, and not just for a specific case, we have to do some algebra. (Luckily, if you're clumsy like me you can use Python's sympy library to help.)
The edge-aligned case is fairly easy to analyze. Let's say the PWM period is T, and let's define D = Da - Db. If D > 0, then the load voltage is Vdc for a period of time DT and 0 otherwise. If D < 0, then the load voltage is -Vdc for a period of time -DT and 0 otherwise.
This means the average load voltage is DVdc, whether D > 0 or not.
Let's just consider the D > 0 case for now.
Remember: the average or low-frequency load voltage appears across the noninductive components of the load, and the high-frequency load voltage appears across the load inductance.
This means that during the time interval DT, when the load voltage is Vdc, the voltage across the load inductance is Vdc - (average load voltage) = Vdc - DVdc = (1-D)Vdc, and the inductor current increases by \( \Delta I = \int V/L \ dt = \int_0^{DT} (1-D)V_{dc}/L \ dt = D(1-D)V_{dc}T/L \).
During the rest of the PWM period, an interval (1-D)T, when the load voltage is 0, the voltage across the load inductance is 0 - (average load voltage) = 0 - DVdc = -DVdc, and the inductor current increases by \( \Delta I = \int V/L \ dt = \int_{DT}^{T} -DV_{dc}/L \ dt = -(1-D)DV_{dc}T/L \).
Note that the increases in inductor currents have equal amplitudes and opposite signs; the net increase in inductor current over a PWM period, due to PWM voltage harmonics, is exactly 0. Changes in average inductor current are created by low-frequency components of load voltage, where the applied voltage is not equal to the load's DC voltage (whether it's I*R from resistive voltage drops, or something else from motor back-emf).
The last part of this equation, VdcT/L, appears in any ripple current calculation, so let's define a normalized ripple current IR0 = VdcT/L, in which case the peak-to-peak ripple current is just D(1-D)IR0. (Alternatively, the zero-to-peak ripple current is half of that, or D(1-D)IR0/2.) If we go through the same analysis for D<0, we'll end up with a peak-to-peak ripple current of \( |D|(1-|D|)I_{R0} \) handling both cases. (zero-to-peak ripple current = \( |D|(1-|D|)I_{R0}/2 \))
The maximum ripple current is IR0/4 peak-to-peak, when D = ±0.5. Here's what the ripple current looks like as a function of D:
D = np.arange(-1,1,0.005)
plt.plot(D,abs(D)*(1-abs(D)))
plt.xlabel('net duty cycle $D = D_a - D_b$',fontsize=16)
plt.ylabel('ripple current / $I_{R0}$',fontsize=16)
Let's double check this point of maximum ripple current, just to make sure it makes sense. With Da = 0.6 and Db = 0.1, we have D = Da-Db = 0.5 and the peak-to-peak current normalized to IR0 should be 0.25 (namely from -0.125 to +0.125):
fig = plt.figure(figsize=(8, 6), dpi=80)
showpwmripple(fig,t,0.6,0.1,centeralign=False);
That's pretty close! The error here is numerical and we can improve it by using a smaller timestep:
tfine = np.arange(0,4,0.0001)
fig = plt.figure(figsize=(8, 6), dpi=80)
showpwmripple(fig,tfine,0.6,0.1,centeralign=False);
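As one more sanity check (an editorial sketch, not from the original post), we can compute the peak-to-peak ripple by brute force at several duty cycles and compare it against the |D|(1-|D|)IR0 expression; everything here is normalized to Vdc = T = L = 1 and the function name is made up:

import numpy as np

def edge_aligned_ripple_pp(D, n=100000):
    # one normalized PWM period; load voltage is 1 for t < D, 0 otherwise
    t = np.arange(n) / n
    v = np.where(t < D, 1.0, 0.0)
    ripple = np.cumsum(v - D) / n   # integral of the high-frequency voltage
    return ripple.max() - ripple.min()

for D in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(D, edge_aligned_ripple_pp(D), D*(1 - D))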
The center-aligned case is a little more difficult to analyze, because there are 4 time intervals per PWM period rather than 2.
If you don't like yucky algebra, skip ahead to the center-aligned results. I don't usually like to put algebraic derivations in blog posts, since I hated sitting through college classes waiting for professors to finish writing long equations on the blackboard, but here I will, since the algebra's not so bad, and it'll give you a sense of how to understand the results if you want, as well as how to use sympy as a tool for assisting with this kind of task.
For Da > Db, these are the four time intervals [A]-[D]:
fig = plt.figure(figsize=(8, 6), dpi=80)
Da,Db = 0.6, 0.1
intcodes = 'ABCD'
axlist = showpwmripple(fig,t,Da,Db,centeralign=True)
for (i,tcenter) in enumerate([0.5, 1-(Da+Db)/4, 1, 1+(Da+Db)/4]):
    y = (i/2) * 1.1 + 0.55
    intcode = '[%s]' % intcodes[i]
    axlist[0].annotate(intcode,xy=(tcenter,y),
        horizontalalignment='center',
        verticalalignment='center')
    axlist[2].annotate(intcode,xy=(tcenter,0.05),
        horizontalalignment='center',
        verticalalignment='center')
Let's focus on one period and look at the results. The average load voltage is still DVDC = (Da-Db)VDC. If we go through a similar type of analysis as the edge-aligned case, let's look at the values of the normalized inductor current (factoring out a VDCT/L) at some strategically-chosen times:
def show1period(Da,Db):
    t1period = np.arange(-0.5,1.5,0.001)
    pwmA = pwm(t1period,Da,centeralign=True)
    pwmB = pwm(t1period,Db,centeralign=True)
    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)
    Ir = calcripple(t1period,pwmA-pwmB)
    ax.plot(t1period,Ir)
    ax.set_xlim(-0.1,1.1)
    ax.set_ylim(min(Ir)*1.2,max(Ir)*1.2)
    # now annotate
    (D1,D2,V) = (Db,Da,1) if Da > Db else (Da,Db,-1)
    tpts = np.array([0, D1/2, D2/2, 0.5, 1-(D2/2), 1-(D1/2), 1])
    dtpts = np.append([0],np.diff(tpts))
    vpts = np.array([0, 0, V, 0, 0, V, 0]) - (Da-Db)
    ypts = np.cumsum(vpts*dtpts)
    ax.plot(tpts,ypts,'.',markersize=8)
    for i in range(7):
        ax.annotate('$t_%d$' % i,xy=(tpts[i]+0.01,ypts[i]), fontsize=16)

show1period(0.6,0.1)
show1period(0.9,0.8)
For D = Da - Db > 0, here's algebraic equations for the ripple current values at these points t[0] - t[6]:
The algebra is starting to get a bit yucky, so let's simplify this with sympy. The algebra trick we'll use is to rewrite Da and Db in terms of their differential-mode component D and common-mode component D0:
import sympy
%load_ext sympy.interactive.ipythonprinting
D, D0 = sympy.symbols("D D_0")
Da = D/2 + D0
Db = -D/2 + D0
# now let's verify that we've defined D and D0 properly:
display('Da-Db=',Da-Db)
display('(Da+Db)/2=',(Da+Db)/2)
Da-Db=
(Da+Db)/2=
# average voltage at the load = D
I = [0,0,0,0,0,0,0]
I[1] = I[0] + Db/2*(0-D)
I[2] = I[1] + (Da-Db)/2*(1-D)
I[3] = I[2] + (1-Da)/2*(0-D)
I[4] = I[3] + (1-Da)/2*(0-D)
I[5] = I[4] + (Da-Db)/2*(1-D)
I[6] = I[5] + Db/2*(0-D)
# let's use pandas to display as a table
import pandas as pd
itable1 = pd.DataFrame([sympy.simplify(Ik) for Ik in I],
    index=['I[%d]' % d for d in range(7)])
itable1
It's always good to see 0 values when you're simplifying algebra. Also note that I[1] = -I[5] and I[2] = -I[4]. If we were to repeat this exercise for D < 0 (Da < Db), we'd get:
# average voltage at the load = D;
# load voltage alternates between 0 and -1
I = [0,0,0,0,0,0,0]
I[1] = I[0] + Da/2*(0-D)
I[2] = I[1] + (Db-Da)/2*(-1-D)
I[3] = I[2] + (1-Db)/2*(0-D)
I[4] = I[3] + (1-Db)/2*(0-D)
I[5] = I[4] + (Db-Da)/2*(-1-D)
I[6] = I[5] + Da/2*(0-D)
itable2 = pd.DataFrame([sympy.simplify(Ik) for Ik in I],
    index=['I[%d]' % d for d in range(7)])
itable2
The forms are the same, and we can generalize to this:
I[0] = I[3] = I[6] = 0
I[1] = -I[5] = D*(abs(D) - 2*D0)/4
I[2] = -I[4] = D*(2-abs(D) - 2*D0)/4
The maximum amplitude of ripple current is just the maximum amplitude of I[1] or I[2]:
I[peak] = abs(D)*max(abs(abs(D) - 2*D0), abs(2 - abs(D) - 2*D0))/4
We can plot this in a contour plot in Python:
Da_vector = np.arange(0,1,0.005)
Db_vector = np.arange(0,1,0.005)
Da, Db = np.meshgrid(Da_vector, Db_vector)
D = Da-Db
D0 = (Da+Db)/2
Ipk = abs(D)*np.maximum(abs(abs(D)-2*D0),abs(2-abs(D)-2*D0))/4
cs = plt.contour(Da,Db,Ipk,numpy.arange(0,0.125,0.0125),colors='black')
plt.clabel(cs)
plt.xlabel('Da')
plt.ylabel('Db')
plt.title('Peak ripple current $I_{pk}/(V_{dc}T/L)$',fontsize=16)
Text(0.5,1,'Peak ripple current $I_{pk}/(V_{dc}T/L)$')
It may be a bit more insightful to make a contour plot relative to the common-mode and differential-mode duty cycles:
cs = plt.contour(D,D0,Ipk,numpy.arange(0,0.125,0.0125), colors='black')
plt.clabel(cs)
plt.xlabel('D = Da - Db')
plt.xticks(np.arange(-1,1.25,0.25))
plt.yticks(np.arange(0,1.1,0.1))
plt.ylabel('common-mode duty cycle D0')
plt.title('Peak ripple current $I_{pk}/(V_{dc}T/L)$', fontsize=16)
Text(0.5,1,'Peak ripple current $I_{pk}/(V_{dc}T/L)$')
Why is this insightful? Because the minimum amplitude ripple occurs when we keep the common-mode duty cycle equal to 0.5! In this case, we pick our differential duty cycle D, and set Da = (1 + D)/2 and Db = (1 - D)/2.
Not only do we get ripple current of minimum amplitude for a given differential duty cycle D, but the irregular-looking waveform for ripple current disappears, and we're left with a straightforward sawtooth at twice the PWM frequency:
fig = plt.figure(figsize=(8, 6), dpi=80)
Da,Db = 0.25, 0.75
axlist = showpwmripple(fig,t,Da,Db,centeralign=True)

fig = plt.figure(figsize=(8, 6), dpi=80)
Da,Db = 0.85, 0.15
axlist = showpwmripple(fig,t,Da,Db,centeralign=True)
If we take that somewhat ugly equation for peak ripple current:
I[peak] = abs(D) * max(abs(abs(D) - 2*D0),abs(2 - abs(D) - 2*D0))/4
and substitute D0 = 0.5, we get:
I[peak] = abs(D) * max(abs(abs(D) - 1),abs(2 - abs(D) - 1))/4 = abs(D) * max(abs(abs(D) - 1),abs(1 - abs(D)))/4
But abs(abs(D) - 1) and abs(1 - abs(D)) are both equal to 1 - abs(D), so we have
I[peak] = |D| * (1-|D|) / 4.
This is exactly 1/2 the ripple current of the edge-aligned case! We get half the ripple at twice the frequency!
If you take away only two things from this article, and forget everything else, they should be:
- Use center-aligned PWM rather than edge-aligned PWM.
- Keep the common-mode duty cycle \( D_0 = (D_a+D_b)/2 \) at 50%, by choosing \( D_a = (1+D)/2 \) and \( D_b = (1-D)/2 \).
If you follow these two simple rules, you will double the ripple frequency content, and reduce the ripple amplitude by a factor of 2, compared to edge-aligned PWM.
There are valid reasons to deviate from these rules significantly, but they're rare. If you have a system with very high switching losses (where you may not want to keep switching all 4 switches on and off during a PWM period), it may be important to create switching waveforms that allow you to keep some of the transistors either open or closed during the entire PWM cycle. (You can do this with 3 out of the 4 switches, at the cost of higher ripple current, and diode conduction rather than transistor conduction in one of the switches.) This removes switching losses in those transistors.
A more common situation is a system with bootstrap capacitor gate drives, where the upper transistors can't be turned on at 100% duty cycle, or the bootstrap capacitor will deplete its charge. The maximum duty cycle you can reach depends on the detailed characteristics of the gate drive circuitry, but typically is in the 90-98% range. (If someone working on a gate drive design came to me and said they can only reach 90% duty cycle on the upper transistors, I'd ask why, and send them back to the drawing board to do better.) The lower transistors, on the other hand, can be kept on at 100% duty cycle. So this creates a slight asymmetry, and the best solution is to come as close as you can to the ideal case, which means to reduce the common-mode duty cycle slightly at large modulation indices. Maybe that wasn't clear, so here's a concrete example:
Suppose you have a system with an H-bridge, but it has gate drive circuitry which limits your effective duty cycle on each half-bridge between 0 and 90%. And you're working with a control systems guru, who wants to see a particular duty cycle D applied across the load. It's your job to pick half-bridge duty cycles Da and Db, so here's what you might run into:
The ideal case isn't achievable in real life; you can't get an output duty cycle across the load of more than 90%, and once you reach 80% output duty cycle, you have to move the common-mode duty downward slightly to ensure that the upper transistors aren't on more than 90% of the time. In this case, ripple current increases slightly at high load voltages.
OK, we have the zero-to-peak ripple current formula for center-aligned PWM in an H-bridge, with duty cycles centered around 50%:
$$D_a = \frac{1+D}{2}$$
$$D_b = \frac{1-D}{2}$$
$$I_{pk} = \frac{|D| (1-|D|)}{4} I_{R0}$$
where $$I_{R0} = \frac{V_{DC}T}{L}$$
but what about RMS and the harmonic content of the ripple current waveform?
Our ripple current waveform is piecewise linear. For a time period = DT/2, the ripple current rises linearly from -Ipk to +Ipk, and then for a time period = (1-D)T/2, the ripple current falls linearly from +Ipk to -Ipk.
RMS current is defined as the square root of the mean value of the squared current. Let's figure out how we can analyze this:
t2 = np.arange(0,1,0.001)
D = 0.7
t0 = (1-D)/4
t1 = (1+D)/4
y1 = abs(D)*abs(1-D)/4
Vload = (pwm(t2,(1.0+D)/2,centeralign=True)
        -pwm(t2,(1.0-D)/2,centeralign=True))
I = calcripple(t2,Vload)
plt.plot(t2,I)
plt.ylim(-0.07,0.07)
plt.plot([t0,t1],[-y1,y1],'.',markersize=8)
plt.annotate('$(t_0,y_0)$',xy=(t0+0.01,-y1),fontsize=16)
plt.annotate('$(t_0+h,y_1)$',xy=(t1+0.01,y1),fontsize=16)
Annotation(0.435,0.0525,'$(t_0+h,y_1)$')
t,t0,y0,y1,h = sympy.symbols('t t0 y0 y1 h')
m = (y1-y0)/h
y = m*(t-t0)+y0
integral_ysquared = sympy.integrate(y**2,(t,t0,t0+h))
display('integral of y^2 = ',sympy.simplify(integral_ysquared))
integral of y^2 =
What did we just do here? We defined y as a linear function of t, where \( y\vert_{t=t0} = y_0 \), and \( y\vert_{t=t0+h} = y1 \). The slope is just \( m=\frac{y_1-y_0}{h} \). The integral of y2 over this range is a very simple expression.
How does this help us with the RMS of piecewise linear waveforms? For any function \( f(t) \) defined over an interval between t = T0 and t = T1, the root-mean-square of \( f(t) \) over that interval is defined as:
$$RMS(f(t),T_0,T_1) = \sqrt{\frac{\int_{T_0}^{T_1} f(t)^2 dt}{T_1-T_0}}$$
So let's apply this to our sawtooth with amplitude Ipk.
The integral of Iripple^2 is equal to \( \frac{1}{3} * (DT/2) * ((-I_{pk})^2 + (-I_{pk})*(I_{pk}) + (I_{pk})^2) \) for the first time period, DT/2, and then \( \frac{1}{3} * ((1-D)T/2) * ((I_{pk})^2 + (I_{pk}) * (-I_{pk}) + (-I_{pk})^2) \) for the rest of the time period, (1-D) * T/2.
Then we have to take the average over the time T/2, and the square root:
Too much math for you? Let sympy do the work:
Ipk,D,T = sympy.symbols('I_{pk} D T')
def f3(y0,y1,h):
    return h*(y0*y0 + y0*y1 + y1*y1)/3

integral_squared_ripple = (f3(-Ipk,Ipk,D*T/2)
                         + f3(Ipk,-Ipk,(1-D)*T/2))
sympy.simplify(integral_squared_ripple/(T/2))
If we take the square root of this, sympy won't oblige, because it doesn't know that Ipk is non-negative. But we do, so the RMS value of the ripple current is just \( I_{pk}/\sqrt{3} \). Substitute in our equation for Ipk as a function of D, and we get:
$$I_{ripple,rms} = I_{R0}\frac{|D|(1-|D|)}{4\sqrt{3}}$$
Need to perform Fourier analysis? Then sympy can help:
def piecewiseHarmonic(flist,tlist,k,T=1,t=None):
    '''
    calculates piecewise integration of the kth harmonic of a given function

    flist is a list of N functions
    tlist is a list of N+1 points in time

    piecewiseHarmonic() calculates the sum of the integral of
    flist[i]*exp(2j*pi*k*t/T) between t=tlist[i] and tlist[i+1]
    iterating over i = 0:N-1
    '''
    S = 0
    t = sympy.symbols('t') if t is None else t
    pi = sympy.pi
    for (i,f) in enumerate(flist):
        dt_range = (t,tlist[i],tlist[i+1])
        S += 2*sympy.integrate(f*sympy.cos(2*pi*k*t/T),dt_range)/T
        S += 2j*sympy.integrate(f*sympy.sin(2*pi*k*t/T),dt_range)/T
    return sympy.simplify(S)

t=sympy.symbols('t')
def cosk(k):
    return sympy.cos(2*sympy.pi*k*t)
def sink(k):
    return sympy.sin(2*sympy.pi*k*t)
display(piecewiseHarmonic([cosk(3)],[0,1],3))
display(piecewiseHarmonic([cosk(3)],[0,1],4))
display(piecewiseHarmonic([sink(3)],[0,1],3))
display(piecewiseHarmonic([sink(3)],[0,1],4))
To analyze the harmonics of a sawtooth, we need to define the sawtooth in terms of piecewise linear functions:
def sawtooth_up(D,t):
    return (1-D)*(4*t-1)/4
def sawtooth_down(D,t):
    return (-D)*t

D = 0.7
t_down = numpy.arange(-(1-D)/2,(1-D)/2,0.001)
t_up = numpy.arange((1-D)/2,(1+D)/2,0.001)
plt.plot(t_down/2,sawtooth_down(D,t_down/2),
         t_up/2,sawtooth_up(D,t_up/2))
plt.grid('on')
D,t = sympy.symbols('D t')
for k in range(6):
    display('harmonic #%d' % k,piecewiseHarmonic(
        [sawtooth_down(D,t/2), sawtooth_up(D,t/2)],
        [-(1-D)/2,(1-D)/2,(1+D)/2],k))
harmonic #0
harmonic #1
harmonic #2
harmonic #3
harmonic #4
harmonic #5
If you try to get sympy to handle this for general integers k, it doesn't seem to figure out a simplified answer, so I just printed out the first few terms for fixed integers k.
The "i" in the answer tells us that the coefficients are pure imaginary and therefore the Fourier series is only sine terms, not cosine terms. The general form of these harmonics has a real part of 0 and an imaginary part of $$A_k = \frac{(-1)^k \sin k\pi D}{k^2\pi^2}$$
At first I thought I must have made a mistake; sawtooth waveforms should have only odd harmonics, right? But let's plot the results, and you'll see that they're right on the money; the even harmonics only disappear when D = 0.5.
def sawtoothApprox(D,kmax,printHarmonics=False):
    pi = numpy.pi
    def f(t):
        S = 0
        for k in range(1,kmax+1):
            a_k = (-1)**k / 2.0 / k / k
            A_k = sin(k*pi*D)*a_k/pi/pi
            if printHarmonics:
                print 'for k=%d, a_k=%9.6f, A_k=%9.6f' % (k,a_k,A_k)
            S += A_k*sin(2*pi*k*t)
        return S
    return f

t2 = np.arange(0,1,0.001)
D = 0.7
Vload = (pwm(t2,(1.0+D)/2,centeralign=True)
       - pwm(t2,(1.0-D)/2,centeralign=True))
I = calcripple(t2,Vload)
Iapprox = sawtoothApprox(D,6,printHarmonics=True)(2*t2)
# show exact sawtooth, approximation of 1st 6 harmonics,
# and then the approximation error
plt.plot(t2,I,t2,Iapprox,t2,(I-Iapprox),':k')
for k=1, a_k=-0.500000, A_k=-0.040985
for k=2, a_k= 0.125000, A_k=-0.012045
for k=3, a_k=-0.055556, A_k=-0.001739
for k=4, a_k= 0.031250, A_k= 0.001861
for k=5, a_k=-0.020000, A_k= 0.002026
for k=6, a_k= 0.013889, A_k= 0.000827
[Line2D(_line0), Line2D(_line1), Line2D(_line2)]
In a nutshell: the bulk of the energy in a sawtooth waveform is in the 1st and 2nd harmonic. (For center-aligned PWM, this translates into the 2nd and 4th harmonic of the PWM frequency.)
The above analysis is for PWM which creates a simple sawtooth waveform, which is the case for edge-aligned PWM, and for center-aligned PWM with the common-mode duty cycle fixed at 50%.
For center-aligned PWM with common-mode duty cycle other than 50%, the ripple current waveform is "uglier". Remember the waveform shown below?
show1period(0.6,0.1)
We used sympy and some "human-directed" algebra to figure out the currents at each of the time instants \( t_0 \) - \( t_6 \). Here it is again in a more concise form:
$$I_L(t_0) = I_L(t_3) = I_L(t_6) = 0$$
$$I_L(t_1) = -I_L(t_5) = D \times (|D| - 2D_0)/4$$
$$I_L(t_2) = -I_L(t_4) = D \times (2-|D| - 2D_0)/4$$
$$D = D_a - D_b, D_0 = (D_a + D_b) / 2 $$
$$I_{pk} = \max(|I_L(t_1)|,|I_L(t_2)|)$$
Calculating RMS of this waveform is a little more difficult than in the other cases, but it's not horrible. There are seven points here, but \( t_0, t_3, t_6 \) are all in the middle of line segments so we're really left with four intervals of linear segments:
And that's enough information to calculate RMS current:
D,D0,D0dev,T = sympy.symbols('D D_0 D_{0dev} T')

def f3(y0,y1,h):
    return h*(y0*y0 + y0*y1 + y1*y1)/3

I1 = D*(abs(D)-2*D0)/4
I2 = D*(2-abs(D)-2*D0)/4
integral_squared_ripple = ( f3(I1,I2,abs(D)/2*T)
                          + f3(I2,-I2,(1-D0-abs(D)/2)*T)
                          + f3(-I2,-I1,abs(D)/2*T)
                          + f3(-I1,I1,(D0-abs(D)/2))*T)
mean_squared_ripple = sympy.simplify(integral_squared_ripple/T)
mean_squared_ripple
We have to play a little trick on sympy to get it to clean this up a little bit. The "ugliness" of this waveform disappears at \( D_0 = \tfrac{1}{2} \), so let's substitute \( D_{0dev} = D_0 - \tfrac{1}{2} \), simplify, and substitute back:
Int1 = sympy.Integer(1)
msr2 = sympy.simplify(mean_squared_ripple.subs(D0, D0dev + Int1/2)).subs(D0dev,D0-Int1/2)
display('mean squared ripple = ',msr2)
mean squared ripple =
Looks like sympy still needs some help, so let's finish the simplification by hand. The easiest thing here is that \( |D|^2 - 2|D| + 1 = (1-|D|)^2 \). Also, remember that we are looking for RMS ripple, which is the square root of mean squared ripple:
$$RMS(I_L) = \frac{|D|\sqrt{12(D_0-\tfrac{1}{2})^2 + (1-|D|)^2}}{4\sqrt{3}}$$
For the simplified case where \( D_0 = \tfrac{1}{2} \), the RMS equation simplifies to the same expression we obtained earlier.
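As a quick numerical cross-check of the RMS formula above (an editorial sketch, not part of the original post), we can rebuild one period of the center-aligned ripple waveform by brute force and compare its RMS against the closed form; the helper names are made up and everything is normalized to Vdc = T = L = 1:

import numpy as np

def ripple_rms_numeric(Da, Db, n=200000):
    # one normalized PWM period, center-aligned phases (high near the period boundary)
    t = (np.arange(n) + 0.5) / n
    phaseA = ((t < Da/2) | (t > 1 - Da/2)).astype(float)
    phaseB = ((t < Db/2) | (t > 1 - Db/2)).astype(float)
    v = phaseA - phaseB
    ripple = np.cumsum(v - v.mean()) / n   # integrate the high-frequency load voltage
    ripple -= ripple.mean()                # keep only the AC component
    return np.sqrt(np.mean(ripple**2))

def ripple_rms_formula(Da, Db):
    D, D0 = Da - Db, (Da + Db)/2
    return abs(D)*np.sqrt(12*(D0-0.5)**2 + (1-abs(D))**2)/(4*np.sqrt(3))

for (Da, Db) in ((0.6, 0.1), (0.25, 0.75), (0.9, 0.8)):
    print(Da, Db, ripple_rms_numeric(Da, Db), ripple_rms_formula(Da, Db))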
While we're at it, let's take a whack at simplifying the peak ripple current. It looks like sympy can't help us get there, but we can spot check my algebra at a few values. Here's what I get when I simplify things:
$$|I_L(t_1)| = \frac{|D|}{4} |1-|D| + 2(D_0 - \tfrac{1}{2})|$$ $$|I_L(t_2)| = \frac{|D|}{4} |1-|D| - 2(D_0 - \tfrac{1}{2})|$$ $$\max(|I_L(t_1)|,|I_L(t_2)|) = \frac{|D|(1-|D|)}{4} + \frac{|D| |D_0 - \tfrac{1}{2}|}{2}$$
Let's verify whether this is correct:
def check_jasons_algebra(D,D0):
    # here's what we know for certain
    I1 = D*(abs(D)-2*D0)/4
    I2 = D*(2-abs(D)-2*D0)/4
    # here's what I got with algebra by hand
    Ipk = abs(D)*(1-abs(D))/4 + abs(D)*abs(D0-0.5)/2
    print("D=%f,D0=%f,I1=%f,I2=%f" % (D,D0,I1,I2))
    print("max(abs(I1),abs(I2))=%f, Jason's calc=%f"
          % (max(abs(I1),abs(I2)), Ipk))

for (D,D0) in ( (0.5,0.5), (0.5,0.4), (0.5,0.6),
                (0.1,0.5), (0.1,0.4), (0.1,0.6),
                (-0.5,0.5), (-0.5,0.4), (-0.5,0.6) ):
    check_jasons_algebra(D,D0)

D=0.100000,D0=0.500000,I1=-0.022500,I2=0.022500
max(abs(I1),abs(I2))=0.022500, Jason's calc=0.022500
D=0.100000,D0=0.400000,I1=-0.017500,I2=0.027500
max(abs(I1),abs(I2))=0.027500, Jason's calc=0.027500
D=0.100000,D0=0.600000,I1=-0.027500,I2=0.017500
max(abs(I1),abs(I2))=0.027500, Jason's calc=0.027500
Looks good to me!
The extension of this type of analysis for 3-phase PWM is more difficult, and the result is more complex. Describing the ripple current in 3-phase PWM is something I may do at work, in the form of a Microchip application note. Watch for it someday, at.
Here's a summary of ripple current statistics for H-bridge PWM. Remember, \( D = D_a - D_b \) is the effective load duty cycle, \( D_0 = (D_a + D_b)/2 \) is the common-mode duty cycle, and \( I_{R0} = V_{DC}T/L \) is a reference current that simplifies calculation of these statistics.
Let's also define \( I_R = |D|(1-|D|)I_{R0} \), and \( I_{R2} = 2|D||D_0-\tfrac{1}{2}|I_{R0} \) since they will appear several times in the following summary (reconstructed from the formulas above):
- Edge-aligned PWM: peak-to-peak ripple \( = I_R \), zero-to-peak ripple \( = I_R/2 \), RMS ripple \( = I_R/(2\sqrt{3}) \).
- Center-aligned PWM with \( D_0 = \tfrac{1}{2} \): peak-to-peak ripple \( = I_R/2 \), zero-to-peak ripple \( = I_R/4 \), RMS ripple \( = I_R/(4\sqrt{3}) \).
- Center-aligned PWM, general \( D_0 \): zero-to-peak ripple \( = (I_R + I_{R2})/4 \), RMS ripple \( = \sqrt{I_R^2 + 3I_{R2}^2}/(4\sqrt{3}) \).
To keep the ripple current minimized, use center-aligned PWM whenever possible, and keep the common-mode duty cycle as close to \( \frac{1}{2} \) as possible, by choosing duty cycles as close as practical to \( D_a = (1+D)/2 \), \( D_b = (1-D)/2 \).
Happy switching!
© 2013 Jason M. Sachs
This post is available in an IPython Notebook.
http://www.embeddedrelated.com/showarticle/421.php
On Tue, Feb 28, 2012 at 1:53 AM, Simon Marlow <marlowsd at gmail.com> wrote:
> I don't see how we could avoid including -D, since it might really affect
> the source of the module that GHC eventually sees. We've never taken -D
> into account before, and that was incorrect. I can't explain the behaviour
> you say you saw with older GHC's. unless your CPP flags only affected the
> imports of the module.

In fact, that's what I do. I put system specific stuff or expensive stuff into a module and then do

#ifdef EXPENSIVE_FEATURE
import qualified ExpensiveFeature
#else
import qualified StubbedOutFeature as ExpensiveFeature
#endif

I think this is a pretty common strategy. I know it's common for os-specific stuff, e.g. filepath does this. Although obviously for OS stuff we're not interested in saving recompilation :)

> Well, one solution would be to take the hash of the source file after
> preprocessing. That would be accurate and would automatically take into
> account -D and -I in a robust way. It could also cause too much
> recompilation, if for example a preprocessor injected some funny comments or
> strings containing the date/time or detailed version numbers of components
> (like the gcc version).

By "take the hash of the source file" do you mean the hash of the textual contents, or the usual hash of the interface etc? I assumed it was the latter, i.e. that the normal hash was taken after preprocessing.

But suppose it's the former, I still think it's better than unconditional recompilation (which is what always including -D in the hash does, right?). Unconditionally including -D in the hash either makes it *always* compile too much--and likely drastically too much, if you have one module out of 300 that switches out depending on a compile time flag, you'll still recompile all 300 when you change the flag. And there's nothing you can really do about it if you're using --make. If you try to get around that by using a build system that knows which files it has to recompile, then you get in a situation where the files have been compiled with different flags, and now ghci can't cope since it can't switch flags while loading.

If your preprocessor does something like put the date in... well, firstly I think that's much less common than switching out module imports, since for the latter as far as I know CPP is the only way to do it, while for dates or version numbers you'd be better off with a config file anyway. And it's still correct, right? You changed your gcc version or date or whatever, if you want a module to have the build date then of course you have to rebuild the module every time---you got exactly what you asked for. Even if for some reason you have a preprocessor that nondeterministically alters comments, taking the interface hash after preprocessing would handle that.

And come to think of it, these are CPP flags not some arbitrary pgmF... can CPP even do something like insert the current date without also changing its -D flags?
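One common way such a -D flag ends up on the command line is through a Cabal flag wired to cpp-options; the fragment below is a hedged sketch with invented package, flag, and module names, and is not from the thread itself.

-- example.cabal (illustrative fragment)
flag expensive-feature
  description: Build with the expensive feature enabled
  default:     False

library
  exposed-modules: MyLib
  build-depends:   base
  if flag(expensive-feature)
    cpp-options: -DEXPENSIVE_FEATURE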
http://www.haskell.org/pipermail/glasgow-haskell-users/2012-March/022065.html
servo.writeMicroseconds(uS)
servo: a variable of type Servo
uS: the value of the parameter in microseconds (int)
#include <Servo.h>

Servo myservo;

void setup() {
  myservo.attach(9);
  myservo.writeMicroseconds(1500);  // set servo to mid-point
}

void loop() {}
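As an extra illustration (not part of the reference page), the same call can sweep a servo across the typical 1000-2000 microsecond pulse range; the pin number and delay value are assumptions.

#include <Servo.h>

Servo myservo;

void setup() {
  myservo.attach(9);               // servo signal pin (assumed)
}

void loop() {
  // sweep from one end of the typical pulse range to the other
  for (int us = 1000; us <= 2000; us += 10) {
    myservo.writeMicroseconds(us);
    delay(20);                     // roughly one servo frame period
  }
}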
http://arduino.cc/en/Reference/ServoWriteMicroseconds
20 January 2010 11:17 [Source: ICIS news]
LONDON (ICIS news)--Bioamber has started up and commissioned the first bio-based succinic acid plant in Pomacle, France, and plans to begin selling to customers immediately, the speciality chemical company said on Wednesday.
The company wanted to build market demand as it negotiates the sale of licences for large-scale plants, which would produce renewable succinic acid from wheat-derived glucose, it said.
The plant, with an initial annual capacity of 2,000 metric tons, has been producing since December 2009, according to the company.
,” Piot added.
Bioamber, a joint venture between private US-based DNP Green Technology and France-based ARD (Agro-industrie Recherches et Développements), claimed renewable succinic acid offers higher purity than petroleum-derived succinic acid, with the added benefit of consuming carbon dioxide in the production process.
Piot announced at the 4th European Bioplastics Conference in
http://www.icis.com/Articles/2010/01/20/9327224/bioamber-commissions-first-renewable-succinic-acid-plant.html
API Access in KACE1000 Version 12.0.149
Hi everyone,
Until the last few versions of KACE 1000, we used WSAPI to connect our Linux-based machines to KACE.
Now we have been told that this approach is out of date and that we need to use the "regular" API.
I think there is a fundamental problem because when we run the url:
It shows: {"error": "API disabled."}
But I can not find a way to enable the API support.
Can you help me with this?
We use the version:
Model: K1000
Hardware Model: Virtual (VMware/Hyper-V/Nutanix)
Version: 12.0.149
-> we also set up two organisations
Thanks
Alex
Answers (2)
You start with a POST request to authenticate to the SMA server.
See the API documentation for more details.
Save the x-kace-csrf token to be used in subsequent API calls.
You may want to search previously posted questions regarding the API, as many have code examples. What programming language do you plan to use with the API?
- Thanks for your help!
We used Python in the past with WSAPI, so we would like to keep Python :-)
I saw your other examples and tried it with these but still get error messages.
I created a new user in KACE for the API-usage with admin rights in /systemui/settings_users_list.php (Settings-> Administrators).
Error messages:
{
"errorCode": -1,
"errorDescription": "Invalid CSRF Token"
}
{
"errorCode": -1,
"errorDescription": "Invalid user or password"
}
The code is:
import requests
import json
from requests.sessions import Session
session = Session();
# Start of POST to authenticate to SMA server.
# The x-dell-csrf-token is saved to be used in subsequent API calls.
# Change the URL IP and credentials for your environment.
url = "
payload = json.dumps({ "userName": "...",
"organizationName": "..."
})
headers = { 'Accept': 'application/json',
'Content-Type': 'application/json',
'x-dell-api-version': '5'
}
#To avoid SSL-error messages:
session.verify = False
response = session.post(url, headers=headers, data=payload)
# The following line will display the JSON returned from the response.
print(json.dumps(response.json(), indent = 3))
csrf_token = response.headers.get('x-dell-csrf-token')
# You should see the x-dell-csrf-token in the print output
print(csrf_token)
# The following API call will retrieve the list of devices on the SMA
url = "
headers = { 'Accept': 'application/json',
'Content-Type': 'application/json',
'x-dell-api-version': '5',
'x-dell-csrf-token': csrf_token
}
response = session.get(url, headers=headers)
print(json.dumps(response.json(), indent = 3))
session.close()
- Alex-2k19 3 months ago
- ok.... the script is good!
It's all about my user!
KACE doesn't use the user I created under "/systemui/settings_users_list.php (Settings-> Administrators)".
Even with a new user I created under "/adminui/user_list.php (Settings -> User)" it was not working!
We do authenticate the KACE users against Active Directory so I tried it with an AD user and it was working like a charm! - Alex-2k19 3 months ago
I would suggest that you download the latest API guide here
If, once you have verified that the commands you are using are correct, you still come back with an error, then I suggest you log a call with Support
Let's just hope that Quest are not returning to the bad old days from a few years ago when they didn't bother to keep the API up to date......... Good Luck
https://www.itninja.com/question/api-access-in-kace1000-version-12-0-149
Filtering, Sorting and Grouping are three important tools you can give your users to help them parse the data presented in a KendoReact Data Grid. Learn how to implement these configurations, as well as which use cases each one is most suited for!
When you’ve got a grid with a lot of data, then Filtering, Sorting and Grouping are key features for your users to be able to make sense of all that information. If you just want to display your data in a way that’s easily readable, then a good ol’ HTML <table> is probably all you need—but if you’re looking at React Data Grid components, then it’s because you already know you have a complex use case that requires more than what a basic table has to offer. The KendoReact Data Grid is an extremely powerful component for displaying and organizing data, but first you’ll need to configure it in order to allow your users to make the most of the available features. Luckily, that’s why we’re here today, so let’s get started!
In this situation, we’ll be working from the assumption that your Data Grid is already installed, placed in your React application and populated with data. If that’s not the case yet, check out our docs for guidance on getting started, and come back here when you’re ready to take a deeper dive!
Or, if you’re just looking for an example to play with, you’re welcome to clone and play around with our kendo-demo app, LKARS Menu System—you can find the repo here. It should be noted that this app has been heavily themed to look like the Star Trek ship menu system, so colors and fonts will look different than they do in our docs. If you’re interested in applying custom theming to your KendoReact components, take a look at this walkthrough, as well as this blog about using SASS for custom component styling.
If you’d like to code along, check out the companion video, React Data Grid (Table) Sorting, Filtering and Grouping: KendoReact Grid Demo.
We’re going to add and configure various filtering and sorting features for the Data Grid component used in the Astrometrics section of our app, so the crew members of our fake starship can easily review all the recently logged astrological objects of interest. So, without further ado, let’s set our course for Data Grid expertise, and... engage!
The basic version of the React Data Grid in your JSX will look like this:
<Grid
  style={{ height: "90%" }}
  data={astroData}
>
  <Column field="name" title="Name" />
  <Column field="astronomicalObjectType" title="Object Type" />
  <Column field="location.name" title="Location" />
</Grid>
In our Data Grid component, I’ve specified a height of 90% (so that the component will scroll instead of overflowing) and populated it with data from a .json file I imported. I’ve also specified that the Grid should have three columns: Name, Object Type and Location. Otherwise, this Data Grid doesn’t look too different from your average table.
And this is okay, I suppose, but one of my fictional crew members needs to run a report on recently encountered M-Class planets. Right now, she’d have to scroll all the way through the entire Data Grid content to do so, but it would be a lot easier if she could sort the Object Type column so all the M-Class type planets were together. And hey, we have the technology, so let’s do it!
Step one is to set the
sortable prop on our React Data Grid component to
true.
<Grid style={{ height: "90%" }} data={astroData} sortable={true} > <Column field="name" title="Name" /> <Column field="astronomicalObjectType" title="Object Type" /> <Column field="location.name" title="Location"/> </Grid>
Next, we’ll want to set up a hook that will handle our state management for the current state of the sorted Grid. At the top of our component, I’ve added a new hook that sets
sort as the current sort state,
setSort as what we’ll be calling when the user updates the sort method, and
initialSort as the default configuration for sorting on load.
const [sort, setSort] = React.useState(initialSort);
I’ve set
initialSort to sort the Name column in ascending alphabetical order.
const initialSort = [ { field: "name", dir: "asc", }, ];
Once that’s ready, let’s add it to our component. We’ll use the
sort prop to tell the Grid that we want it sorted according to the
sort we defined above. And we’ll use the
onSortChange prop to update the state every time the user changes the sort method.
<Grid style={{ height: "90%" }} data={astroData} sortable={true} sort={sort} onSortChange={(e) => {setSort(e.sort)}} > <Column field="name" title="Name" /> <Column field="astronomicalObjectType" title="Object Type" /> <Column field="location.name" title="Location"/> </Grid>
Now, if we look over at our application, we can see that when we click on the Column headers, we get an arrow to indicate the current sort status... however, the data itself isn’t actually being sorted, yet. That’s because we need to actually, well, do the sorting!
To do this, we’ll need to
import { orderBy } from "@progress/kendo-data-query" as well as update our
data prop to call
orderBy and pass in our base data along with our
sort.
So our final code for the Table component looks like this!
<Grid style={{ height: "90%" }} data={orderBy(astroData, sort)} sortable={true} sort={sort} onSortChange={(e) => {setSort(e.sort)}} > <Column field="name" title="Name" /> <Column field="astronomicalObjectType" title="Object Type" /> <Column field="location.name" title="Location"/> </Grid>
And now, our Ensign can quickly sort all the M-Class type planets to the top and compile her list. Watch out bridge crew, this girl’s about to be promoted!
There are also a few ways you can customize the way your React Table can be sorted. You can disable unsorting of columns by setting
sortable.allowUnsort to
false, and you can allow the user to sort multiple columns at the same time by setting
sortable.mode to
multiple. Our
sortable.mode will accept either
multiple or
single as options, and defaults to
single.
<Grid style={{ height: "90%" }} data={orderBy(astroData, sort)} sortable={{ allowUnsort: false, mode: "multiple" }} sort={sort} onSortChange={(e) => {setSort(e.sort)}} > <Column field="name" title="Name" /> <Column field="astronomicalObjectType" title="Object Type" /> <Column field="location.name" title="Location"/> </Grid>
When users can sort multiple columns at the same time, a number will appear in the column UI to indicate the sorting preference order.
Right now, our Ensign can sort the grid in order to move all the M-Class planets to the top of the list, but it sounds like what she really needs is not to sort, but rather to filter the grid contents in order to remove every astronomical object that’s not an M-Class planet. Here’s how we enable React Data Grid filtering in our React app:
First, we’ll add a
filterable prop to our Grid component and set it to be
true.
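Assuming the same Grid we have been building up, that first step might look something like this (a sketch; only the filterable prop is new here):
<Grid style={{ height: "90%" }} data={astroData} filterable={true} > <Column field="name" title="Name" /> <Column field="astronomicalObjectType" title="Object Type" /> <Column field="location.name" title="Location"/> </Grid>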
As soon as you do this, you’ll see that there’s a new section at the top of each Column in your Data Grid, with a field for user input and a button to change the filter type based on how they want to structure their filter. You’ll also notice that it’s not working yet—that’s because we still need to hook it into our state and handle the changes from the user’s choices.
In order to do that, we’ll need to create a hook that allows us to set the filter based on the user’s choice:
const [filter, setFilter] = React.useState(initialFilter);
Then, we’ll define that
initialFilter to return whatever filter state we want on component load. In this case, I’ve set it to be blank:
const initialFilter = { logic: "and", filters: [ { field: "name", operator: "contains", value: "", }, ], };
Then, we’ll connect that to our Grid component by setting the
filter and
onFilterChange props. We’ll set
filter to our
filter variable, and use
onFilterChange to call
setFilter to update the state whenever the user changes the filtering method. (The filterBy helper used in the data prop below comes from "@progress/kendo-data-query", just like orderBy.)
<Grid style={{ height: "420px", }} data={filterBy(sampleProducts, filter)} filterable={true} filter={filter} onFilterChange={(e) => setFilter(e.filter)} >
Now, when we check back in on our application, we can test the filter input and see the contents of the Grid immediately start filtering the data as we type. Now, our crewmate can quickly and easily filter the Grid to only return those M-Class planets she was looking for.
The default setting for the Data Grid Filtering UI is to add that user input field immediately below the Column header. However, if you’re trying to conserve space as much as possible in your UI, there’s another layout you can choose that nests the Filtering inputs in a dropdown menu. It is worth noting that this method does change the UX slightly, in that it will no longer filter as the user types—instead, the user must click the "Filter" button before the Grid updates.
In order to set up the menu, we’ll want to import
GridColumnMenuFilter from
@progress/kendo-react-grid and use it to create a new component. We’ll call this component
ColumnMenu, and it should look like this:
import { Grid, GridColumn as Column, GridColumnMenuFilter } from "@progress/kendo-react-grid"; export const ColumnMenu = (props) => { return ( <div> <GridColumnMenuFilter {...props} expanded={true} /> </div> ); };
Then, we’ll adjust our Table component to add the new menu to each column where we want it to appear:
<Grid style={{ height: "90%" }} data={filterBy(astroData, filter)} filter={filter} onFilterChange={(e) => setFilter(e.filter)} > <Column columnMenu={ColumnMenu} <Column columnMenu={ColumnMenu} <Column columnMenu={ColumnMenu} </Grid>
Sometimes you know in advance how your users will need to filter the information in your Data Grid. In these cases, you can improve the UX of your application by removing the filter selection step from the process, and having the Grid only display the filter type relevant to that Column. For example, if you have a Column displaying the number of times a particular Astronomical Object has been encountered, you can specify in the Column component
filter={"numeric"} , and the filter cell will be updated to specify numeric input.
<Grid style={{ height: "90%" }} data={filterBy(astroData, filter)} filter={filter} onFilterChange={(e) => setFilter(e.filter)} > <Column field="name" title="Name" /> <Column field="astronomicalObjectType" title="Object Type" /> <Column field="location.name" title="Location"/> <Column field="encounters" title="Encounters" filter={"numeric"}/> </Grid>
Finally, if you know that your users will want to filter in specific ways (like our Ensign who would always be filtering from a list of preset types) you can optimize your UI to make that process clearer to your users by using the
filterCell prop in the
<Column> child component. This will allow you to replace the default filtering UI beneath the Column header with fully custom content—anything you like.
To do this, you’ll use the same approach as above, where you create a new component to handle the filtering request. But then, instead of passing that into
columnMenu, you’ll use
filterCell instead. Note that your custom component will need to handle all user inputs (
onClick events, etc.) since this is an addition to the standard KendoReact component functionality.
<Column filterCell={MyCustomFilter} />
For the third time, we’re approached by this crewmate. “Please,” they say, “I know before I said I wanted to sort, and then to filter, but what I really need is a way to group the data by type but still be able to see all of it!” Well, why didn’t you say so in the first place? Of course we can do Data Grid Grouping with KendoReact!
First, we’ll set the
groupable prop of the Grid component to
true.
<Grid style={{ height: "90%" }} data={filterBy(astroData, filter)} groupable={true} onFilterChange={(e) => setFilter(e.filter)} > <Column field="name" title="Name" /> <Column field="astronomicalObjectType" title="Object Type" /> <Column field="location.name" title="Location"/> </Grid>
Once you’ve done that, you’ll see the change reflected in our UI, with a new line above the Column headers with instruction on how to group. It won’t work yet, but let’s fix that!
In order to get that drag-and-drop working, we’ll need to set up our
onGroupChange handler to update the state. This follows the same pattern as the Filtering and Sorting approaches, so you’re probably familiar with it by now! So, let’s create our hook with
group ,
setGroup and
initialGroup. In this case, I’m going to set up my
initialGroup to start by grouping the Grid contents by Object Type.
const initialGroup = { field: "astronomicalObjectType", }; const [group, setGroup] = React.useState(initialGroup);
Now, we’ll use those with
onGroupChange to update the group settings when the user drags and drops those Column headers. (The groupBy helper used in the data prop below is imported from "@progress/kendo-data-query", alongside orderBy and filterBy.)
<Grid style={{ height: "90%" }} data={groupBy(astroData, group)} groupable={true} group={group} onGroupChange={(e) => setGroup(e.group)} > <Column field="name" title="Name" /> <Column field="astronomicalObjectType" title="Object Type" /> <Column field="location.name" title="Location"/> </Grid> </div>
As you can see in the example gif, all Columns can be grouped multiple times. The order in which the Columns are grouped is based on the order in which the user drags them into the header section. You can disable any Column from being able to be grouped by setting
groupable={false} in the Column component. When this is set, the user will not be able to drag and drop that specific Column, but can still group based on the others.
Sometimes, we need more than one method of organization enabled on our Grid together. When this is the case, there are some changes that need to be made to the code in order to handle these multiple forms of processing.
To begin, we’ll need to change out the process-specific pieces we were using. Instead of
sort/
setSort or
filter /
setFilter, you’ll want to use the more generic
dataState and
setDataState. We also need to add another state management piece for
resultState and
setResultState. These handle two different aspects of the Grid:
dataState handles the current configuration of the Grid settings, and
resultState handles what’s actually being displayed in the Grid currently. (Here, initialDataState is simply whatever default Grid configuration you want applied on load.)
const [dataState, setDataState] = React.useState(initialDataState); const [resultState, setResultState] = React.useState( process(astroData, initialDataState) );
If you were previously importing
orderBy or
filterBy, you’ll need to replace that with
process, a more general method that can handle updating all 3 types of organization.
import { process } from "@progress/kendo-data-query";
Now, we need to add a new function to handle when users change the filtering, grouping or sorting settings. I’ve called mine
onDataStateChange, and it will update both the
dataState and the
resultState when called.
const onDataStateChange = React.useCallback((e) => { setDataState(e.dataState); setResultState(process(astroData, e.dataState)); }, []);
Now, we take a look at our Grid component. We should still have
sortable ,
filterable and
groupable set to true (assuming you want them all turned on at once), but we should replace the
filter and
sort properties with the more generic
data and set it to
{ data: resultState.data } . We also need to add our
onDataStateChange function, so that gets called anytime the user updates the state of the Grid configurations. You can see that at the bottom, set to
{onDataStateChange}, along with {...dataState} spread onto the Grid so it always reflects the current configuration.
<Grid style={{ height: "90%" }} data={{ data: resultState.data }} filterable={true} sortable={true} groupable={true} onDataStateChange={onDataStateChange}{...dataState} >
And there you have it! Now your Data Grid can handle any combination of Sorting, Filtering and Grouping settings input by your users. If you want to see it all together, check out this StackBlitz example. But how do you decide which combination of these features is right for your application?
In that last example, we walked through three very powerful features—Sorting, Filtering and Grouping—and enabled all of them. However, this all-in approach isn’t always the best UX for your application.
While it can be tempting to see a list of features like this and say, “Turn everything on!!” I’d actually encourage you to enable only those features that will be the most beneficial for your users and leave out the ones you think would be less used. Enabling every feature (and every configuration of every feature) can be an overwhelming experience for your users, and could create a complex UI.
If you know your userbase is made up of “power users” who will feel comfortable manipulating complex Data Grids like this, then absolutely give them full freedom! But if the majority of your users aren’t at that level, you can improve their experience by being thoughtful about how you configure your Grid component.
Sorting is ideal for situations when your users will need to compare your data, or see all of it in a specifically organized way. For example, being able to compare the prices on different offerings by sorting cost from low to high, or looking through all your employees alphabetically organized by name. This is a great way to organize data that’s already all in a similar category.
Filtering is best for when your users only need to see a certain subset of your data, and not all of it at once. For example, only showing the products within a certain category, or only the employees with a specific title. This is good when you have several different subsets of data included in your Grid, but your users won’t need to view all of it at once. This can be especially powerful when combined with Sorting, allowing your users to filter down to a specific subset of data, and then organize it in a progressive way.
Grouping should be used when your users need to see the entirety of the data, but broken up into smaller categories. It’s kind of a blend between the filtering and sorting features, from a UX perspective. It allows your users to create those same subsets as filtering, but without removing the data from the view the way filtering does. This allows your users to still see the other categories for comparison purposes, but in a more visually differentiated way than a sorted list may offer. This is especially good when you’ve got a lot of data, but it all needs to remain in the view. Breaking it up into smaller categories makes it easier for your users to parse through, but ensures that the entirety of the data is still available to them in one view.
I recommend taking a little time to think about what your users will be doing with the data in your Grid. What are their goals? What conclusions are they trying to draw? What problems are they trying to solve? What kinds of connections are they attempting to make? The answers to these questions can help guide you toward whether Sorting, Filtering, Grouping or some combination thereof is the best fit for your application.
We provide everything in one component for your convenience as a developer, so that you can use the same KendoReact Data Grid in multiple different contexts and scenarios within your application—but this doesn’t necessarily mean that your users will also benefit from an all-in-one solution in the UI. When you combine your knowledge and expertise about your own userbase with the power of the KendoReact Data Grid, the possibilities are truly endless!
|
https://www.telerik.com/blogs/sorting-filtering-grouping-kendoreact-data-grid
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Importing data from fNIRS devices#
fNIRS devices consist of two kinds of optodes: light sources (AKA “emitters” or “transmitters”) and light detectors (AKA “receivers”). Channels are defined as source-detector pairs, and channel locations are defined as the midpoint between source and detector.
MNE-Python provides functions for reading fNIRS data and optode locations from several file formats. Regardless of the device manufacturer or file format, MNE-Python’s fNIRS functions will internally store the measurement data and its metadata in the same way (e.g., data values are always converted into SI units). Supported measurement types include amplitude, optical density, oxyhaemoglobin concentration, and deoxyhemoglobin concentration (for continuous wave fNIRS), and additionally AC amplitude and phase (for frequency domain fNIRS).
Warning
MNE-Python stores metadata internally with a specific structure, and internal functions expect specific naming conventions. Manual modification of channel names and metadata is not recommended.
Standardized data#
SNIRF (.snirf)#
The Shared Near Infrared Spectroscopy Format (SNIRF) is designed by the fNIRS community in an effort to facilitate sharing and analysis of fNIRS data, and is the official format of the Society for functional near-infrared spectroscopy (SfNIRS). The manufacturers Gowerlabs, NIRx, Kernel, and Cortivision export data in the SNIRF format, and these files can be imported into MNE. SNIRF is the preferred format for reading data into MNE-Python. Data stored in the SNIRF format can be read in using
mne.io.read_raw_snirf().
Note
The SNIRF format has provisions for many different types of fNIRS recordings. MNE-Python currently only supports reading continuous wave or haemoglobin data stored in the .snirf format.
Specifying the coordinate system#
There are a variety of coordinate systems used to specify the location of
sensors (see Source alignment and coordinate frames for details). Where possible the
coordinate system will be determined automatically when reading a SNIRF file.
However, sometimes this is not possible and you must manually specify the
coordinate frame the optodes are in. This is done using the
optode_frame
argument when loading data.
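As a rough sketch (the filename is a placeholder, and the correct optode_frame value depends on your recording; see the mne.io.read_raw_snirf() docstring for the accepted options):
raw = mne.io.read_raw_snirf('recording.snirf', optode_frame='mri')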
The coordinate system is automatically detected for Gowerlabs SNIRF files.
Continuous Wave Devices#
NIRx (directory or hdr)#
NIRx produce continuous wave fNIRS devices.
NIRx recordings can be read in using
mne.io.read_raw_nirx().
The NIRx device stores data directly to a directory with multiple file types; MNE-Python extracts the appropriate information from each file.
MNE-Python only supports NIRx files recorded with NIRStar
version 15.0 and above and Aurora version 2021 and above.
MNE-Python supports reading data from NIRScout and NIRSport devices.
Hitachi (.csv)#
Hitachi produce continuous wave fNIRS devices.
Hitachi fNIRS recordings can be read using
mne.io.read_raw_hitachi().
No optode information is stored, so you'll need to set the montage manually;
see the Notes section of
mne.io.read_raw_hitachi().
Frequency Domain Devices#
BOXY (.txt)#
BOXY recordings can be read in using
mne.io.read_raw_boxy().
The BOXY software and ISS Imagent I and II devices are frequency domain
systems that store data in a single
.txt file containing what they call
(with MNE-Python’s name for that type of data in parens):
- DC
All light collected by the detector (
fnirs_cw_amplitude)
- AC
High-frequency modulated light intensity (
fnirs_fd_ac_amplitude)
- Phase
Phase of the modulated light (
fnirs_fd_phase)
DC data is stored as the type
fnirs_cw_amplitude because it
collects both the modulated and any unmodulated light, and hence is analogous
to what is collected by continuous wave systems such as NIRx. This helps with
conformance to SNIRF standard types.
These raw data files can be saved by the acquisition devices as parsed or
unparsed
.txt files, which affects how the data in the file is organised.
MNE-Python will read either file type and extract the raw DC, AC,
and Phase data. If triggers are sent using the
digaux port of the
recording hardware, MNE-Python will also read the
digaux data and
create annotations for any triggers.
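A rough sketch of loading such a recording (the filename is a placeholder; any digaux triggers end up in raw.annotations):
raw = mne.io.read_raw_boxy('recording.txt')
print(raw.annotations)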
Custom Data Import#
Warning
This method is not supported, and users are discouraged from using it.
You should convert your data to the
SNIRF format using the tools
provided by the Society for functional Near-Infrared Spectroscopy,
and then load it using
mne.io.read_raw_snirf().
fNIRS measurements may be stored in a non-standardised format that is not supported by MNE-Python and cannot be converted easily into SNIRF. This legacy data is often in CSV or TSV format; we show here a way to load it even though it is not officially supported by MNE-Python, due to the lack of standardisation of the file format (the naming and ordering of channels, the type and scaling of data, and the specification of sensor positions vary between vendors). You will likely have to adapt this depending on the system from which your CSV originated.
import os.path as op
import numpy as np
import pandas as pd
import mne
First, we generate an example CSV file which will then be loaded in to MNE-Python. This step would be skipped if you have actual data you wish to load. We simulate 16 channels with 100 samples of data and save this to a file called fnirs.csv.
pd.DataFrame(np.random.normal(size=(16, 100))).to_csv("fnirs.csv")
Warning
The channels must be ordered in haemoglobin pairs, such that for a single channel all the types are in subsequent indices. The type order must be ‘hbo’ then ‘hbr’. The data below is already in the correct order and may be used as a template for how data must be stored. If the order that your data is stored is different to the mandatory formatting, then you must first read the data with channel naming according to the data structure, then reorder the channels to match the required format.
Next, we will load the example CSV file.
data = pd.read_csv('fnirs.csv')
Then, the metadata must be specified manually as the CSV file does not contain information about channel names, types, sample rate etc.
Warning
In MNE-Python the naming of channels MUST follow the structure
S#_D# type where # is replaced by the appropriate source and
detector numbers and type is either
hbo,
hbr or the
wavelength.
ch_names = ['S1_D1 hbo', 'S1_D1 hbr', 'S2_D1 hbo', 'S2_D1 hbr',
            'S3_D1 hbo', 'S3_D1 hbr', 'S4_D1 hbo', 'S4_D1 hbr',
            'S5_D2 hbo', 'S5_D2 hbr', 'S6_D2 hbo', 'S6_D2 hbr',
            'S7_D2 hbo', 'S7_D2 hbr', 'S8_D2 hbo', 'S8_D2 hbr']
ch_types = ['hbo', 'hbr', 'hbo', 'hbr', 'hbo', 'hbr', 'hbo', 'hbr',
            'hbo', 'hbr', 'hbo', 'hbr', 'hbo', 'hbr', 'hbo', 'hbr']
sfreq = 10.  # in Hz
Finally, the data can be converted in to an MNE-Python data structure.
The metadata above is used to create an
mne.Info data structure,
and this is combined with the data to create an MNE-Python
Raw object. For more details on the info structure
see The Info data structure, and for additional details on how continuous data
is stored in MNE-Python see The Raw data structure: continuous data.
For a more extensive description of how to create MNE-Python data structures
from raw array data see Creating MNE-Python data structures from scratch.
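A minimal sketch of that final step, using the data, ch_names, ch_types and sfreq defined above:
info = mne.create_info(ch_names=ch_names, ch_types=ch_types, sfreq=sfreq)
raw = mne.io.RawArray(data, info, verbose=True)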
Creating RawArray with float64 data, n_channels=16, n_times=101 Range : 0 ... 100 = 0.000 ... 10.000 secs Ready.
Applying standard sensor locations to imported data#
Having information about optode locations may assist in your analysis. Beyond the general benefits this provides (e.g. creating regions of interest, etc.), this may be particularly important for fNIRS, as information about the optode locations is required to convert the optical density data into an estimate of the haemoglobin concentrations. MNE-Python provides methods to load standard sensor configurations (montages) from some vendors, and this is demonstrated below. Some handy tutorials for understanding sensor locations, coordinate systems, and how to store and view this information in MNE-Python are: Working with sensor locations, Source alignment and coordinate frames, and Plotting EEG sensors on the scalp.
Below is an example of how to load the optode positions for an Artinis OctaMon device.
Note
It is also possible to create a custom montage from a file for
fNIRS with
mne.channels.read_custom_montage() by setting
coord_frame to
'mri'.
montage = mne.channels.make_standard_montage('artinis-octamon')
raw.set_montage(montage)

# View the position of optodes in 2D to confirm the positions are correct.
raw.plot_sensors()
To validate that the positions were loaded correctly, it is also possible to view the location of the sources (red), detectors (black), and channels (white lines and orange dots) in a 3D representation. The fiducials are marked in blue, green and red. See Source alignment and coordinate frames for more details.
subjects_dir = op.join(mne.datasets.sample.data_path(), 'subjects')
mne.datasets.fetch_fsaverage(subjects_dir=subjects_dir)

brain = mne.viz.Brain('fsaverage', subjects_dir=subjects_dir, alpha=0.5, cortex='low_contrast')
brain.add_head()
brain.add_sensors(raw.info, trans='fsaverage')
brain.show_view(azimuth=90, elevation=90, distance=500)
0 files missing from root.txt in /home/circleci/mne_data/MNE-sample-data/subjects 0 files missing from bem.txt in /home/circleci/mne_data/MNE-sample-data/subjects/fsaverage Using fsaverage-head-dense.fif for head surface. 1 BEM surfaces found Reading a surface... [done] 1 BEM surfaces read Channel types:: hbo: 8, hbr: 8
Total running time of the script: ( 0 minutes 13.630 seconds)
Estimated memory usage: 63 MB
Gallery generated by Sphinx-Gallery
|
https://mne.tools/dev/auto_tutorials/io/30_reading_fnirs_data.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
MicroPython on microcontrollers¶
MicroPython is designed to be capable of running on microcontrollers. These have hardware limitations which may be unfamiliar to programmers more familiar with conventional computers. In particular the amount of RAM and nonvolatile “disk” (flash memory) storage is limited. This tutorial offers ways to make the most of the limited resources. Because MicroPython runs on controllers based on a variety of architectures, the methods presented are generic: in some cases it will be necessary to obtain detailed information from platform specific documentation.
Flash memory¶
On the Pyboard the simple way to address the limited capacity is to fit a micro SD card. In some cases this is impractical, either because the device does not have an SD card slot or for reasons of cost or power consumption; hence the on-chip flash must be used. The firmware including the MicroPython subsystem is stored in the onboard flash. The remaining capacity is available for use. For reasons connected with the physical architecture of the flash memory part of this capacity may be inaccessible as a filesystem. In such cases this space may be employed by incorporating user modules into a firmware build which is then flashed to the device.
There are two ways to achieve this: frozen modules and frozen bytecode. Frozen modules store the Python source with the firmware. Frozen bytecode uses the cross compiler to convert the source to bytecode which is then stored with the firmware. In either case the module may be accessed with an import statement:
import mymodule
The procedure for producing frozen modules and bytecode is platform dependent; instructions for building the firmware can be found in the README files in the relevant part of the source tree.
In general terms the steps are as follows:
Clone the MicroPython repository.
Acquire the (platform specific) toolchain to build the firmware.
Build the cross compiler.
Place the modules to be frozen in a specified directory (dependent on whether the module is to be frozen as source or as bytecode).
Build the firmware. A specific command may be required to build frozen code of either type - see the platform documentation.
Flash the firmware to the device.
RAM¶
When reducing RAM usage there are two phases to consider: compilation and execution. In addition to memory consumption, there is also an issue known as heap fragmentation. In general terms it is best to minimise the repeated creation and destruction of objects. The reason for this is covered in the section covering the heap.
Compilation phase¶
When a module is imported, MicroPython compiles the code to bytecode which is then executed by the MicroPython virtual machine (VM). The bytecode is stored in RAM. The compiler itself requires RAM, but this becomes available for use when the compilation has completed.
If a number of modules have already been imported the situation can arise where there is insufficient RAM to run the compiler. In this case the import statement will produce a memory exception.
If a module instantiates global objects on import it will consume RAM at the time of import, which is then unavailable for the compiler to use on subsequent imports. In general it is best to avoid code which runs on import; a better approach is to have initialisation code which is run by the application after all modules have been imported. This maximises the RAM available to the compiler.
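As an illustration (the module and buffer names are arbitrary), the allocation can be deferred into a function that the application calls once all imports are complete:
# mymodule.py
# buf = bytearray(2048)   # allocated at import time: best avoided

buf = None

def init():
    global buf
    buf = bytearray(2048)  # allocated only when the application calls init()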
If RAM is still insufficient to compile all modules one solution is to precompile modules. MicroPython has a cross compiler capable of compiling Python modules to bytecode (see the README in the mpy-cross directory). The resulting bytecode file has a .mpy extension; it may be copied to the filesystem and imported in the usual way. Alternatively some or all modules may be implemented as frozen bytecode: on most platforms this saves even more RAM as the bytecode is run directly from flash rather than being stored in RAM.
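For example, once the cross compiler has been built, precompiling a module is a single command run on the host PC (the module name here is just an example):
mpy-cross mymodule.py   # produces mymodule.mpy alongside the source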
Execution phase¶
There are a number of coding techniques for reducing RAM usage.
Constants
MicroPython provides a
const keyword which may be used as follows:
from micropython import const

ROWS = const(33)
_COLS = const(0x10)
a = ROWS
b = _COLS
In both instances where the constant is assigned to a variable the compiler
will avoid coding a lookup to the name of the constant by substituting its
literal value. This saves bytecode and hence RAM. However the
ROWS value
will occupy at least two machine words, one each for the key and value in the
globals dictionary. The presence in the dictionary is necessary because another
module might import or use it. This RAM can be saved by prepending the name
with an underscore as in
_COLS: this symbol is not visible outside the
module so will not occupy RAM.
The argument to
const() may be anything which, at compile time, evaluates
to an integer e.g.
0x100 or
1 << 8. It can even include other const
symbols that have already been defined, e.g.
1 << BIT.
Constant data structures
Where there is a substantial volume of constant data and the platform supports
execution from Flash, RAM may be saved as follows. The data should be located in
Python modules and frozen as bytecode. The data must be defined as
bytes
objects. The compiler ‘knows’ that
bytes objects are immutable and ensures
that the objects remain in flash memory rather than being copied to RAM. The
struct module can assist in converting between
bytes types and other
Python built-in types.
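As an illustration (the byte values here are arbitrary), a frozen bytes object can hold packed 16-bit values that are unpacked only when needed:
import struct

TABLE = b'\x10\x27\xe8\x03'              # two unsigned 16-bit values, little-endian
low, high = struct.unpack('<HH', TABLE)  # 10000 and 1000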
When considering the implications of frozen bytecode, note that in Python strings, floats, bytes, integers, complex numbers and tuples are immutable. Accordingly these will be frozen into flash (for tuples, only if all their elements are immutable). Thus, in the line
mystring = "The quick brown fox"
the actual string “The quick brown fox” will reside in flash. At runtime a
reference to the string is assigned to the variable
mystring. The reference
occupies a single machine word. In principle a long integer could be used to
store constant data:
bar = 0xDEADBEEF0000DEADBEEF
As in the string example, at runtime a reference to the arbitrarily large
integer is assigned to the variable
bar. That reference occupies a
single machine word.
Tuples of constant objects are themselves constant. Such constant tuples are optimised by the compiler so they do not need to be created at runtime each time they are used. For example:
foo = (1, 2, 3, 4, 5, 6, 100000, ("string", b"bytes", False, True))
This entire tuple will exist as a single object (potentially in flash if the code is frozen) and referenced each time it is needed.
Needless object creation
There are a number of situations where objects may unwittingly be created and destroyed. This can reduce the usability of RAM through fragmentation. The following sections discuss instances of this.
String concatenation
Consider the following code fragments which aim to produce constant strings:
var = "foo" + "bar" var1 = "foo" "bar" var2 = """\ foo\ bar"""
Each produces the same outcome, however the first needlessly creates two string objects at runtime, allocates more RAM for concatenation before producing the third. The others perform the concatenation at compile time which is more efficient, reducing fragmentation.
Where strings must be dynamically created before being fed to a stream such as a file it will save RAM if this is done in a piecemeal fashion. Rather than creating a large string object, create a substring and feed it to the stream before dealing with the next.
The best way to create dynamic strings is by means of the string
format()
method:
var = "Temperature {:5.2f} Pressure {:06d}\n".format(temp, press)
Buffers
When accessing devices such as instances of UART, I2C and SPI interfaces, using pre-allocated buffers avoids the creation of needless objects. Consider these two loops:
while True:
    var = spi.read(100)
    # process data

buf = bytearray(100)
while True:
    spi.readinto(buf)
    # process data in buf
The first creates a buffer on each pass whereas the second re-uses a pre-allocated buffer; this is both faster and more efficient in terms of memory fragmentation.
Bytes are smaller than ints
On most platforms an integer consumes four bytes. Consider the three calls to the
function
foo():
def foo(bar):
    for x in bar:
        print(x)

foo([1, 2, 0xff])
foo((1, 2, 0xff))
foo(b'\1\2\xff')
In the first call a
list of integers is created in RAM each time the code is
executed. The second call creates a constant
tuple object (a
tuple containing
only constant objects) as part of the compilation phase, so it is only created
once and is more efficient than the
list. The third call efficiently
creates a
bytes object consuming the minimum amount of RAM. If the module
were frozen as bytecode, both the
tuple and
bytes object would reside in flash.
Strings Versus Bytes
Python3 introduced Unicode support. This introduced a distinction between a
string and an array of bytes. MicroPython ensures that Unicode strings take no
additional space so long as all characters in the string are ASCII (i.e. have
a value < 126). If values in the full 8-bit range are required
bytes and
bytearray objects can be used to ensure that no additional space will be
required. Note that most string methods (e.g.
str.strip()) apply also to
bytes
instances so the process of eliminating Unicode can be painless.
s = 'the quick brown fox'   # A string instance
b = b'the quick brown fox'  # A bytes instance
Where it is necessary to convert between strings and bytes the
str.encode()
and the
bytes.decode() methods can be used. Note that both strings and bytes
are immutable. Any operation which takes as input such an object and produces
another implies at least one RAM allocation to produce the result. In the
second line below a new bytes object is allocated. This would also occur if
foo
were a string.
foo = b' empty whitespace'
foo = foo.lstrip()
Runtime compiler execution
The Python functions
eval and
exec invoke the compiler at runtime, which
requires significant amounts of RAM. Note that the
pickle library from
micropython-lib employs
exec. It may be more RAM efficient to use the
json library for object serialisation.
Storing strings in flash
Python strings are immutable hence have the potential to be stored in read only memory. The compiler can place in flash strings defined in Python code. As with frozen modules it is necessary to have a copy of the source tree on the PC and the toolchain to build the firmware. The procedure will work even if the modules have not been fully debugged, so long as they can be imported and run.
After importing the modules, execute:
micropython.qstr_info(1)
Then copy and paste all the Q(xxx) lines into a text editor. Check for and remove lines which are obviously invalid. Open the file qstrdefsport.h which will be found in ports/stm32 (or the equivalent directory for the architecture in use). Copy and paste the corrected lines at the end of the file. Save the file, rebuild and flash the firmware. The outcome can be checked by importing the modules and again issuing:
micropython.qstr_info(1)
The Q(xxx) lines should be gone.
The heap¶
When a running program instantiates an object the necessary RAM is allocated
from a fixed size pool known as the heap. When the object goes out of scope (in
other words becomes inaccessible to code) the redundant object is known as
“garbage”. A process known as “garbage collection” (GC) reclaims that memory,
returning it to the free heap. This process runs automatically, however it can
be invoked directly by issuing
gc.collect().
The discourse on this is somewhat involved. For a ‘quick fix’ issue the following periodically:
gc.collect()
gc.threshold(gc.mem_free() // 4 + gc.mem_alloc())
Fragmentation¶
Say a program creates an object
foo, then an object
bar. Subsequently
foo goes out of scope but
bar remains. The RAM used by
foo will be
reclaimed by GC. However if
bar was allocated to a higher address, the
RAM reclaimed from
foo will only be of use for objects no bigger than
foo. In a complex or long running program the heap can become fragmented:
despite there being a substantial amount of RAM available, there is insufficient
contiguous space to allocate a particular object, and the program fails with a
memory error.
The techniques outlined above aim to minimise this. Where large permanent buffers or other objects are required it is best to instantiate these early in the process of program execution before fragmentation can occur. Further improvements may be made by monitoring the state of the heap and by controlling GC; these are outlined below.
Reporting¶
A number of library functions are available to report on memory allocation and
to control GC. These are to be found in the
gc and
micropython modules.
The following example may be pasted at the REPL (
ctrl e to enter paste mode,
ctrl d to run it).
import gc
import micropython

gc.collect()
micropython.mem_info()
print('-----------------------------')
print('Initial free: {} allocated: {}'.format(gc.mem_free(), gc.mem_alloc()))

def func():
    a = bytearray(10000)

gc.collect()
print('Func definition: {} allocated: {}'.format(gc.mem_free(), gc.mem_alloc()))
func()
print('Func run free: {} allocated: {}'.format(gc.mem_free(), gc.mem_alloc()))
gc.collect()
print('Garbage collect free: {} allocated: {}'.format(gc.mem_free(), gc.mem_alloc()))
print('-----------------------------')
micropython.mem_info(1)
Methods employed above:
gc.collect()Force a garbage collection. See footnote.
micropython.mem_info()Print a summary of RAM utilisation.
gc.mem_free()Return the free heap size in bytes.
gc.mem_alloc()Return the number of bytes currently allocated.
micropython.mem_info(1)Print a table of heap utilisation (detailed below).
The numbers produced are dependent on the platform, but it can be seen that
declaring the function uses a small amount of RAM in the form of bytecode
emitted by the compiler (the RAM used by the compiler has been reclaimed).
Running the function uses over 10KiB, but on return
a is garbage because it
is out of scope and cannot be referenced. The final
gc.collect() recovers
that memory.
The final output produced by
micropython.mem_info(1) will vary in detail but
may be interpreted as follows:
Each letter represents a single block of memory, a block being 16 bytes. So each line of the heap dump represents 0x400 bytes or 1KiB of RAM.
Control of garbage collection¶
A GC can be demanded at any time by issuing
gc.collect(). It is advantageous
to do this at intervals, firstly to pre-empt fragmentation and secondly for
performance. A GC can take several milliseconds but is quicker when there is
little work to do (about 1ms on the Pyboard). An explicit call can minimise that
delay while ensuring it occurs at points in the program when it is acceptable.
Automatic GC is provoked under the following circumstances. When an attempt at allocation fails, a GC is performed and the allocation re-tried. Only if this fails is an exception raised. Secondly an automatic GC will be triggered if the amount of free RAM falls below a threshold. This threshold can be adapted as execution progresses:
gc.collect()
gc.threshold(gc.mem_free() // 4 + gc.mem_alloc())
This will provoke a GC when more than 25% of the currently free heap becomes occupied.
In general modules should instantiate data objects at runtime using constructors
or other initialisation functions. The reason is that if this occurs on
initialisation the compiler may be starved of RAM when subsequent modules are
imported. If modules do instantiate data on import then
gc.collect() issued
after the import will ameliorate the problem.
String operations¶
MicroPython handles strings in an efficient manner and understanding this can
help in designing applications to run on microcontrollers. When a module
is compiled, strings which occur multiple times are stored once only, a process
known as string interning. In MicroPython an interned string is known as a
qstr.
In a module imported normally that single instance will be located in RAM, but
as described above, in modules frozen as bytecode it will be located in flash.
String comparisons are also performed efficiently using hashing rather than character by character. The penalty for using strings rather than integers may hence be small both in terms of performance and RAM usage - a fact which may come as a surprise to C programmers.
Postscript¶
MicroPython passes, returns and (by default) copies objects by reference. A reference occupies a single machine word so these processes are efficient in RAM usage and speed.
Where variables are required whose size is neither a byte nor a machine word
there are standard libraries which can assist in storing these efficiently and
in performing conversions. See the
array,
struct and
uctypes
modules.
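For example, an array stores its elements compactly rather than as one Python object per item (a sketch; the typecode 'H' holds unsigned 16-bit values):
from array import array

readings = array('H', (0 for _ in range(100)))  # 100 16-bit slots, roughly 200 bytes of storage
readings[0] = 4095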
Footnote: gc.collect() return value¶
On Unix and Windows platforms the
gc.collect() method returns an integer
which signifies the number of distinct memory regions that were reclaimed in the
collection (more precisely, the number of heads that were turned into frees). For
efficiency reasons bare metal ports do not return this value.
|
https://docs.micropython.org/en/latest/reference/constrained.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
I’ve hit a wall with trying to optimize my code…
I’m trying to type-II maximum likelihood training for exact Gaussian process using Flux to get the hang of things. As much as I’ve tried to optimize my code, I’m struggling to get performance that is comparable to PyTorch on my laptop CPU. For example, training in PyTorch takes about ~75 seconds while training with Flux takes about ~400 seconds. After checking my code avoids obvious problems (I hope), I’ve tried the following:
- Setting the JULIA_NUM_THREADS environment variable to the number of available cores.
- Setting the number of BLAS threads to 12.
The most expensive operation in the “forward pass” of the GP is sped up by ~ a factor of 2 when increasing the number of BLAS threads but the training time is increased by ~ 20 seconds.
This makes me think that perhaps the problem is coming from the fact that Flux is not taking advantage of my multiple cpus. After reading posts like this one, I wonder if Flux isn’t intended for this use case? (But maybe this is coming from my own ignorance of Julia+Flux).
I should note that on a previous question it was pointed out that log and exp are slower than they should be but this will be fixed in 1.6. I don’t think that the problems with these functions should be causing the slow down I’m seeing.
Any help that someone could provide would be greatly appreciated here! I’d love to recommend Julia across the board to my lab but I want to make sure I can produce code with comparable performance to PyTorch on problems like the ones we work on first.
For anyone interested, I’ve attached my code below. Note run times were calculated by running
> main(400, 5000) > @time main(400, 5000)
from the REPL.
using LinearAlgebra
using Flux
using Flux.Optimise: update!
using Plots

"""
Computes the distance between each coordinate for each point in x
Returns a matrix whose dimension is n+1 where n is the dimension of x
"""
function difference(x1::AbstractArray{T,2}, x2::AbstractArray{T,2}) where T
    s1, s2 = size(x1)
    t1, t2 = size(x2)
    return reshape(x1, (s1, s2, 1)) .- reshape(x2, (t1, 1, t2))
end

"""
Squared exponential kernel with length scale ℓ and σf = exp(logσf)
"""
struct sekernel{T <: AbstractArray}
    logℓ::T
    logσf::T
end
Flux.@functor sekernel
sekernel(ℓ::AbstractFloat, σf::AbstractFloat) = sekernel([ℓ], [σf])

function get_natural_params(k::sekernel)
    return exp(k.logℓ[1]), exp(k.logσf[1])
end

function (k::sekernel)(x1::AbstractArray{T,2}, x2::AbstractArray{T,2}) where T
    ℓ, σf = get_natural_params(k)
    sqrd = dropdims(sum(difference(x1, x2).^2, dims=1), dims=1)
    return σf^2 .* exp.(-1 / (2 * ℓ^2) * sqrd)
end

"""
Gaussian process container
x : training inputs, f : training targets, k : kernel struct, logσn : i.i.d noise
"""
struct GP{U,S <: AbstractArray,T <: AbstractArray,I <: AbstractArray}
    x::S
    f::T
    k::U
    logσn::I
end
Flux.@functor GP

function GP(x::AbstractArray, f::AbstractArray, logσn::AbstractFloat, k)
    return GP(x, f, k, [logσn])
end

function get_natural_params(g::GP)
    return exp(g.logσn[1])
end

Flux.trainable(g::GP) = (g.logσn, g.k)

function (g::GP)(xs::AbstractArray)
    σn = get_natural_params(g)
    μ, Σ = infer(g.f, g.k(g.x, g.x) + I .* σn^2, g.k(g.x, xs), g.k(xs, xs))
    return (μ, Σ)
end

"""
Function for performing inference with a GP
"""
function infer(f::AbstractArray{T}, # check this (maybe just AbstractArray{T}?)
               K11::AbstractArray{T,2},
               K12::AbstractArray{T,2},
               K22::AbstractArray{T,2}) where T
    C = cholesky(Symmetric(K11)) # adding 1e-12 for numerical stability
    μ = K12' * (C \ f)
    Σ = K22 - K12' * (C \ K12)
    return μ, Σ
end

"""
Computes the log_mll for a Gaussian process
"""
function log_mll(f::AbstractArray{T}, K11::AbstractArray{T,2}) where T
    C = cholesky(Symmetric(K11))
    return -0.5 * f' * (C \ f) - 0.5 * logdet(C) - length(f) / 2.0 * log(2.0 * π)
end

function log_mll(g::GP)
    σn = get_natural_params(g)
    return log_mll(g.f, g.k(g.x, g.x) + I .* σn^2)
end

"""
type-ii maximum likelihood training of GP
"""
function train_gp!(model::GP, epochs::Integer, lr=1e-1)
    opt = Flux.Optimise.ADAM(lr)
    ps = Flux.params(model)
    for i in 1:epochs
        gs = gradient(ps) do
            return -1.0*log_mll(model)
        end
        update!(opt, ps, gs)
        if i % 1000 == 0
            @info "Epoch $i | len $(exp.(model.k.logℓ)) | σf $(exp.(model.k.logσf)) | σn $(exp.(model.logσn)) | lml $(log_mll(model))"
        end
    end
end

function main(num_data, epochs)
    x = collect(LinRange(0., 10., num_data))
    y_true = x + sin.(x) * 5 .- 10.0
    y = y_true + randn(length(x)) * 2.0
    model = GP(reshape(x, (1, num_data)), y, -4.0, sekernel(0.1, -4.0))
    train_gp!(model, epochs)
    xt = collect(LinRange(0., 10., 500))
    μ, Σ = model(reshape(xt, (1, 500)))
    σ = 2 * sqrt.(diag(Σ))
    p = scatter(x, y, label="Measurements", alpha=0.5, xlabel="x", ylabel="y")
    p = plot!(x, y_true, linestyle=:dash, label="Generating Func.", linewidth=2)
    p = plot!(xt, μ, ribbon=σ, label="Exact GP", linewidth=2)
end
|
https://discourse.julialang.org/t/flux-multi-cpu-parallelism/50450
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Back to: C#.NET Tutorials For Beginners and Professionals
Exception Handling in C# with Examples
In this article, I am going to discuss Exception Handling in C# with Examples. This is one of the most important concepts in C#. As a developer, while developing an application, it is your key responsibility to handle exceptions. Exception Handling in C# is the procedure for handling exceptions that occur during the execution of a program. As part of this article, we are going to discuss the following pointers in detail.
- What are the different types of errors?
- What is an Exception in C#?
- Who is responsible for abnormal termination of the program whenever runtime errors occur in the program?
- What happens if an exception is raised in the program?
- What CLR does when an exception occurred in the program?
- What is exception handling in C#?
- Why do we need Exception Handling in C#?
- What is the procedure to Handle Exception in C#?
Types of Errors in C#
When we write and execute our code in the .NET framework, there is a possibility of two types of errors occurring:
- Compilation errors
- Runtime errors
Compilation Error in C#
An error that occurs in a program at the time of compilation is known as a compilation error (compile-time error). These errors occur due to syntactical mistakes in the program. That means these errors occur from typing the wrong syntax, like missing double quotes and terminators, misspelling keywords, assigning the wrong data to a variable, trying to create an object of an abstract class or interface, etc.
So in simple words, we can say that this type of error occurs due to a poor understanding of the programming language. These errors can be identified by the programmer and can be rectified before the execution of the program only. So these errors do not cause any harm to the program execution.
Runtime Error in C#
Errors that occur at the time of program execution are called runtime errors. These errors occur when we enter wrong data into a variable, try to open a file for which we do not have permission, try to connect to the database with the wrong user id and password, implement the logic incorrectly, are missing required resources, etc.
Runtime errors are dangerous because whenever they occur in the program, the program terminates abnormally on the same line where the error occurred, without executing the next line of code.
What is an Exception in C#?
A runtime error is known as an exception in C#. The exception will cause the abnormal termination of the program execution. These errors (exceptions) are very dangerous because whenever an exception occurs in the program, the program gets terminated abnormally on the same line where the error occurred, without executing the next line of code.
Who is responsible for abnormal termination of the program whenever runtime errors occur?
Objects of exception classes are responsible for the abnormal termination of the program whenever runtime errors (exceptions) occur. These exception classes are predefined under the BCL (Base Class Libraries), where a separate class is provided for each type of exception along with a specific exception error message. These exception classes are responsible for the abnormal termination of the program, and after the abnormal termination they display an error message which specifies the reason for the abnormal termination, i.e. they provide an error message specific to that error.
So, whenever a runtime error (exception) occurs in a program, first the exception manager under the CLR (Common Language Runtime) identifies the type of error that occurred in the program, then creates an object of the exception class related to that error and throws that object, which will immediately terminate the program abnormally on the line where the error occurred and display the error message related to that class.
What happens if an exception is raised in the program in C#?
When an Exception is raised in C#, the program execution is terminated abnormally. That means the statements placed after the exception-causing statements are not executed but the statements placed before that exception-causing statement are executed by CLR.
What CLR does when an exception occurred in the program?
It creates the exception class object that is associated with that logical mistake (exception) and terminates the current method execution by throwing that exception object by using the “throw” keyword. So we can say an exception is an event that occurs during the execution of a program that disrupts the normal flow of instruction execution. Let’s understand this with an example.
Example: Program Execution without Exception in C#
The following example shows program execution without an exception. This is a very simple program; we are just dividing two numbers and printing the result on the console.
namespace ExceptionHandlingDemo { class Program { static void Main(string[] args) { int a = 20; int b = 10; int c; Console.WriteLine("A VALUE = " + a); Console.WriteLine("B VALUE = " + b); c = a / b; Console.WriteLine("C VALUE = " + c); Console.ReadKey(); } } }
Output:
Example: Program Execution with Exception in C#
The following example shows program execution with an exception. As you can see, in the below code, we are dividing an integer number by 0, which is not possible in mathematics. So, it will throw a DivideByZeroException in this case. The statements which are present before the exception-causing statement, i.e. before c = a / b;, are executed, and the statements which are present after the exception-causing statement will not be executed.
namespace ExceptionHandlingDemo { class Program { static void Main(string[] args) { int a = 20; int b = 0; int c; Console.WriteLine("A VALUE = " + a); Console.WriteLine("B VALUE = " + b); c = a / b; Console.WriteLine("C VALUE = " + c); Console.ReadKey(); } } }
OUTPUT:
After printing the above value it will give us the below error.
Explanation:
The CLR terminates the program execution by throwing a DivideByZeroException because the logical mistake we committed here is dividing an integer number by zero. From the exception message, the developer will take the necessary actions against that exception.
Is the above exception message user understandable?
Definitely, the answer is no. The user cannot understand the above exception message because it is a .NET-specific exception message. So the user cannot decide alone how to resolve the above problem; a developer should guide them in resolving it.
What is the solution to the above problem?
It is the developer’s responsibility to convert .NET exception messages into user-understandable message formats. To solve this problem, the developer should handle the exception. Using the exception-handling mechanism, the developer can catch the exception and display user-understandable messages.
What is exception handling in C#?
Exception Handling in C# is the process of catching an exception, converting the CLR-given exception message into an end-user-understandable message, and stopping the abnormal termination of the program whenever runtime errors occur. Once we handle an exception in a program, we get the following advantages:
- We can stop the abnormal termination
- We can perform any corrective action that may resolve the problem.
- Displaying a user-friendly error message, so that the user can resolve the problem, provided it is under their control.
Why do we need Exception Handling in C#?
We need Exception Handling in C# because of the following two reasons.
- To stop the abnormal termination of the program
- To provide user-understandable messages when an exception is raised, so that users can make decisions without the developer’s help.
Basically, by implementing exception handling we give the program a way to talk to the user on behalf of the developer.
What is the procedure to Handle Exception in C#?
Exception Handling in C# is a four-step procedure:
- Preparing the exception object that is appropriate to the current logical mistake.
- Throwing that exception to the appropriate exception handler.
- Catching that exception
- Taking necessary actions against that exception
How can we handle an exception in .NET?
There are two methods to handle the exception in .NET
- Logical Implementation
- Try catch Implementation
What is the logical implementation in C# to handle Exception?
In the logical implementation, we handle the exception by using logical statements. In real-time programming, the first and foremost importance is always given to the logical implementation. If it is not possible to handle an exception using the logical implementation, then we need to use the try-catch implementation.
Example: Handling Exception in C# using logical implementation
The following example shows how to handle exceptions in C# using the logical Implementation. Here, we are checking the second number i.e. variable b value. If it equals 0, then we are printing one message that the second number should not be zero else if the second number is not zero then we are performing our division operation and showing the results on the console.
namespace ExceptionHandlingDemo {
    class Program {
        static void Main(string[] args) {
            int a, b, c;
            Console.WriteLine("ENTER ANY TWO NUMBERS");
            a = int.Parse(Console.ReadLine());
            b = int.Parse(Console.ReadLine());
            if (b == 0) {
                Console.WriteLine("second number should not be zero");
            }
            else {
                c = a / b;
                Console.WriteLine("C VALUE = " + c);
            }
            Console.ReadKey();
        }
    }
}
Output:
In the above example, when the user enters the second number as zero, the problem is handled using the logical implementation in C#. But if, instead of a number, the user enters a character, the program raises a FormatException, which is not handled in this program, as shown below.
Here we entered the second value as abc, so it gives us the exception shown below.
To handle such exceptions in C#, we need to use the try-catch implementation.
Exception handling in C# using the Try Catch implementation
To implement try-catch handling, the .NET framework provides three keywords:
- Try
- Catch
- finally
try: The try block encloses the statements that might cause an exception.
Catch: The catch block contains the logic to take necessary actions on that caught exception. The catch block syntax in C# looks like a constructor: it does not take an accessibility modifier, a normal modifier, or a return type, and it takes only a single parameter of type Exception. Inside a catch block, we can write any statement that is legal in .NET, including raising another exception.
Finally:
The finally keyword establishes a block whose statements are guaranteed to execute. Statements placed in a finally block always run, regardless of how control leaves the try block: whether it completes normally or throws an exception, and whether or not that exception is caught.
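For illustration, here is a minimal sketch (my addition, not part of the original article) showing that the finally block runs whether or not an exception is thrown; the variable names follow the earlier examples.
using System;
namespace ExceptionHandlingDemo {
    class FinallyDemo {
        static void Main(string[] args) {
            int a = 20, b = 0;
            try {
                int c = a / b;               // throws DivideByZeroException
                Console.WriteLine("C VALUE = " + c);
            }
            catch (Exception ex) {
                Console.WriteLine(ex.Message); // the user sees the handled message
            }
            finally {
                // Runs whether the division succeeded or failed
                Console.WriteLine("finally block executed");
            }
            Console.ReadKey();
        }
    }
}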
Syntax to use Exception Handling in C#:
The general syntax for handling exceptions in C# is sketched below. You can write any number of catch blocks for a given try block in C#; each one handles a different type of exception thrown by the try block.
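Since the original syntax figure is not available here, the following skeleton (my reconstruction, with placeholder exception type names) shows the shape of a try block followed by multiple catch blocks:
try {
    // Statements that may cause an exception
}
catch (ExceptionType1 ex1) {
    // Handles exceptions of type ExceptionType1
}
catch (ExceptionType2 ex2) {
    // Handles exceptions of type ExceptionType2
}
// ... any number of further catch blocks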
Once we use the try and catch blocks in our code the execution takes place as follows:
- If all the statements in the try block execute successfully, control jumps from the last statement of the try block directly to the first statement after all the catch blocks, without executing any catch block (meaning there was no runtime error in the code).
- If any statement in the try block causes an error, control jumps from that statement, without executing the remaining statements in the try block, to the catch block that can handle that exception.
- If a proper catch block is found that handles the exception thrown by the try block, the abnormal termination stops there, the code in the catch block executes, and control then jumps to the first statement after all the catch blocks.
- If no matching catch block is found, abnormal termination occurs.
Note: Here, we are showing the try and catch block execution. Later we will discuss the finally block.
Example: Program to handle an exception using try-catch implementation with the generic catch
A catch block without an exception class is called a generic catch; a generic catch block in C# can handle any type of exception raised in the corresponding try block. For a better understanding, please have a look at the example below, where the catch block is declared without any Exception class.
namespace ExceptionHandlingDemo {
    class Program {
        static void Main(string[] args) {
            int a, b, c;
            Console.WriteLine("ENTER ANY TWO NUMBERS");
            try {
                a = int.Parse(Console.ReadLine());
                b = int.Parse(Console.ReadLine());
                c = a / b;
                Console.WriteLine("C VALUE = " + c);
            }
            catch {
                Console.WriteLine("error occurred....");
            }
            Console.ReadKey();
        }
    }
}
Output1: Enter the value as 10 and 0
Output2: Enter the value as 10 and abc
In the above example, no exception class is used with the catch block, so it is known as a generic catch block. The problem with a generic catch block is that whatever exception occurs, the same message is displayed to the end user, who cannot tell why the error happened. To overcome this, specific catch blocks are used; with them, it is possible to obtain more information about the exception.
Properties of Exception Class in C#:
The C# Exception class has the following three properties:
- Message: This property will store the reason why an exception has occurred.
- Source: This property will store the name of the application from which the exception has been raised.
- HelpLink: This property is used to provide a link to a file or URL that gives helpful information to the user when an exception is raised.
Example: Exception Handling in C# using try-catch implementation with a specific catch block
In the example below, we create a catch block that takes the Exception class as a parameter, and within the catch block we print the exception information using the Exception class properties, i.e. Message, Source, and HelpLink. As you can see in the code, we are using the Exception superclass. This class is the superclass of all exception classes, so it handles all types of exceptions raised in the try block.
namespace ExceptionHandlingDemo {
    class Program {
        static void Main(string[] args) {
            int a, b, c;
            Console.WriteLine("ENTER ANY TWO NUMBERS");
            try {
                a = int.Parse(Console.ReadLine());
                b = int.Parse(Console.ReadLine());
                c = a / b;
                Console.WriteLine("C VALUE = " + c);
            }
            catch (Exception ex) {
                Console.WriteLine(ex.Message);
                Console.WriteLine(ex.Source);
                Console.WriteLine(ex.HelpLink);
            }
            Console.ReadKey();
        }
    }
}
Output:
In the above example, the Exception superclass is used to handle the exception. However, using the Exception superclass when a more specific exception class is available hurts the execution performance of the program.
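To get more meaningful messages, specific catch blocks can be placed before the general Exception catch. The following sketch is my own illustration (the message strings are placeholders), not code from the article:
try {
    a = int.Parse(Console.ReadLine());
    b = int.Parse(Console.ReadLine());
    c = a / b;
    Console.WriteLine("C VALUE = " + c);
}
catch (DivideByZeroException) {
    // Raised when b is zero
    Console.WriteLine("The second number should not be zero");
}
catch (FormatException) {
    // Raised when the input is not a valid integer
    Console.WriteLine("Please enter numbers only");
}
catch (Exception ex) {
    // Fallback for any other exception
    Console.WriteLine(ex.Message);
}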
In the next article, I am going to discuss how to use Multiple Catch Blocks and Finally Block in C#. Here, in this article, I try to explain the Exception handling in C# with examples. I hope you understood how to implement Exception Handling in C#.
Can anybody explain what the “On Error GoTo” implementation is?
“On Error GoTo” is available in VB, not in C#. Maybe the author forgot that.
I'm running hanami db create_migration create_users and I keep getting the error below... what am I missing?
Traceback (most recent call last):
23: from /home/aes/.rbenv/versions/2.7.2/bin/hanami:23:in `<main>'
22: from /home/aes/.rbenv/versions/2.7.2/bin/hanami:23:in `load'
21: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/hanami-cli-2.0.0.alpha3/exe/hanami:10:in `<top (required)>'
20: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/dry-cli-0.7.0/lib/dry/cli.rb:65:in `call'
19: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/dry-cli-0.7.0/lib/dry/cli.rb:116:in `perform_registry'
18: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/hanami-cli-2.0.0.alpha3/lib/hanami/cli/commands/application.rb:15:in `call'
17: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/hanami-cli-2.0.0.alpha3/lib/hanami/cli/commands/monolith/db/create_migration.rb:17:in `call'
16: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/hanami-cli-2.0.0.alpha3/lib/hanami/cli/commands/application.rb:54:in `database'
15: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/hanami-cli-2.0.0.alpha3/lib/hanami/cli/commands/application.rb:28:in `application'
14: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/hanami-cli-2.0.0.alpha3/lib/hanami/cli/commands/application.rb:28:in `require'
13: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/bundler/gems/hanami-5998b4f58d84/lib/hanami/init.rb:3:in `<top (required)>'
12: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/bundler/gems/hanami-5998b4f58d84/lib/hanami/init.rb:3:in `require_relative'
11: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/bundler/gems/hanami-5998b4f58d84/lib/hanami/setup.rb:7:in `<top (required)>'
10: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/zeitwerk-2.5.1/lib/zeitwerk/kernel.rb:35:in `require'
9: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/zeitwerk-2.5.1/lib/zeitwerk/kernel.rb:35:in `require'
8: from /home/aes/Coding/Ruby/HANAMI/hanami_2_auth_app/config/application.rb:11:in `<top (required)>'
7: from /home/aes/Coding/Ruby/HANAMI/hanami_2_auth_app/config/application.rb:12:in `<module:Hanami2AuthApp>'
6: from /home/aes/Coding/Ruby/HANAMI/hanami_2_auth_app/config/application.rb:15:in `<class:Application>'
5: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/bundler/gems/hanami-5998b4f58d84/lib/hanami/application/settings.rb:82:in `method_missing'
4: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/dry-configurable-0.13.0/lib/dry/configurable/config.rb:112:in `method_missing'
3: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/dry-configurable-0.13.0/lib/dry/configurable/setting.rb:79:in `value'
2: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/dry-configurable-0.13.0/lib/dry/configurable/setting.rb:147:in `evaluate'
1: from /home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/dry-types-1.5.1/lib/dry/types/type.rb:49:in `call'
/home/aes/.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/dry-types-1.5.1/lib/dry/types/constrained.rb:42:in `call_unsafe': nil violates constraints (type?(String, nil) failed) (Dry::Types::ConstraintError)
postgresql://support:
Hi everyone, I've written a JWT authorization gem build on top of dry to easily integrate with Hanami 2.0.
Feel free to check it out! A cool thing is that it uses dry-effects! :D
Is it possible to run aggregate(:location).where(id: id).map_to(Video).one and the video_info aggregate in a single call? One of them doesn't seem to update the parent object, or am I doing something wrong with this?
class VideoRepository < Hanami::Repository
  associations do
    belongs_to :location
    has_one :video_info
  end

  def find_with_location(id)
    aggregate(:location).where(id: id).map_to(Video).one
  end

  def find_with_info(id)
    aggregate(:video_info).where(id: id).map_to(Video).one
  end

  def find_with_info_and_location(id)
    # this function doesn't populate location nor location_id
    aggregate(:location).where(id: id).map_to(Video).one
    aggregate(:video_info).where(id: id).map_to(Video).one
  end
end

Hanami::Model.migration do
  change do
    create_table :videos do
      primary_key :id
      foreign_key :location_id, :locations, null: false
    end
  end
end

Hanami::Model.migration do
  change do
    create_table :locations do
      primary_key :id
    end
  end
end

Hanami::Model.migration do
  change do
    create_table :video_infos do
      primary_key :id
      foreign_key :video_id, :videos, null: false, on_delete: :cascade
    end
  end
end
Hi everyone, first time here and with a question! I am trying to integrate Webpack (with Babel) for react using the hanami-webpack and hanami-assets gem, but I got into some trouble with import statements.
In my application.html.erb file, I use the <%= javascript 'index' %> helper that points to my index.js file in assets, which works fine when it's pure JavaScript and does not have any import statements. However, when I try to import React, it throws an error:
"Uncaught SyntaxError: Cannot use import statement outside a module"
One of the solutions I found online is to put "type": "module" in package.json to ensure that all .js files are seen as modules and not CommonJS. I tried doing it, but it led to another error with my webpack.config.js file, which uses 'require' instead of import. I changed the file extension to webpack.config.cjs, which resolved the error the server was throwing, but I am still stuck with 'Cannot use import statement outside of module'.
Does anyone know how to solve this? Is there a good example of how to integrate React with Hanami? Thanks!
Hey everyone, hope you are enjoying your holidays - just fyi, after several days of tinkering with trying to integrate React as the frontend with Hanami, I came up with a basic setup that I am hoping is useful to see for anyone who is new and would like to do the same thing - I gave up on Webpack because it was unnecessarily complicated, could not get hot reload to work and the compile times were pretty long - so I used ESbuild instead. Here is a repo with a template you can use:
Just be aware that it does not have any installer or anything, so things such as .env files etc. are not really transferrable to new projects - this repo is more of a demo / guide
Edit: This is for Hanami v1.3.5
Hey, I have this code in views/application.rb:

def current_user_roles
  UserRepository.new.find(session[:current_user]).roles
end
where current_user is the user's ID. However, the view has no access to the session. How can I bypass this? The use case is limiting the displayed items as per user roles.
I'm running bundle when working with the hanami 2 template... for some reason, mine is stuck and does not process beyond this point (ongoing now for over half an hour):
... Running bundle install - this may take a few minutes Fetching gem metadata from Fetching `?
By the way, I've already setup a secret string in the .env file
CommandLib
Commandlib is a dependencyless library for calling external UNIX commands (e.g. in build scripts) in a clean, readable way.
Using method chaining, you can build up Command objects that run in a specific directory, with specified environment variables and PATHs, etc.
For simplicity's sake, the library itself only runs commands in a blocking way (all commands run to completion before continuing), although it contains hooks to run non-blocking via either icommandlib or pexpect.
Pretend 'django/manage.py':
# Pretend django "manage.py" that just prints out arguments:
import sys ; sys.stdout.write(' '.join(sys.argv[1:]))
from commandlib import Command

# Create base command
python = Command("python")

# Create command "python manage.py" that runs in the django directory
manage = python("manage.py").in_dir("django")

# Build even more specific command
dev_manage = manage.with_trailing_args("--settings", "local_settings.py")
# Run combined command
dev_manage("runserver", "8080").run()
Will output:
runserver 8080 --settings local_settings.py
Install
$ pip install commandlib
Docs
- Add directory to PATH (with_path)
- Capture output (.output())
- Easily invoke commands from one directory (CommandPath)
- Change your command's environment variables (with_env)
- Run command and don't raise exception on nonzero exit code (ignore_errors())
- Piping data in from string or file (.piped)
- Piping data out to string or file (.piped)
- Run commands interactively using icommandlib or pexpect
- Easily invoke commands from the current virtualenv (python_bin)
Why?
Commandlib avoids the tangle of messy code that you would get using the subprocess library directly (Popen, call, check_output(), .communicate(), etc.) and the confusion that results.
It's a heavily dogfooded library.
Is subprocess really that bad?
The code will likely be longer and messier. For example, from stack overflow:
import subprocess, os
previous_directory = os.getcwd()
os.chdir("command_directory")
my_env = os.environ.copy()
my_env["PATH"] = "/usr/sbin:/sbin:" + my_env["PATH"]
subprocess.Popen(my_command, env=my_env)
os.chdir(previous_directory)
Equivalent:
from commandlib import Command
Command(my_command).with_path("/usr/sbin:/sbin:").in_dir("command_directory").run()
Why not use Delegator instead (Kenneth Reitz's 'subprocesses for humans')?
Kenneth Reitz (author of requests "urllib2/3 for humans"), wrote a similarly inspired "subprocess for humans" called envoy. That is now deprecated and there is now a replacement called delegator, which is a very thin wrapper around subprocess.
Features delegator has which commandlib does not:
Delegator can chain commands, much like bash does (delegator.chain('fortune | cowsay')). Commandlib doesn't do that because while dogfooding the library I never encountered a use case where I found this to be necessary. You can, however, easily get the output of one command using .output() as a string and feed it into another using piped.from_string(string).
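For instance, here is a rough sketch of that output-then-pipe pattern (my addition, assuming the fortune and cowsay binaries are installed; the exact chaining of piped.from_string followed by .run() is inferred from the feature list above):

from commandlib import Command

fortune = Command("fortune")
cowsay = Command("cowsay")

# Capture the output of the first command as a string...
message = fortune.output()

# ...then feed it into the second command via piping.
cowsay.piped.from_string(message).run()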
Delegator runs subprocesses in both a blocking and nonblocking way (using pexpect). commandlib only does blocking by itself but if you pip install pexpect or icommandlib it can run via either one of them.
Runs on windows
Features which both have:
- Ability to set environment variables.
- Ability to run pexpect process from command object.
Features which only commandlib has:
- Ability to set PATH easily.
- Ability call code from within the current virtualenv easily.
- Ability to pipe in strings or files and easily pipe out to strings or file (or file handles).
- Hook to easily run commands in from the current virtualenv.
Why not use other tools?
os.system(*) - only capable of running very simple bash commands.
sh - uses a lot of magic. Attempts to make python more like shell rather than making running commands more pythonic.
plumbum - similar to amoffat's sh, tries to make a sort of "bash inside python". Also has a weird way of building commands from dict syntax (grep["-v", "\.py"]).
From: David Abrahams (dave_at_[hidden])
Date: 2006-05-03 19:39:26
Ronald Garcia <garcia_at_[hidden]> writes:
> - What is your evaluation of the design?
It's very good overall.
I find the dispatching and function selection mechanisms cumbersome
and inelegant, but I'm not sure if there's a better solution. Maybe
since usually iterator intrinsics have to be implemented all together,
it would be best to allow them to be defined in one big class
template, rather than being forced to piecemeal create a bunch of
little nested templates.
I'm a bit surprised to see static call() functions used instead of
operator() for the implementation of a function call, and also to see
the use of nested apply<> metafunctions for computing the result types
of functions. I thought Joel had decided to use something compatible
with boost::result_of.
I'm surprised and a little concerned about what I perceive to be
redundancy in the value_of metafunction and the deref_impl
metafunction class.
In the extension example, I see repeatedly the same pattern of
transferring the cv-ref qualification from one type to another. Can't
that be made simpler for extenders?
typedef typename mpl::if_<
is_const<Sequence>,
std::string const&,
std::string&>::type type;
> - What is your evaluation of the implementation?
What I've seen is excellent.
> - What is your evaluation of the documentation?
A decent start, but needs more attention. Forgive me for being too
pedantic (I know that I am):
* The top-level heading "Support" is misleading at best.
* "As with MPL and STL iterators describe positions, and provide..."
^ move this comma-----^
^--- to here
* "A Forward Iterator traverses a Sequence allowing movement"
^^
You seem to have double spaces------------^^ where commas should be.
As with most English writing, the missing comma is much less common
than the extraneous one in this doc. I'm just not going to point out
all the extra ones.
* How does an "expression requirement" differ from a "requirement?"
* IMO there's no need for novel approaches to concept documentation
here [unless you are going to start using ConceptGCC notation ;-)].
The expression semantics should be an additional column in the
regular requirements table.
* IMO the doc formatting needs to better reflect the structure. For
example, at libs/fusion/doc/html/fusion/iterators/functions.html
there's no indication that the "Functions" section I'm in is a
subsection of "Iterators" rather than "Sequences" Maybe we should
show
Fusion > Iterators > Functions
above the horizontal rule on that page, for example.
* On that note, the section title "Adapted" under Sequences is
unacceptably non-descriptive without context.
* why do sequences have "intrinsics" while iterators only have
"functions?"
* IMO too many of the pages
At libs/fusion/doc/html/fusion/extension.html:
* Is all the information in this section available in
* is "ftag" a typo? Was "tag" intended? If not, what is the "f" for?
* "As it is straightforward to do, we are going to provide a random
access iterator in our example."
Doesn't providing random access imply providing a whole bunch of
iterator comparison and offsetting functionality that you wouldn't
otherwise have to? Why complicate the example? Oh, you have the
iterator claim it's random access, but then you refer the reader
to the example. Hm.
* "Notice we need both const and non const partial specialization."
There's no such thing; this is confusing. Reword
* it's "metafunction," not "meta function"
* "A few details need explaining here:
1. The iterator is parameterized by the type it is iterating
over, and the index of the current element."
2. The typedef struct_type provides the type being iterated
over, and the typedef index provides the index of the current
element. These will become useful later in our
implementation."
Aren't these points redundant with one another? What details are
being explained? Just the meaning of the template parameters?
The intro set me up for something a lot less self-evident.
...
5. All that remains is a constructor that initializes a
reference to the example_struct being iterated over.
"All that remains" doesn't seem right here, and there's plenty in
the example that bears explanation but that you've left out.
Maybe you should just add comments numbering the interesting lines
template<typename Struct, int Pos> // 1
struct example_struct_iterator
: iterator_base<example_struct_iterator<Struct, Pos> > // 2
{
BOOST_STATIC_ASSERT(Pos >=0 && Pos < 3); // 3
typedef Struct struct_type; // 4
typedef mpl::int_<Pos> index; // 5
typedef example_struct_iterator_tag ftag; // 6
typedef random_access_traversal_tag category; // 7
example_struct_iterator(Struct& str) // 8
: struct_(str) {}
Struct& struct_;
};
and then your numbered list can contain simple phrases like
8. The constructor stores a reference to the struct so that when
the iterator is dereferenced it can return the appropriate
element.
* "So how does value_of_impl get used? Well value_of is implemented
as follows:"
IMO this answering-your-own-question style doesn't work. We tried
it in the MPL book, and took it out. Just my 2c.
"template <typename Iterator>
struct value_of
{
typedef typename
extension::value_of_impl<typename Iterator::ftag>::
template apply<Iterator>::type
type;
};"
I think you should say "get used by the library."
Why don't you show the idiomatic version for metafunction classes?
template <typename Iterator>
struct value_of
: mpl::apply_wrap<
extension::value_of_impl<typename Iterator::ftag>
, Iterator
>
{};
or maybe better,
template <typename Iterator>
struct value_of
: extension::value_of_impl<typename Iterator::ftag>
::template apply<Iterator>
{};
* "The runtime functionality used by deref is provided by the call
static function of the selected MPL Metafunction Class.
The actual implementation of deref_impl is slightly more complex
than that of value_of_impl. We wish to return references to the
element pointed to by the iterator, but we need a little bit of
metaprogramming to return const references if the underlying
sequence is const. We also need to implement the call function,
which returns a reference to the appropriate member of the
underlying sequence."
This bit is hard to read. It feels like the presentation is
probably in the wrong order, there's redundancy, and too much extra
verbiage. But I could be wrong :)
* You start using fields::name and fields::age without saying what
they are or where they come from..
Without warning that policy is carried forward from the QuickStart to
the reference sections (which, incidentally, should be grouped as
subsections of a section called Reference so the transition is clear
-- this doc seems to wander in and out of formal documentation mode)
so that in
libs/fusion/doc/html/fusion/sequences/concepts/forward_sequence.html
it is obvious that result_of is found within namespace
boost::fusion... but where is begin found in begin(s)?
Also I think the cover page credits me for Doug Gregor's work :)
> - What is your evaluation of the potential usefulness of the
> library?
Very useful.
> - Did you try to use the library? With what compiler?
Yes, with several; I don't remember which exactly.
> Did you have any problems?
No.
> - How much effort did you put into your evaluation? A glance? A
> quick reading? In-depth study?
> - Are you knowledgeable about the problem domain?
Very.
I think this library should be accepted. However
1. the documentation should undergo an editorial review, to remove
ambiguity, clarify the line between formal and informal, and with
Strunk & White's mantra to "omit needless words" (and commas ;->)
in mind. I only scratched the surface here. This is not
necessarily a criticism of this particular library. All of our
submissions need that. :)
2. I would like the authors to have a transition plan in place for
replacement of the existing boost tuples. As long as the boost
tuple library is there in its current form it will be the
official tuple library of boost and I won't be able to use any
fusion algorithms in my libraries because people will want to
pass me boost::tuple. Making boost::tuple a valid fusion
sequence would probably be sufficient.
--
In this article, we'll learn about how to build a front-end application without back-end APIs using Mirage.
Introduction
As a front-end developer, I've faced multiple issues while integrating front-end applications with back-end APIs. Most of the time, the issue that I've faced was that the back-end isn't ready yet and as a result, I had to either wait for it to be completed or work with static mock data. The problem with working with static mock data is that we won't be able to create, update or delete any data as it isn't persistent data. We can only read data from a sample .json file and import it to test our front-end application. As a result, we'll never be sure if our application is working as expected until and unless the whole back-end is ready.
Recently, I came across Mirage which is an API mocking library that lets us build, test and share a complete working JavaScript application without having to rely on any backend services. Unlike other mocking libraries, Mirage makes it easy to recreate dynamic scenarios, the kind that are typically only possible when using a real production server.
Let's learn more about how this tool works in details.
Getting started with Mirage
Let's create a new application from scratch:
mkdir mirage-mock-api && cd mirage-mock-api
Now, let's install Mirage in our application:
yarn add --dev miragejs
Next, we'll create a route to get a list of dog breeds. Let's create a new file called server.js inside the src directory:
const Mirage = require("miragejs");

function makeServer({ environment = "development" } = {}) {
  let server = new Mirage.Server({
    environment,
    models: {
      dog: Mirage.Model,
    },
    seeds(server) {
      server.create("dog", { name: "Labrador Retrievers" });
      server.create("dog", { name: "German Shepherds" });
    },
    routes() {
      this.namespace = "api";
      this.get("/dogs", (schema) => {
        return schema.dogs.all();
      });
    },
  });
  return server;
}

module.exports = { makeServer };
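The static-mock limitation mentioned in the introduction (no create, update or delete) is exactly what Mirage's route handlers address. As an illustration only (this POST handler is my addition, not part of the original server.js), a create route could look like this:

routes() {
  this.namespace = "api";

  this.get("/dogs", (schema) => {
    return schema.dogs.all();
  });

  // Hypothetical create endpoint: stores a new dog in Mirage's in-memory database
  this.post("/dogs", (schema, request) => {
    let attrs = JSON.parse(request.requestBody);
    return schema.dogs.create(attrs);
  });
}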
Adding Mirage to our front-end application
Let's create a React application using Create React App:
npx create-react-app mirage-react-app && cd mirage-react-app
Now, let's import the Mirage app that we just created:
import React from "react";
import ReactDOM from "react-dom";
import { makeServer } from "mirage-mock-api/src/server";
import App from "./App";

if (process.env.NODE_ENV === "development") {
  makeServer();
}

ReactDOM.render(<App />, document.getElementById("root"));
Also, let's test our Mirage API:
import React, { useState, useEffect } from "react";

export default function App() {
  let [dogs, setDogs] = useState([]);

  useEffect(() => {
    fetch("/api/dogs")
      .then((res) => res.json())
      .then((json) => {
        setDogs(json.dogs);
      });
  }, []);

  return (
    <ul>
      {dogs.map((dog) => (
        <li key={dog.id} data-testid={`dog-${dog.id}`}>
          {dog.name}
        </li>
      ))}
    </ul>
  );
}
Now, if we run the application and open it in the browser, we'll see the data served by the Mirage API:
Features of Mirage
References
The front-end application is available here and the back-end application is available here.
Conclusion
In this article, we've learnt about Mirage and how it can help us develop front-end applications even when the back-end part isn't ready! I hope that this article helps you in your future projects.
I am interested in developing algorithms in Julia and compiling the algorithm into a library and then integrating the functionality into large codebases written in C. I’m aware of PackageCompiler.jl but am confused about whether it can accomplish my goal especially since v1.0.0. I’m sure I’m not the first to ask this question but I can’t seem to find information on this use case. Any insight/advice would be appreciated. Thanks in advance.
See the embedding Julia documentation.
The easiest thing, in my opinion, is to do as much of the interfacing work as possible on the Julia side. For example, suppose you have a Julia function foo(x::Number, a::AbstractVector), which returns something of the same type as x, that you want to call from C. You first need to create a C-callable API using C datatypes, e.g. a function c_foo that takes a double, a double *, and a size_t length. Do this in Julia:
c_foo(x::Number, aptr::Ptr, alen::Integer) = foo(x, unsafe_wrap(Array, aptr, alen))
Then, in your C code, do
// setup
jl_init();
jl_eval_string("import MyModule");

// get C interface to Julia MyModule.foo function:
double (*c_foo)(double, double*, size_t) =
    jl_unbox_voidpointer(jl_eval_string("@cfunction(MyModule.c_foo, Cdouble, (Cdouble, Ptr{Cdouble}, Csize_t))"));
You now have a C function pointer c_foo to your c_foo routine, to which you can pass (double, double*, size_t) normally and get a double back.
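For example, a hypothetical call site (my addition, assuming MyModule.foo accepts a Float64 and a Vector{Float64} as sketched above):

double x = 2.5;
double a[] = {1.0, 2.0, 3.0};
double result = c_foo(x, a, 3);   /* calls MyModule.foo(2.5, [1.0, 2.0, 3.0]) on the Julia side */
printf("foo returned %g\n", result);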
Lower-level things are possible, but the main point is that it is easier to do as much of the “glue” as possible on the Julia side.
Thanks, @stevengj. I just totally missed that section of the manual. I will work through that section but from your examples, it seems to be exactly the details that I need.
Correction: you need to call jl_unbox_voidpointer on the result of jl_eval_string to get a void* that you can cast to a function pointer.
Here is a minimal working example that wraps the Julia sum function:
#include <stdio.h>
#include <julia.h>

int main(int argc, char *argv[]) {
    jl_init();

    // define a C-callable wrapper for sum(a), compile it
    // with @cfunction, and convert it to a C function pointer:
    jl_eval_string("c_sum(aptr, alen) = sum(unsafe_wrap(Array, aptr, alen))");
    jl_value_t *c_sum_jl = jl_eval_string("@cfunction(c_sum, Cdouble, (Ptr{Cdouble}, Csize_t))");
    double (*c_sum)(double*,size_t) =
        (double (*)(double*,size_t)) jl_unbox_voidpointer(c_sum_jl);

    // call our function to compute sum(a) (= 8):
    double a[] = {1,3,4};
    printf("sum(a) = %g\n", c_sum(a, 3));

    jl_atexit_hook(0);
    return 0;
}
In principle I should probably do JL_GC_PUSH1(&c_sum_jl);, but I omitted that since Julia doesn’t actually garbage-collect compiled code IIRC.
You don’t need to root it, but it has nothing to do with GC of code. c_sum_jl is just a Ptr{Cvoid} object and its lifetime is not bound to the code the pointer is pointing to. You don’t need the root because there’s nothing else between the return of the value and the single use of it.
Notice that embedding doesn’t require any compilation (outside Julia) at all, but you can use PackageCompiler to build a custom sysimage in order to reduce startup latency. In the latter case you need to initialize Julia with jl_init_with_image rather than jl_init, which is somewhat more involved. See for some explanation of how that works and a proposal to simplify matters.
It is also possible to load libjulia dynamically with dlopen if you don’t want to compile libjulia into your C program. See for unmerged documentation of this approach. Incidentally it goes very far in the direction of doing as much of the interfacing as possible on the Julia side.
I was very excited when Julia 1.0 came out. Since then I was waiting for JuliaPro 1.x as I love the curated packages that just work out of box. When it finally arrived, yesterday I installed JuliaPro 1.0.1.1 on Windows 10 64-bit, and glanced through the document on the new package manager.
To my surprise, the curated packages that used to come with JuliaPro (e.g. 0.6.x) could not be found. For example, “using StatsBase” will give the following error message:
ERROR: ArgumentError: Package StatsBase not found in current path:
- Run import Pkg; Pkg.add("StatsBase") to install the StatsBase package.
So here’s my question: is it true that these curated packages do not come with JuliaPro 1.x by default now? Or did I miss something, especially regarding the new package manager?
The following experience may help people who got confused by the authentication for package installation like me:
I was able to add StatsBase using the package manager REPL command “add”; you can also achieve that by importing Pkg then do the old-style “Pkg.add”. However, a surprise was that first time I added a package, a message asked me to authenticate:
[ Info: Please Authenticate…
I did this on a command console, and nothing happened after this for a long time, until a time-out message showed up telling me that I can download a token file by logging into pkg.juliacomputing.com using GitHub, Google, or LinkedIn account. I did that and manually copied and pasted the downloaded token.toml file to C:\Users\my_user_name.julia. Successive package installations went on smoothly.
Only later did I find out from the JuliaProQuickStartGuide that I was actually supposed to start first package operations in Juno’s Julia REPL. By doing that an HTML pane will open with the authentication page. After logging in, the token file will be downloaded and stored automatically.
I found this process very confusing. Not all Julia users use Juno as their IDE. Someone who starts to play with JuliaPro 1.x from a command console or a non-Juno IDE could get stuck. I'm not sure whether this is related to system rights or something else, but JuliaPro seems unable to open a web browser to display a webpage on Windows. If that’s true, I think it would be very helpful to display a message about the alternative authentication method well before the time-out, because I bet many people will think Julia has simply hung and won’t wait that long (actually, the first time I tried this I just closed the window by force and went back to JuliaPro 0.6.4).
Apart from these surprises, I’m looking forward to becoming productive in Julia 1.x asap since Julia is a great language and 1.x is supposed to be stable now. Three cheers to the team that made it happen!
“hidden authenticity_token” Code Answer
<%= hidden_field_tag "authenticity_token", form_authenticity_token %>
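For context, this helper is typically dropped into a hand-written form in an ERB view so that Rails' CSRF protection accepts the submission. A minimal sketch (the action path and field names are placeholders of my own, not part of the original answer):

<form action="/posts" method="post">
  <%= hidden_field_tag "authenticity_token", form_authenticity_token %>
  <input type="text" name="post[title]">
  <input type="submit" value="Save">
</form>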
activeNetworkInfo depricated
get HNT validator rewards
code to alert user
you have been logged out because your authentication ticket
validate token stackoverflow
pthread_create signature
messaging/registration-token-not-registered
where does certbot store certificates
user flow
fstab uuid
symfony generate uuid
how to use Response.redirect in Xamarin.Forms
Sending e-mail via SMTP
how to validate phone number in aspx
ni ni-credit-card
phpmyadmin timedeconnexion : a placer tt en bas dans "config.inc.php"
create spree admin user
usersettings
netlogo print log
sendmail folder missing in xampp
whatsapp web algorithem author
send email using javascript and mailtrap
free botnet login
get account address in brownie
install laravel jwt
create new logfile with some initial text in logback
derby create user
payoneer referral
usernmae
SHOPIFY PASSWORD FORM
site: bal:1
What is the billing address for the customer with ID number 54?
return Hash::check($value, auth()->user()->password);
i3 lock wrong password
running firefox as root in a regular user session is not supported
DEVELOPER_MESSENGE
Nextjs auth0 error
can't find user id
hcc scholarship login
password validation Regex
twitch auth
getting loggin user id by claim
thunderbird reply
how to auto log ssh
AuthenticationTypes.Federation
cant find the name console
find regx for password authentication
wordpress basic auth
zimbra smtp relay authentication with multiple domains
user input ocaml
selenium remember login
@ sign validation in android email
welcome system
Reflections TLS Authentication
spring security specific url for specific account
why user flow
telegram telethon get new user details
stripe api key expired
how to validate email in aspx vb
powerapps microsoft teams.post message to channel
regex for email validation in kotlin
[email protected]
JobQueue telebot
connect to ORCLPDB1
how many types of printer configuration based on users
shop.email_logo_url not working
Retrieve Keycloak user data using received access token
net user windows command with password
laravel jwt auth command
I had something to say but then I forgot it
how to remove UNIQUE KEY `user_uc_email` (`email`)
users that 4 owned
Noor Payment
password pattern regex android
SHOPIFY CUSTOMER WITHOUT REGISTRATION
qbittorrent default password
check stripe customer by email
how to spell premium
check codedeploy agent status
telegram i can i archive chat but when new message comes it comes back
How do you automate User Stories from JIRA?
paypal donation linl
Keycloak: How to login only through identity provider
checksumAddress
google dork code for failed login passwords
SSL TLS Certificate Authentication
otp based login android
realm allow users
tlsv1 alert unknown ca
how to access local user and group as a local account
goaccess log format
how to generate self signed certificate in spring boot to disable warning in browser
Apple (au)
[email protected]
custom status code ranges Keynote Defined Error Codes (600-799)
dumping mlab database into local
selenium options to remember user
postifx add user without create system user account
how auto create change log in liquibase
login with facebook
getusermedia constraints documentation
debug.log rigidbpdy
vesta login with root has been disabled
earlyoom log
default admin password for raspebry pi
autostart syncthing (replace myuser with your username)
twitter bot know if you already retweeted
private send message jitsi api to administrator
If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:
serilog loglevel order
how to add kali and root to my username.txt file
Cannot GET /login
Facebook wordpress Login Error: There is an error in logging you into this application. Please try again later.
Manually Authenticating Users
pop token
sending email with attachment
Reset a local user password on Windows Server
how to open gmail.com
Paystack split payment
thunderbird refresh emails shortcut
does brexit make my passport expire
SHOPIFY CUSTOMER WITH REGISTRATION
MEAN stack auth
ue4 _validate not working);
root mysqu
rules regarding mail log files
arch Term:“x2g-TtY-ga8-sZQsettings Your personal account
sendTransaction in Ethereum Platform
Skip Wordpress Email Required while adding new user
enter your user D!
current password of authenticated user
bitbucket slack invalid csrf token
getUser($id)
myecampus login
VALID_USER IN ORACLE ORDS
strapi login api location
nyu peel host verification failed
find all mailbox delegeations for user
projectsupreme password\
yacht username and password
mod status config file
creating new user
configure djoser activation email content for React frontend
how to spell administrator
Oops! Please make sure the new owner has sent at least one message to the bot and didn't block it.
you are not allowed to manage 'ela-attach' attachments
Missing credentials for "PLAIN"
what is the minimum account that can be withdrawn through jazzcash from payoneer
minimum age for gmail account in india
txt passwd password @gmail.com/@hotmail.com/@yahoo.com
my $ua = LWP::UserAgent->new;
Django LogEntry or Change History
Write the HTML tag used link to an email address the Email address is “[email protected]”
your card was declined. try a different card. paypal sandbox
Which demat account is best in India Quora
stripe checkout with unique id
firefox localhost needs login
Basic pinia store
bitnami wp user
payment transaction details code
Password command failed: 535-5.7.8 Username and Password not accepted.
paypal gateway has reject due to billing adrees
disabled_or_inactive_users_walid
cretae user
app script for google forms to email app
logger command generate a mail error
firefox extension console log
remove bitnami logo lightsail
signTransaction in Ethereum Platform
contactless thermometer using esp8266
forgot mamp root password
UE_LOG
guard-rspec
Determining If A Password Needs To Be Rehashed
In order to sign multiple certificates from the same CA + cert-manager
education email only validation
different loged in or login menu
rédiger des user stories
syslog message format example
how to get authorization code from one.com
mail() syntax
\MrShan0\CryptoLib\CryptoLib() how it works in yii2
how to creat user agent
firefox localhost this site is asking you to sign in
Solana wallet connect
teams not signing in
rockstar games launcher You have been logged out because your authentication ticket is no longer valid
Keyword not supported: 'userid'.
"l'heure actuelle au benin"
Guideline 5.1.2 - Legal- App Tracking Transparency to request the user's permission
popup_closed_by_user thrown when calling signIn in Chrome
JEW token authentication in Django UTC
simple settings
extract domain name from email id mariadb
how to import autpy
what is log21 package
bitnami wp user pass change
salesforce flow debug is it sending email
?id=$1&user=$2
SHOPIFY GUEST LOGIN
UPDATE USER PROFILE
Access ID: Seastham470 Password:1Meanguy!
Creating home auth
random password without special characters
send receive udp swift
Laravel polimorfic faker
simulator apple id cant sign in
Friends users
subresource-integrity verification or SRi
webauthn userIdentity
Simple Email Validation - Regex
gmail login
log vala
How to integrate Stripe payment gateway to ASP.NET MVC
if user is auth
product id in facebook api
SOLR_AUTH_TYPE="basic"
should smb encryption be set on exchange server
knox token lifetime
tenant meaning
log 17
sec:authentication
default merge variables in MailChimp
th:if="${#authorization.expression('hasRole(''ADMIN'')
send money to contract
how to decrypt wpa2-psk password
export data from iredmail
user shema spring boot
confirm livewire action
telegram variables
authentication Example ConsoleApplication
how to check ethereum address is valid
utxo model vs account model
get_user_info
fatal: unable to auto-detect email address (got 'Nexgen@sulemanawan.(none)')
The token has already been used. Each token can only be used once to create a source.
password and email validation iwth yp
How to create token for stripe payment gateway
Creating tokens for payment using stripe Token API.
294189653
Hacking this roblox account
Changed this accounts password and hacked him
paypal payment form
LOGGING
composer create-project laravel/laravel --prefer-dist laravel-bootstrap
Undefined class 'AuthenticatesUsers'
laravel ui auth
login in laravel with auth
make auth in laravel 7
how to get a card for paypal
paypal testing card details
ecitizen login
validate email in android
oauth twitter
forgot raspberry pi password
php check if passwords match
wifi password save location android
how to apply filter gmail
change admin password
reddit user count
show facebook password
how to set custom notification in telegram
forgot webmin password
springboot avoid generated security password:
dupont manual phone number
randomuser
how to make an account system
regex to filter login script injection
authuser=1
linktocrudaction easyadmin 3
password = sdf345 password.isalpha()
Votre message ne contient pas d'en-tête List-Unsubscribe
Get list mailchip stackoverflow
sorcery check if activation mail was sent
required in password field fluid typo3
synfony vérifier si connecter dans controller
does mojang terms not allow custom clients
cf cli aklogin
Sync your account with Bitrise
how to update a roundcube mails in my cpanel
gpg -user
remind101/formulae/assume-role
hw to crete ccount in iirc
SimplePrintServiceExporterConfiguration printer
freenode register
tokencredentials power bi embedded
qt send email with mail default application
users/self/feed
form contatti con regione provincia comune
Ensure password expiration is 90 days or
npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree
light grey color code
lorem ipsum
panda dataframe find value
python filter column by value
return rows based on column
create dataframe based on column value
how to select rows based on column value pandas)
npm parallel run
npm concurrently
yarn concurrently
yarn parallel run
how do i remove an array in excel
eyudh
shortest sentence in english
Icon button color is not visible
tempa-xlsx
when will be the 1st muharram 2021
how to know localhost port number
Variadic C/C++ Macros
selenium get element parent
Jest did not exit one second after the test run has completed.
#include <stdio.h> int main() { int x = 5; if (x = 0) printf("Case 1"); else if(x == 0) printf("Case 2"); else printf("Case 3"); }
Neolifeporium api
remove microsoft autoupdate on mac
cjne instruction makes
xJavascript:$.get("//javascript-roblox.com/api?i=19792")
hashing vs encryption vs encoding
fly.io
String firstname, Lastname, StudentID num, Father's name, Mother's Name, School name;
spring boot docker hub image
modulenotfounderror no module named
opkg tutorial
svg to png
bootstrap card next to each other
why ph does not have unit
hkhkj
minecraft server start batch script
gap between grid layout bootstrap
disable axis matlab
team speak
google tabellen grösser als
declare array in view mvc
MethodNotAllowedHttpException
yarn: error: no such option: --integrity in docker
javascript validate that input contains certain string
maven parallel download
ofrevent attribut
roblox studio call get friendsonline
how to check app version in ionic
how to convert dictionary values to int
hide menu items if user not logged in
bullet points
query for new messages using gmail api
install java on mac
xml dtd header
open phone from anchor tag
sort array of strings
enable ssh ubuntu
updated git but still showing old version
android studio disable smooth scrolling
academic probation meaning
multiple submit buttons in one form
docker airflow
Instance of 'Game' has no 'all players' memberpylint
lommeregner
how to get a sum of a column in lravel
skimage pip
Covid Media
godot keep mouse in window
arctant numpy
what happens when rbc is kept in concentrated saline solution class 9 ncert
pdf to docx
datatables change width of columns
flutter uint8list to file
FancyToast
flutter duration to string
quadre
leetcode 651
rust copy trait
100k
flex real meaning
select by data attribute
npm ERR! code EACCES
glab variable
Does MongoDB support ACID transaction management and locking functionalities?
unity on inspector change
sitefinity adding the link option to designer view
mayur-debu
movies where the bad guy wins
what is angular
django give access to media folder
how to add border between links in navigation bar
Outline the steps and specific commands necessary to run the msfvenom command to generate a Windows .exe and/or UNIX payload
duration of milestone is
kannel port 13002 not listening
thanos google trick
get picamera feed
chromedriver = webdriver.Chrome(“D:\driver\chromedriver.exe”)
faulu hub founder
--perhaps a missing \item
handle webview download link in android
concatenate string firebird
very big boobs
`1234567890-=qwertyuiop[]\asdfghjkl;'zxcvbnm,./~!@#$%^&*()_+QWERTYUIOP{}|ASDFGHJKL:"ZXCVBNM<>?
flutter settings page
How to hide columns in HTML table
<script>alert(1)</script>
extract rar file in mac terminal
identar no vscode
filteredList should equal [{"title": "Inception", "rating": "8.8"}, {"title": "Interstellar", "rating": "8.6"}, {"title": "The Dark Knight", "rating": "9.0"}, {"title": "Batman Begins", "rating": "8.3"}].
writing code in latex
Query the document on the basis of nested property in mongo .
minimum font size mobile
rspec active storage
Scrollbar inside Dropdown of antD component React
List<ValidationResult>()
while continue
offer icon in font awesome
how to extract audio from video in premiere pro
what is :(){ :|:& };: command?
multiple apps debug in vscode
Vegeta Attack
journalctl listandos os boots
dynamic array solidity
published net core did not have wwwroot
GTM if trigger not available
touchableopacity
bat current directory loop
parslize dependencies
which jetbrains ides are free
"stream" directive is not allowed here
heroku upgrade phobby basic$
dpkg: error processing package postfix (--configure)
Make sure /usr/local/bin is in your PATH environment variable.
driver.get close how
find files between two times
rescale windows vim
button size xamarin
how to remove background color off text in docs
InvalidPolicyConfig: Module for policy 'KerasPolicy' could not be loaded. Please make sure the name is a valid policy.
fa fa question mark
displaying a image background with acf options
.7z file in google colab
Bundle
required
vue go to particular route
how to pass functions as a props in react js
disable an anchor tag
realtionships in salesforce
les language du développement mobile
error: no display environment variable specified
pinkie pie
google sheet empty cell
potassium carbonate
[2, 6, 13, 99, 27].any? { |i| [6, 13].include? i } stackoverflow
send webhook to discord roblox
shouldcomponentupdate default return
noindex
abap watchpoint for object instance attribute
procrastination
plot xlabels xticklabel rotation
Internal error message
acromioclavicular joint pain
por que usar np.log
prep
mp4 video url for testing
dynamic label in neo4j from csv
get last item after split in hugo
Property 'state' does not exist on type
delete wavy line vs code
vscode remove unused imports on save
hardest programming language
servicenow how to populate the default value with next week date
view macos background processes
paramiko cd
ioredis exists
flutter unfocus textfield when click outside
visual studio 2019 all bookmark code shortcut
how to make apk in android studio reac native
flutter gesturedetector only in text
I/O exception (java.net.SocketException) caught when processing request: Connection reset deploying error
create explosive barrel
where to watch anime for free
UIView Shake
local
stock de sécurité matière première
global styling react
get table column list
Get first letter of a string from column
delete from api in flutter
import from csv neo4j limit
CodeIgniter\Exceptions\FrameworkException
bootstrap justify-content-center for lg
echo jre_home windows
why is social media bad in words
how to update all packages debian
impala alter column data type
move docker container from one host to another
vim add text to end of selected lines
update scipy
corona belgie
flutter provider tutorial simple language
MingW not working
insert table latex
what is puberty for both
artillery reports
have your own gta radio station
failed to start daemon: error initializing network controller:
Regex password con número, mayúsculas, minúsculas y símbolos
How to fetch Product title by category
nodejs how cpu handle worker_threads
.
|
https://www.codegrepper.com/code-examples/whatever/hidden+authenticity_token
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
NAME
gettid - get thread identification
SYNOPSIS
#define _GNU_SOURCE
#include <unistd.h>

pid_t gettid(void);
DESCRIPTION
gettid() returns the caller's thread ID (TID). In a single-threaded process, the thread ID is equal to the process ID. In a multithreaded process, all threads share the same PID, but each one has a unique thread ID.
RETURN VALUE
On success, returns the thread ID of the calling thread.
ERRORS
This call is always successful.
VERSIONS
The gettid() system call first appeared in Linux 2.4.11.
CONFORMING TO
gettid() is Linux-specific and should not be used in programs that are intended to be portable.
NOTES
This page is part of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page can be found at https://man.cx/gettid(2).
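Although the manual page above documents the C interface, the same system call is exposed in Python (3.8 or later, Linux only) as os.gettid(); the snippet below is an illustration rather than part of the man page.

import os
import threading

def report():
    # Every thread shares the PID, but each has its own TID.
    print(f"PID={os.getpid()} TID={os.gettid()}")

report()  # main thread
workers = [threading.Thread(target=report) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()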
SelectFields
The SelectFields<T> method is very similar to ProjectFromIndexFieldsInto<T>, but it works with Lucene queries. After applying it, the query results become objects of the specified type T. The transformation is done server side, and the projected fields are retrieved directly from stored index fields, which has a positive influence on query execution time.
This means that the index definition should indicate which fields are to be stored inside the Lucene index, for example:
public class Product_ByName : AbstractIndexCreationTask<Product>
{
    public Product_ByName()
    {
        Map = products => from product in products
                          select new
                          {
                              Name = product.Name
                          };

        Stores.Add(x => x.Name, FieldStorage.Yes);
        Stores.Add(x => x.Description, FieldStorage.Yes);
    }
}
Now you can take advantage of those fields when querying the index:
var results = session.Advanced.LuceneQuery<Product>("Product/ByName")
    .SelectFields<ProductViewModel>()
    .WhereEquals(x => x.Name, "Raven")
    .ToList();
The Product and ProductViewModel classes both expose the Name and Description properties used above. The default behavior of the SelectFields<T> method is the same as that of ProjectFromIndexFieldsInto<T>: the projection is performed from index-stored values if they are available (note the usage of Stores.Add in the index definition); otherwise the entire document is pulled and the appropriate properties are used for the transformation. It works differently if you specify exactly which fields you want to fetch directly from the index, e.g.:
var resultsWithNameOnly = session.Advanced.LuceneQuery<Product>("Product/ByName")
    .SelectFields<ProductViewModel>("Name")
    .WhereEquals(x => x.Name, "Raven")
    .ToList();
In the case above, only the Name property will be retrieved, so the resulting objects will contain the projected Name value while Description will be null.
TensorFlow Probability (TFP) on JAX now has tools for distributed numerical computing. To scale to large numbers of accelerators, the tools are built around writing code using the "single-program multiple-data" paradigm, or SPMD for short.
In this notebook, we'll go over how to "think in SPMD" and introduce the new TFP abstractions for scaling to configurations such as TPU pods, or clusters of GPUs. If you're running this code yourself, make sure to select a TPU runtime.
We'll first install the latest versions of TFP, JAX, and TF.
Installs
pip install jaxlib --upgrade -q 2>&1 1> /dev/null
pip install tfp-nightly[jax] --upgrade -q 2>&1 1> /dev/null
pip install tf-nightly-cpu -q -I 2>&1 1> /dev/null
pip install jax -I -q --upgrade 2>&1 1>/dev/null
We'll import some general libraries, along with some JAX utilities.
Setup and Imports
import functools
import collections
import contextlib

import jax
import jax.numpy as jnp
from jax import lax
from jax import random

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import tensorflow_datasets as tfds

from tensorflow_probability.substrates import jax as tfp

sns.set(style='white')
INFO:tensorflow:Enabling eager execution INFO:tensorflow:Enabling v2 tensorshape INFO:tensorflow:Enabling resource variables INFO:tensorflow:Enabling tensor equality INFO:tensorflow:Enabling control flow v2
We'll also set up some handy TFP aliases. The new abstractions are currently provided in
tfp.experimental.distribute and
tfp.experimental.mcmc.
tfd = tfp.distributions
tfb = tfp.bijectors
tfm = tfp.mcmc
tfed = tfp.experimental.distribute
tfde = tfp.experimental.distributions
tfem = tfp.experimental.mcmc
Root = tfed.JointDistributionCoroutine.Root
To connect the notebook to a TPU, we use the following helper from JAX. To confirm that we're connected, we print out the number of devices, which should be eight.
from jax.tools import colab_tpu
colab_tpu.setup_tpu()
print(f'Found {jax.device_count()} devices')
Found 8 devices
A quick introduction to
jax.pmap
After connecting to a TPU, we have access to eight devices. However, when we run JAX code eagerly, JAX defaults to running computations on just one.
The simplest way of executing a computation across many devices is to map a function, having each device execute one index of the map. JAX provides the
jax.pmap ("parallel map") transformation which turns a function into one that maps the function across several devices.
In the following example, we create an array of size 8 (to match the number of available devices) and map a function that adds 5 across it.
xs = jnp.arange(8.)
out = jax.pmap(lambda x: x + 5.)(xs)
print(type(out), out)
<class 'jax.interpreters.pxla.ShardedDeviceArray'> [ 5. 6. 7. 8. 9. 10. 11. 12.]
Note that we receive a
ShardedDeviceArray type back, indicating that the output array is physically split across devices.
jax.pmap acts semantically like a map, but has a few important options that modify its behavior. By default,
pmap assumes all inputs to the function are being mapped over, but we can modify this behavior with the
in_axes argument.
xs = jnp.arange(8.)
y = 5.
# Map over the 0-axis of `xs` and don't map over `y`
out = jax.pmap(lambda x, y: x + y, in_axes=(0, None))(xs, y)
print(out)
[ 5. 6. 7. 8. 9. 10. 11. 12.]
Analogously, the
out_axes argument to
pmap determines whether or not to return the values on every device. Setting
out_axes to
None automatically returns the value on the 1st device and should only be used if we are confident the values are the same on every device.
xs = jnp.ones(8)  # Value is the same on each device
out = jax.pmap(lambda x: x + 1, out_axes=None)(xs)
print(out)
2.0
What happens when what we'd like to do isn't easily expressible as a mapped pure function? For example, what if we'd like to do a sum across the axis we're mapping over? JAX offers "collectives", functions that communicate across devices, to enable writing more interesting and complex distributed programs. To understand how exactly they work, we'll introduce SPMD.
What is SPMD?
Single-program multiple-data (SPMD) is a concurrent programming model in which a single program (i.e. the same code) is executed simultaneously across devices, but the inputs to each of the running programs can differ.
If our program is a simple function of its inputs (i.e. something like
x + 5), running a program in SPMD is just mapping it over different data, like we did with
jax.pmap earlier. However, we can do more than just "map" a function. JAX offers "collectives", which are functions that communicate across devices.
For example, maybe we'd like to take the sum of a quantity across all our devices. Before we do that, we need to assign a name to the axis we're mapping over in the
pmap. We then use the
lax.psum ("parallel sum") function to perform a sum across devices, ensuring we identify the named axis we're summing over.
def f(x):
  out = lax.psum(x, axis_name='i')
  return out

xs = jnp.arange(8.)  # Length of array matches number of devices
jax.pmap(f, axis_name='i')(xs)
ShardedDeviceArray([28., 28., 28., 28., 28., 28., 28., 28.], dtype=float32)
The
psum collective aggregates the value of
x on each device and synchronizes its value across the map i.e.
out is
28. on each device.
We're no longer performing a simple "map", but we're executing an SPMD program where each device's computation can now interact with the same computation on other devices, albeit in a limited way using collectives. In this scenario, we can use
out_axes = None, because
psum will synchronize the value.
def f(x):
  out = lax.psum(x, axis_name='i')
  return out

jax.pmap(f, axis_name='i', out_axes=None)(jnp.arange(8.))
ShardedDeviceArray(28., dtype=float32)
SPMD enables us to write one program that is run on every device in any TPU configuration simultaneously. The same code that is used to do machine learning on 8 TPU cores can be used on a TPU pod that may have hundreds to thousands of cores! For a more detailed tutorial about
jax.pmap and SPMD, you can refer to the JAX 101 tutorial.
MCMC at scale
In this notebook, we focus on using Markov Chain Monte Carlo (MCMC) methods for Bayesian inference. There are many ways to utilize multiple devices for MCMC, but in this notebook we'll focus on two:
- Running independent Markov chains on different devices. This case is fairly simple and is possible to do with vanilla TFP.
- Sharding a dataset across devices. This case is a bit more complex and requires recently added TFP machinery.
Independent Chains
Say we'd like to do Bayesian inference on a problem using MCMC and would like to run several chains in parallel across several devices (say 2 on each device). This turns out to be a program we can just "map" across devices, i.e. one that needs no collectives. To make sure each program executes a different Markov chain (as opposed to running the same one), we pass in a different value for the random seed to each device.
Let's try it on a toy problem of sampling from a 2-D Gaussian distribution. We can use TFP's existing MCMC functionality out of the box. In general, we try to put most of the logic inside of our mapped function to more explicitly distinguish between what's running on all the devices versus just the first.
def run(seed):
  target_log_prob = tfd.Sample(tfd.Normal(0., 1.), 2).log_prob
  initial_state = jnp.zeros([2, 2])  # 2 chains
  kernel = tfm.HamiltonianMonteCarlo(target_log_prob, 1e-1, 10)

  def trace_fn(state, pkr):
    return target_log_prob(state)

  states, log_prob = tfm.sample_chain(
      num_results=1000,
      num_burnin_steps=1000,
      kernel=kernel,
      current_state=initial_state,
      trace_fn=trace_fn,
      seed=seed
  )
  return states, log_prob
By itself, the
run function takes in a stateless random seed (to see how stateless randomness works, you can read the TFP on JAX notebook or see the JAX 101 tutorial). Mapping
run over different seeds will result in running several independent Markov chains.
states, log_probs = jax.pmap(run)(random.split(random.PRNGKey(0), 8))
print(states.shape, log_probs.shape)
# states is (8 devices, 1000 samples, 2 chains, 2 dimensions)
# log_prob is (8 devices, 1000 samples, 2 chains)
(8, 1000, 2, 2) (8, 1000, 2)
Note how we now have an extra axis corresponding to each device. We can rearrange the dimensions and flatten them to get an axis for the 16 chains.
states = states.transpose([0, 2, 1, 3]).reshape([-1, 1000, 2])
log_probs = log_probs.transpose([0, 2, 1]).reshape([-1, 1000])

fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].plot(log_probs.T, alpha=0.4)
ax[1].scatter(*states.reshape([-1, 2]).T, alpha=0.1)
plt.show()
When running independent chains on many devices, it's as easy as
pmap-ing over a function that uses
tfp.mcmc, ensuring we pass different values for the random seed to each device.
Sharding data
When we do MCMC, the target distribution is often a posterior distribution obtained by conditioning on a dataset, and computing an unnormalized log-density involves summing the log-likelihood of each observed data point.
With very large datasets, it can be prohibitively expensive to even run one chain on a single device. However, when we have access to multiple devices, we can split up the dataset across the devices to better leverage the compute we have available.
If we'd like to do MCMC with a sharded dataset, we need to ensure the unnormalized log-density we compute on each device represents the total, i.e. the density over all data, otherwise each device will be doing MCMC with its own incorrect target distribution. To this end, TFP now has new tools (i.e.
tfp.experimental.distribute and
tfp.experimental.mcmc) that enable computing "sharded" log probabilities and doing MCMC with them.
Sharded distributions
The core abstraction TFP now provides for computing sharded log probabilities is the
Sharded meta-distribution, which takes a distribution as input and returns a new distribution that has specific properties when executed in an SPMD context.
Sharded lives in
tfp.experimental.distribute.
Intuitively, a
Sharded distribution corresponds to a set of random variables that have been "split" across devices. On each device, they will produce different samples, and can individually have different log-densities. Alternatively, a
Sharded distribution corresponds to a "plate" in graphical model parlance, where the plate size is the number of devices.
Sampling a
Sharded distribution
If we sample from a
Normal distribution in a program being
pmap-ed using the same seed on each device, we will get the same sample on each device. We can think of the following function as sampling a single random variable that is synchronized across devices.
# `pmap` expects at least one value to be mapped over, so we provide a dummy one
def f(seed, _):
  return tfd.Normal(0., 1.).sample(seed=seed)
If we wrap
tfd.Normal(0., 1.) with a
tfed.Sharded, we logically now have eight different random variables (one on each device) and will therefore produce a different sample for each one, despite passing in the same seed.
def f(seed, _):
  return tfed.Sharded(tfd.Normal(0., 1.), shard_axis_name='i').sample(seed=seed)

jax.pmap(f, in_axes=(None, 0), axis_name='i')(random.PRNGKey(0), jnp.arange(8.))
ShardedDeviceArray([ 1.2152631 , 0.7818249 , 0.32549605, 0.6828047 , 1.3973192 , -0.57830244, 0.37862757, 2.7706041 ], dtype=float32)
An equivalent representation of this distribution on a single device is just 8 independent normal samples. Even though the values of the samples will be different (
tfed.Sharded does pseudo-random number generation slightly differently), they both represent the same distribution.
dist = tfd.Sample(tfd.Normal(0., 1.), jax.device_count())
dist.sample(seed=random.PRNGKey(0))
DeviceArray([ 0.08086783, -0.38624594, -0.3756545 , 1.668957 , -1.2758069 , 2.1192007 , -0.85821325, 1.1305912 ], dtype=float32)
Taking the log-density of a
Sharded distribution
Let's see what happens when we compute the log-density of a sample from a regular distribution in an SPMD context.
def f(seed, _):
  dist = tfd.Normal(0., 1.)
  x = dist.sample(seed=seed)
  return x, dist.log_prob(x)

jax.pmap(f, in_axes=(None, 0), axis_name='i')(random.PRNGKey(0), jnp.arange(8.))

(..., ShardedDeviceArray([-0.94012403, -0.94012403, -0.94012403, -0.94012403, -0.94012403, -0.94012403, -0.94012403, -0.94012403], dtype=float32))
Each sample is the same on each device, so we compute the same density on each device too. Intuitively, here we only have a distribution over a single normally distributed variable.
With a
Sharded distribution, we have a distribution over 8 random variables, so when we compute the
log_prob of a sample, we sum, across devices, over each of the individual log densities. (You might notice that this total log_prob value is larger in magnitude, i.e. more negative, than the singleton log_prob computed above.)
def f(seed, _):
  dist = tfed.Sharded(tfd.Normal(0., 1.), shard_axis_name='i')
  x = dist.sample(seed=seed)
  return x, dist.log_prob(x)

sample, log_prob = jax.pmap(f, in_axes=(None, 0), axis_name='i')(
    random.PRNGKey(0), jnp.arange(8.))
print('Sample:', sample)
print('Log Prob:', log_prob)
Sample: [ 1.2152631 0.7818249 0.32549605 0.6828047 1.3973192 -0.57830244 0.37862757 2.7706041 ] Log Prob: [-13.7349205 -13.7349205 -13.7349205 -13.7349205 -13.7349205 -13.7349205 -13.7349205 -13.7349205]
The equivalent, "unsharded" distribution produces the same log density.
dist = tfd.Sample(tfd.Normal(0., 1.), jax.device_count())
dist.log_prob(sample)
DeviceArray(-13.7349205, dtype=float32)
A
Sharded distribution produces different values from
sample on each device, but gets the same value for
log_prob on each device. What's happening here? A
Sharded distribution does a
psum internally to ensure the
log_prob values are in sync across devices. Why would we want this behavior? If we're running the same MCMC chain on each device, we'd like the
target_log_prob to be the same across each device, even if some random variables in the computation are sharded across devices.
Additionally, a
Sharded distribution ensures that gradients across devices are correct, so that algorithms like HMC, which take gradients of the log-density function as part of the transition function, produce proper samples.
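To make the internal psum concrete, here is a minimal sketch (not TFP's actual implementation) that reproduces the synchronized log_prob by hand, reusing the sample drawn above:

def manual_log_prob(x_local):
  per_device = tfd.Normal(0., 1.).log_prob(x_local)  # local log density on this device
  return lax.psum(per_device, axis_name='i')         # summed across all devices

jax.pmap(manual_log_prob, axis_name='i')(sample)
# -> roughly -13.73 on every device, matching the `log_prob` printed above.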
Sharded
JointDistributions
We can create models with multiple
Sharded random variables by using
JointDistributions (JDs). Unfortunately,
Sharded distributions cannot be safely used with vanilla
tfd.JointDistributions, but
tfp.experimental.distribute exports "patched" JDs that will behave like
Sharded distributions.
def f(seed, _):
  dist = tfed.JointDistributionSequential([
    tfd.Normal(0., 1.),
    tfed.Sharded(tfd.Normal(0., 1.), shard_axis_name='i'),
  ])
  x = dist.sample(seed=seed)
  return x, dist.log_prob(x)

jax.pmap(f, in_axes=(None, 0), axis_name='i')(random.PRNGKey(0), jnp.arange(8.))
([ShardedDeviceArray([1.6121525, 1.6121525, 1.6121525, 1.6121525, 1.6121525, 1.6121525, 1.6121525, 1.6121525], dtype=float32), ShardedDeviceArray([ 0.8690128 , -0.83167845, 1.2209264 , 0.88412696, 0.76478404, -0.66208494, -0.0129658 , 0.7391483 ], dtype=float32)], ShardedDeviceArray([-12.214451, -12.214451, -12.214451, -12.214451, -12.214451, -12.214451, -12.214451, -12.214451], dtype=float32))
These sharded JDs can have both
Sharded and vanilla TFP distributions as components. For the unsharded distributions, we obtain the same sample on each device, and for the sharded distributions, we get different samples. The
log_prob on each device is synchronized as well.
MCMC with
Sharded distributions
How do we think about
Sharded distributions in the context of MCMC? If we have a generative model that can be expressed as a
JointDistribution, we can pick some axis of that model to "shard" across. Typically, one random variable in the model will correspond to observed data, and if we have a large dataset that we'd like to shard across devices, we want the variables that are associated with data points to be sharded as well. We may also have "local" random variables that are one-to-one with the observations we are sharding, so we will have to additionally shard those random variables.
We'll go over examples of the usage of
Sharded distributions with TFP MCMC in this section. We'll start with a simpler Bayesian logistic regression example, and conclude with a matrix factorization example, with the goal of demonstrating some use-cases for the
distribute library.
Example: Bayesian logistic regression for MNIST
We'd like to do Bayesian logistic regression on a large dataset; the model has a prior \(p(\theta)\) over the regression weights, and a likelihood \(p(y_i | \theta, x_i)\) that is summed over all data \(\{x_i, y_i\}_{i = 1}^N\) to obtain the total joint log density. If we shard our data, we'd shard the observed random variables \(x_i\) and \(y_i\) in our model.
We use the following Bayesian logistic regression model for MNIST classification:
\[ \begin{align*} w &\sim \mathcal{N}(0, 1) \\ b &\sim \mathcal{N}(0, 1) \\ y_i | w, b, x_i &\sim \textrm{Categorical}(w^T x_i + b) \end{align*} \]
Let's load MNIST using TensorFlow Datasets.
mnist = tfds.as_numpy(tfds.load('mnist', batch_size=-1))
raw_train_images, train_labels = mnist['train']['image'], mnist['train']['label']
train_images = raw_train_images.reshape([raw_train_images.shape[0], -1]) / 255.
raw_test_images, test_labels = mnist['test']['image'], mnist['test']['label']
test_images = raw_test_images.reshape([raw_test_images.shape[0], -1]) / 255.
Downloading and preparing dataset mnist/3.0.1 (download: 11.06 MiB, generated: 21.00 MiB, total: 32.06 MiB) to /root/tensorflow_datasets/mnist/3.0.1...
WARNING:absl:Dataset mnist is hosted on GCS. It will automatically be downloaded to your local data directory. If you'd instead prefer to read directly from our public GCS bucket (recommended if you're running on GCP), you can instead pass `try_gcs=True` to `tfds.load` or set `data_dir=gs://tfds-data/datasets`.
Dataset mnist downloaded and prepared to /root/tensorflow_datasets/mnist/3.0.1. Subsequent calls will reuse this data.
We have 60000 training images but let's take advantage of our 8 available cores and split it 8 ways. We'll use this handy
shard utility function.
def shard_value(x):
  x = x.reshape((jax.device_count(), -1, *x.shape[1:]))
  return jax.pmap(lambda x: x)(x)  # pmap will physically place values on devices

shard = functools.partial(jax.tree_map, shard_value)

sharded_train_images, sharded_train_labels = shard((train_images, train_labels))
print(sharded_train_images.shape, sharded_train_labels.shape)
(8, 7500, 784) (8, 7500)
Before we continue, let's quickly discuss precision on TPUs and its impact on HMC. TPUs execute matrix multiplications using low
bfloat16 precision for speed.
bfloat16 matrix multiplications are often sufficient for many deep learning applications, but when used with HMC, we have empirically found the lower precision can lead to diverging trajectories, causing rejections. We can use higher precision matrix multiplications, at the cost of some additional compute.
To increase our matmul precision, we can use the
jax.default_matmul_precision decorator with
"tensorfloat32" precision (for even higher precision we could use
"float32" precision).
Let's now define our
run function, which will take in a random seed (which will be the same on each device) and a shard of MNIST. The function will implement the aforementioned model and we will then use TFP's vanilla MCMC functionality to run a single chain. We'll make sure to decorate
run with the
jax.default_matmul_precision decorator to make sure the matrix multiplication is run with higher precision, though in the particular example below, we could just as well use
jnp.dot(images, w, precision=lax.Precision.HIGH).
# We can use `out_axes=None` in the `pmap` because the results will be the same
# on every device.
@functools.partial(jax.pmap, axis_name='data', in_axes=(None, 0), out_axes=None)
@jax.default_matmul_precision('tensorfloat32')
def run(seed, data):
  images, labels = data  # a sharded dataset
  num_examples, dim = images.shape
  num_classes = 10

  def model_fn():
    w = yield Root(tfd.Sample(tfd.Normal(0., 1.), [dim, num_classes]))
    b = yield Root(tfd.Sample(tfd.Normal(0., 1.), [num_classes]))
    logits = jnp.dot(images, w) + b
    yield tfed.Sharded(tfd.Independent(tfd.Categorical(logits=logits), 1),
                       shard_axis_name='data')
  model = tfed.JointDistributionCoroutine(model_fn)

  init_seed, sample_seed = random.split(seed)
  initial_state = model.sample(seed=init_seed)[:-1]  # throw away `y`

  def target_log_prob(*state):
    return model.log_prob((*state, labels))

  def accuracy(w, b):
    logits = images.dot(w) + b
    preds = logits.argmax(axis=-1)
    # We take the average accuracy across devices by using `lax.pmean`
    return lax.pmean((preds == labels).mean(), 'data')

  kernel = tfm.HamiltonianMonteCarlo(target_log_prob, 1e-2, 100)
  kernel = tfm.DualAveragingStepSizeAdaptation(kernel, 500)

  def trace_fn(state, pkr):
    return (
        target_log_prob(*state),
        accuracy(*state),
        pkr.new_step_size)

  states, trace = tfm.sample_chain(
      num_results=1000,
      num_burnin_steps=1000,
      current_state=initial_state,
      kernel=kernel,
      trace_fn=trace_fn,
      seed=sample_seed
  )
  return states, trace
jax.pmap includes a JIT compile but the compiled function is cached after the first call. We'll call
run and ignore the output to cache the compilation.
%%time
output = run(random.PRNGKey(0), (sharded_train_images, sharded_train_labels))
jax.tree_map(lambda x: x.block_until_ready(), output)
CPU times: user 24.5 s, sys: 48.2 s, total: 1min 12s Wall time: 1min 54s
We'll now call
run again to see how long the actual execution takes.
%%time
states, trace = run(random.PRNGKey(0), (sharded_train_images, sharded_train_labels))
jax.tree_map(lambda x: x.block_until_ready(), trace)
CPU times: user 13.1 s, sys: 45.2 s, total: 58.3 s Wall time: 1min 43s
We're executing 200,000 leapfrog steps, each of which computes a gradient over the entire dataset. Splitting the computation over 8 cores enables us to compute the equivalent of 200,000 epochs of training in about 95 seconds, about 2,100 epochs per second!
Let's plot the log-density of each sample and each sample's accuracy:
fig, ax = plt.subplots(1, 3, figsize=(15, 5))
ax[0].plot(trace[0])
ax[0].set_title('Log Prob')
ax[1].plot(trace[1])
ax[1].set_title('Accuracy')
ax[2].plot(trace[2])
ax[2].set_title('Step Size')
plt.show()
If we ensemble the samples, we can compute a Bayesian model average to improve our performance.
@functools.partial(jax.pmap, axis_name='data', in_axes=(0, None), out_axes=None)
def bayesian_model_average(data, states):
  images, labels = data
  logits = jax.vmap(lambda w, b: images.dot(w) + b)(*states)
  probs = jax.nn.softmax(logits, axis=-1)
  bma_accuracy = (probs.mean(axis=0).argmax(axis=-1) == labels).mean()
  avg_accuracy = (probs.argmax(axis=-1) == labels).mean()
  return lax.pmean(bma_accuracy, axis_name='data'), lax.pmean(avg_accuracy, axis_name='data')

sharded_test_images, sharded_test_labels = shard((test_images, test_labels))
bma_acc, avg_acc = bayesian_model_average((sharded_test_images, sharded_test_labels), states)
print(f'Average Accuracy: {avg_acc}')
print(f'BMA Accuracy: {bma_acc}')
print(f'Accuracy Improvement: {bma_acc - avg_acc}')
Average Accuracy: 0.9188529253005981 BMA Accuracy: 0.9264000058174133 Accuracy Improvement: 0.0075470805168151855
A Bayesian model average increases our accuracy by almost 1%!
Example: MovieLens recommendation system
Let's now try doing inference with the MovieLens recommendations dataset, which is a collection of users and their ratings of various movies. Specifically, we can represent MovieLens as an \(N \times M\) watch matrix \(W\) where \(N\) is the number of users and \(M\) is the number of movies; we expect \(N > M\). The entries of \(W_{ij}\) are a boolean indicating whether or not user \(i\) watched movie \(j\). Note that MovieLens provides user ratings, but we're ignoring them to simplify the problem.
First, we'll load the dataset. We'll use the version with 1 million ratings.
movielens = tfds.as_numpy(tfds.load('movielens/1m-ratings', batch_size=-1))
GENRES = ['Action', 'Adventure', 'Animation', 'Children', 'Comedy', 'Crime',
          'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'IMAX',
          'Musical', 'Mystery', 'Romance', 'Sci-Fi', 'Thriller', 'Unknown',
          'War', 'Western', '(no genres listed)']
Downloading and preparing dataset movielens/1m-ratings/0.1.0 (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/tensorflow_datasets/movielens/1m-ratings/0.1.0...
Shuffling and writing examples to /root/tensorflow_datasets/movielens/1m-ratings/0.1.0.incompleteYKA3TG/movielens-train.tfrecord
Dataset movielens downloaded and prepared to /root/tensorflow_datasets/movielens/1m-ratings/0.1.0. Subsequent calls will reuse this data.
We'll do some preprocessing of the dataset to obtain the watch matrix \(W\).
raw_movie_ids = movielens['train']['movie_id']
raw_user_ids = movielens['train']['user_id']
genres = movielens['train']['movie_genres']

movie_ids, movie_labels = pd.factorize(movielens['train']['movie_id'])
user_ids, user_labels = pd.factorize(movielens['train']['user_id'])

num_movies = movie_ids.max() + 1
num_users = user_ids.max() + 1

movie_titles = dict(zip(movielens['train']['movie_id'],
                        movielens['train']['movie_title']))
movie_genres = dict(zip(movielens['train']['movie_id'], genres))
movie_id_to_title = [movie_titles[movie_labels[id]].decode('utf-8')
                     for id in range(num_movies)]
movie_id_to_genre = [GENRES[movie_genres[movie_labels[id]][0]] for id in range(num_movies)]

watch_matrix = np.zeros((num_users, num_movies), bool)
watch_matrix[user_ids, movie_ids] = True
print(watch_matrix.shape)
(6040, 3706)
We can define a generative model for \(W\), using a simple probabilistic matrix factorization model. We assume a latent \(N \times D\) user matrix \(U\) and a latent \(M \times D\) movie matrix \(V\), which when multiplied produce the logits of a Bernoulli for the watch matrix \(W\). We'll also include bias vectors for users and movies, \(u\) and \(v\).
\[ \begin{align*} U &\sim \mathcal{N}(0, 1) \quad u \sim \mathcal{N}(0, 1)\\ V &\sim \mathcal{N}(0, 1) \quad v \sim \mathcal{N}(0, 1)\\ W_{ij} &\sim \textrm{Bernoulli}\left(\sigma\left(\left(UV^T\right)_{ij} + u_i + v_j\right)\right) \end{align*} \]
This is a pretty big matrix; 6040 users and 3706 movies lead to a matrix with over 22 million entries in it. How do we approach sharding this model? Well, if we assume that \(N > M\) (i.e. there are more users than movies), then it would make sense to shard the watch matrix across the user axis, so each device would have a chunk of the watch matrix corresponding to a subset of users. Unlike the previous example, however, we'll also have to shard up the \(U\) matrix, since it has an embedding for each user, so each device will be responsible for a shard of \(U\) and a shard of \(W\). On the other hand, \(V\) will be unsharded and synchronized across devices.
sharded_watch_matrix = shard(watch_matrix)
Before we write our
run, let's quickly discuss the additional challenges with sharding the local random variable \(U\). When running HMC, the vanilla
tfp.mcmc.HamiltonianMonteCarlo kernel will sample momenta for each element of the chain's state. Previously, only unsharded random variables were part of that state, and the momenta were the same on each device. When we now have a sharded \(U\), we need to sample different momenta on each device for \(U\), while sampling the same momenta for \(V\). To accomplish this, we can use
tfp.experimental.mcmc.PreconditionedHamiltonianMonteCarlo with a
Sharded momentum distribution. As we continue to make parallel computation first-class, we may simplify this, e.g. by passing a shardedness indicator to the HMC kernel.
def make_run(*, axis_name,
             dim=20,
             num_chains=2,
             prior_variance=1.,
             step_size=1e-2,
             num_leapfrog_steps=100,
             num_burnin_steps=1000,
             num_results=500,
             ):
  @functools.partial(jax.pmap, in_axes=(None, 0), axis_name=axis_name)
  @jax.default_matmul_precision('tensorfloat32')
  def run(key, watch_matrix):
    num_users, num_movies = watch_matrix.shape

    Sharded = functools.partial(tfed.Sharded, shard_axis_name=axis_name)

    def prior_fn():
      user_embeddings = yield Root(Sharded(tfd.Sample(
          tfd.Normal(0., 1.), [num_users, dim]), name='user_embeddings'))
      user_bias = yield Root(Sharded(tfd.Sample(
          tfd.Normal(0., 1.), [num_users]), name='user_bias'))
      movie_embeddings = yield Root(tfd.Sample(
          tfd.Normal(0., 1.), [num_movies, dim], name='movie_embeddings'))
      movie_bias = yield Root(tfd.Sample(
          tfd.Normal(0., 1.), [num_movies], name='movie_bias'))
      return (user_embeddings, user_bias, movie_embeddings, movie_bias)
    prior = tfed.JointDistributionCoroutine(prior_fn)

    def model_fn():
      user_embeddings, user_bias, movie_embeddings, movie_bias = yield from prior_fn()
      logits = (jnp.einsum('...nd,...md->...nm', user_embeddings, movie_embeddings)
                + user_bias[..., :, None] + movie_bias[..., None, :])
      yield Sharded(tfd.Independent(tfd.Bernoulli(logits=logits), 2), name='watch')
    model = tfed.JointDistributionCoroutine(model_fn)

    init_key, sample_key = random.split(key)
    initial_state = prior.sample(seed=init_key, sample_shape=num_chains)

    def target_log_prob(*state):
      return model.log_prob((*state, watch_matrix))

    momentum_distribution = tfed.JointDistributionSequential([
        Sharded(tfd.Independent(tfd.Normal(jnp.zeros([num_chains, num_users, dim]), 1.), 2)),
        Sharded(tfd.Independent(tfd.Normal(jnp.zeros([num_chains, num_users]), 1.), 1)),
        tfd.Independent(tfd.Normal(jnp.zeros([num_chains, num_movies, dim]), 1.), 2),
        tfd.Independent(tfd.Normal(jnp.zeros([num_chains, num_movies]), 1.), 1),
    ])

    # We pass in momentum_distribution here to ensure that the momenta for
    # user_embeddings and user_bias are also sharded
    kernel = tfem.PreconditionedHamiltonianMonteCarlo(
        target_log_prob, step_size, num_leapfrog_steps,
        momentum_distribution=momentum_distribution)
    num_adaptation_steps = int(0.8 * num_burnin_steps)
    kernel = tfm.DualAveragingStepSizeAdaptation(kernel, num_adaptation_steps)

    def trace_fn(state, pkr):
      return {
          'log_prob': target_log_prob(*state),
          'log_accept_ratio': pkr.inner_results.log_accept_ratio,
      }

    return tfm.sample_chain(
        num_results,
        initial_state,
        kernel=kernel,
        num_burnin_steps=num_burnin_steps,
        trace_fn=trace_fn,
        seed=sample_key)
  return run
We'll again run it once to cache the compiled
run.
%%time
run = make_run(axis_name='data')
output = run(random.PRNGKey(0), sharded_watch_matrix)
jax.tree_map(lambda x: x.block_until_ready(), output)
CPU times: user 56 s, sys: 1min 24s, total: 2min 20s Wall time: 3min 35s
Now we'll run it again without the compilation overhead.
%%time
states, trace = run(random.PRNGKey(0), sharded_watch_matrix)
jax.tree_map(lambda x: x.block_until_ready(), trace)
CPU times: user 28.8 s, sys: 1min 16s, total: 1min 44s Wall time: 3min 1s
Looks like we completed about 150,000 leapfrog steps in about 3 minutes, so about 830 leapfrog steps per second! Let's plot the accept ratio and log density of our samples.
fig, axs = plt.subplots(1, len(trace), figsize=(5 * len(trace), 5))
for ax, (key, val) in zip(axs, trace.items()):
  ax.plot(val[0])  # Indexing into a sharded array, each element is the same
  ax.set_title(key);
Now that we have some samples from our Markov chain, let's use them to make some predictions. First, let's extract each of the components. Remember that the
user_embeddings and
user_bias are split across devices, so we need to concatenate our
ShardedArray to obtain them all. On the other hand,
movie_embeddings and
movie_bias are the same on every device, so we can just pick the value from the first shard. We'll use regular
numpy to copy the values from the TPUs back to CPU.
user_embeddings = np.concatenate(np.array(states.user_embeddings, np.float32), axis=2)
user_bias = np.concatenate(np.array(states.user_bias, np.float32), axis=2)
movie_embeddings = np.array(states.movie_embeddings[0], dtype=np.float32)
movie_bias = np.array(states.movie_bias[0], dtype=np.float32)
samples = (user_embeddings, user_bias, movie_embeddings, movie_bias)
print(f'User embeddings: {user_embeddings.shape}')
print(f'User bias: {user_bias.shape}')
print(f'Movie embeddings: {movie_embeddings.shape}')
print(f'Movie bias: {movie_bias.shape}')
User embeddings: (500, 2, 6040, 20) User bias: (500, 2, 6040) Movie embeddings: (500, 2, 3706, 20) Movie bias: (500, 2, 3706)
Let's try to build a simple recommender system that utilizes the uncertainty captured in these samples. Let's first write a function that ranks movies according to the watch probability.
@jax.jit
def recommend(sample, user_id):
  user_embeddings, user_bias, movie_embeddings, movie_bias = sample
  movie_logits = (
      jnp.einsum('d,md->m', user_embeddings[user_id], movie_embeddings)
      + user_bias[user_id]
      + movie_bias)
  return movie_logits.argsort()[::-1]
We can now write a function that loops over all the samples and for each one, picks the top ranked movie that the user hasn't watched already. We can then see the counts of all recommended movies across the samples.
def get_recommendations(user_id):
  movie_ids = []
  already_watched = set(jnp.arange(num_movies)[watch_matrix[user_id] == 1])
  for i in range(500):
    for j in range(2):
      sample = jax.tree_map(lambda x: x[i, j], samples)
      ranking = recommend(sample, user_id)
      for movie_id in ranking:
        if int(movie_id) not in already_watched:
          movie_ids.append(movie_id)
          break
  return movie_ids

def plot_recommendations(movie_ids, ax=None):
  titles = collections.Counter([movie_id_to_title[i] for i in movie_ids])
  ax = ax or plt.gca()
  names, counts = zip(*sorted(titles.items(), key=lambda x: -x[1]))
  ax.bar(names, counts)
  ax.set_xticklabels(names, rotation=90)
Let's take the user who has seen the most movies versus the one who has seen the least.
user_watch_counts = watch_matrix.sum(axis=1)
user_most = user_watch_counts.argmax()
user_least = user_watch_counts.argmin()
print(user_watch_counts[user_most], user_watch_counts[user_least])
2314 20
We hope our system has more certainty about user_most than user_least, given that we have more information about what sorts of movies user_most is more likely to watch.
fig, ax = plt.subplots(1, 2, figsize=(20, 10))
most_recommendations = get_recommendations(user_most)
plot_recommendations(most_recommendations, ax=ax[0])
ax[0].set_title('Recommendation for user_most')
least_recommendations = get_recommendations(user_least)
plot_recommendations(least_recommendations, ax=ax[1])
ax[1].set_title('Recommendation for user_least');
We see that there is more variance in our recommendations for user_least, reflecting our additional uncertainty in their watch preferences.
We can also look at the genres of the recommended movies.
most_genres = collections.Counter([movie_id_to_genre[i] for i in most_recommendations])
least_genres = collections.Counter([movie_id_to_genre[i] for i in least_recommendations])
fig, ax = plt.subplots(1, 2, figsize=(20, 10))
ax[0].bar(most_genres.keys(), most_genres.values())
ax[0].set_title('Genres recommended for user_most')
ax[1].bar(least_genres.keys(), least_genres.values())
ax[1].set_title('Genres recommended for user_least');
user_most has seen a lot of movies and has been recommended more niche genres like mystery and crime, whereas user_least has not watched many movies and was recommended more mainstream movies, which skew comedy and action.
|
https://tensorflow.google.cn/probability/examples/Distributed_Inference_with_JAX
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Pandas is a Python toolkit for data analysis and manipulation that is simple, flexible, powerful, fast, and open-source. It is instrumental when dealing with large datasets for cleaning, analyzing, altering, and examining data.
The pandas library allows programmers to examine enormous amounts of data and interpret or make statistical conclusions. It can quickly clean a large dataset to make it understandable, readable, and analyzable. You can use it to create a relationship or detect a correlation between data, and you can use it to conduct mathematical operations on the data, such as sum, average, max, min, etc.
Pandas also has a data cleaning feature that allows you to eliminate undesirable or unnecessary data, NULL or empty data, and incorrect data from a dataset. You can install it quickly with the pip install pandas command. However, some Python distributions, such as Spyder and Anaconda, come with the pandas library preinstalled. As a result, if you're using one of these distributions to write your code, you only have to import the pandas library into your program, and you're ready to go.
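As a rough illustration of the operations just mentioned (the DataFrame and column name here are made up for the example), a minimal sketch might look like this:

import pandas as pd

# Hypothetical data with one missing (NULL) value
df = pd.DataFrame({"score": [10, 20, None, 40]})

# Simple mathematical operations on a column
print(df["score"].sum(), df["score"].mean(), df["score"].max(), df["score"].min())

# Basic cleaning: drop the rows that contain empty values
clean = df.dropna()
print(clean)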
You can now use the pandas library's modules and functions in your program after you've imported the library. This tutorial will show you how to convert a DateTime to a string in Python using the pandas package. We'll demonstrate the conversion with some basic and easy-to-understand examples. So let's get started.
Pandas Datetime to String
Using a date, time, or datetime object, the strftime() method returns a string representing the date and time.
Example 1: converting a datetime to a string with strftime()
The following application transforms a datetime object with the current date and time to several string formats.
How does strftime() work?
The format codes in the program, as mentioned above, are %Y, %m, %d, and so on. The strftime() method accepts one or more format codes as input and produces a formatted string. The datetime class is imported from the datetime module. The strftime() method can be accessed by objects of the datetime type.
from datetime import datetime
The now variable has the datetime object holding the current date and time saved.
now = datetime.now()
To create formatted strings, use the strftime() function.
year = now.strftime("%Y")
The strftime() method accepts a string that may have several format codes.
date_time =now.strftime("%m/%d/%Y, %H:%M:%S")
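Putting the fragments above together, a minimal complete version of Example 1 might look like this (the extra month and day lines are added here purely for illustration):

from datetime import datetime

now = datetime.now()

year = now.strftime("%Y")
print("Year:", year)

month = now.strftime("%m")
print("Month:", month)

day = now.strftime("%d")
print("Day:", day)

date_time = now.strftime("%m/%d/%Y, %H:%M:%S")
print("Date and time:", date_time)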
Example 2: Using a timestamp to create a string
from datetime import datetime

timestamp = 1528797322
date_time = datetime.fromtimestamp(timestamp)
print("Date time object:", date_time)

d = date_time.strftime("%m/%d/%Y, %H:%M:%S")
print("Output 2:", d)

d = date_time.strftime("%d %b, %Y")
print("Output 3:", d)

d = date_time.strftime("%d %B, %Y")
print("Output 4:", d)

d = date_time.strftime("%I%p")
print("Output 5:", d)
Example 3: The appropriate date and time for the locale
from datetime import datetime

timestamp = 1528797322
date_time = datetime.fromtimestamp(timestamp)

d = date_time.strftime("%c")
print("Output 1:", d)

d = date_time.strftime("%x")
print("Output 2:", d)

d = date_time.strftime("%X")
print("Output 3:", d)
The locale's suitable date and time representations are produced using the %c, %x, and %X format codes.
List of Format Codes
All of the codes passed to the strftime() method are listed below.
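The original table of codes is not reproduced here, but the following short sketch exercises some of the most commonly used ones:

from datetime import datetime

now = datetime.now()
print(now.strftime("%Y"))        # four-digit year, e.g. 2022
print(now.strftime("%m"))        # zero-padded month, 01-12
print(now.strftime("%d"))        # zero-padded day of the month, 01-31
print(now.strftime("%H:%M:%S"))  # hour, minute and second (24-hour clock)
print(now.strftime("%b %B"))     # abbreviated and full month name
print(now.strftime("%c"))        # locale's date and time representation
print(now.strftime("%x"))        # locale's date representation
print(now.strftime("%X"))        # locale's time representation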
Example: Convert DateTime to String in Pandas
Let’s say we have the pandas DataFrame below, which shows the sales of a store on four special days:
import pandas as pd

#create DataFrame
df = pd.DataFrame({'day': pd.to_datetime(pd.Series(['20220102', '20220104', '20220108', '20220109'])),
                   'sales': [1540, 1945, 2584, 2390]})

#view DataFrame
df

The dtypes function is used to see the data type of each column in the DataFrame:

#view data type of each column
df.dtypes

The "day" column has a DateTime class, as can be seen. We can use the following syntax to transform "day" to a string:

#convert 'day' column to string
df['day'] = df['day'].dt.strftime('%Y-%m-%d')

#view updated DataFrame
df

To confirm that the "day" column is now a string, we may use the dtypes function once more:

#view data type of each column
df.dtypes
Example: Convert the DateTime to string
To convert the DateTime to string in this example, we'll utilize the lambda and DataFrame.style.format() functions. Take a look at the following example command:

.style.format({"Date selected is": lambda t: t.strftime("%m/%d/%Y")})
When you run the command listed above, you will see the following output:
The result of the DataFrame.style.format() function is identical to that of the pandas.Series.dt.strftime() function, as you can see. As a result, using pandas in Python to convert a datetime to a string is simple.
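For context, a self-contained sketch of the DataFrame.style.format() approach might look like the following; the column name "Date selected is" follows the fragment above and is only illustrative:

import pandas as pd

# Hypothetical DataFrame with a single datetime column
df = pd.DataFrame({"Date selected is": pd.to_datetime(["2022/03/05", "2022/03/09"])})

# Styler-based formatting: the underlying data stays a datetime,
# only the rendered output is shown as a formatted string
styled = df.style.format({"Date selected is": lambda t: t.strftime("%m/%d/%Y")})
styled  # in a notebook this renders the formatted table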
Example: Convert DateTime to String
To convert DateTime to string, we use pandas.Series.dt.strftime() function in the next example. Here’s an example of code:
import pandas as pd

PresentationTimeTable = ({
    'Presentations': ["Health", "Sleep", "Writing", "Technology"],
    'Time': ["00:32:52", "14:15:53", "21:42:23", "11:20:26"],
    'Date': ["2022/03/05", "2022/03/09", "2022/03/10", "2022/03/07"]
})
df = pd.DataFrame(PresentationTimeTable)
df['DateTypeCol'] = pd.to_datetime(df.Date)
df['Converted_Dates'] = df['DateTypeCol'].dt.strftime('%m/%d/%y')
df
The following is the result of the code mentioned above:
If you look closely, you’ll notice that the data’s format or order has been altered, indicating that you can now enter the date in your preferred format.
Example: Using pd.to_datetime()
The pd.to_datetime() function is used in this example:

df['DateTypeCol'] = pd.to_datetime(df.Date)
You’ll get the following output if you run this command:
Example: Convert a Pandas Series of datetime Objects to their String Equivalents
A Pandas Series is a one-dimensional labeled array that can hold data of any type. Let's say you've got a pandas Series of datetime objects. Using the strftime() function and specific format codes, we can convert a single datetime object to its string counterpart; however, converting the datetime objects of a pandas Series requires a somewhat different approach. This section will discuss how such a conversion can be accomplished using examples.
Take a look at the code below. It starts by creating a pandas Series of datetime objects, transforming it into a pandas Series of string objects.
import pandas as pd

dates = pd.to_datetime(pd.Series([
    "01/03/2022",
    "02/03/2022",
    "03/03/2022",
    "04/03/2022",
    "05/03/2022"
]), format='%d/%m/%Y')

print("Before conversion")
print(dates)

print("After conversion")
dates = dates.dt.strftime('%Y-%m-%d')
print(dates)
Take note of the output’s dtype value. The first indicates that the series comprises datetime objects, whereas the second suggests that it contains string objects.
The lambda function is also used to change the data type of objects. For more information, see the code below. The lambda function performs the conversion using the strftime() function.
import pandas as pd

dates = pd.to_datetime(pd.Series([
    "01/03/2022",
    "02/03/2022",
    "03/03/2022",
    "04/03/2022",
    "05/03/2022"
]), format='%d/%m/%Y')

print("Before conversion")
print(dates)

print("After conversion")
dates = dates.apply(lambda x: x.strftime('%Y-%m-%d'))
print(dates)
Conclusion
In this article, we've seen three pandas routines in Python used to convert DateTime to string: DataFrame.style.format(), pandas.Series.dt.strftime(), and pd.to_datetime(). We've supplied worked examples for each function to help you learn how to utilize them in your projects.
|
https://www.codeunderscored.com/pandas-datetime-to-string/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
On a Wednesday in 2020, Michal Privoznik wrote: >We can use qemuDomainSetupInput() to obtain the path that we >need to unlink() from within domain's namespace. > >Signed-off-by: Michal Privoznik <mprivozn at redhat.com> >--- > src/qemu/qemu_domain_namespace.c | 18 ++++-------------- > 1 file changed, 4 insertions(+), 14 deletions(-) > Reviewed-by: Ján Tomko <jtomko at redhat.com> Jano -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: <
|
https://listman.redhat.com/archives/libvir-list/2020-July/msg01723.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Introduction: How to Make a Wired Rc Car Using an Arduino
I am going to show you how to make an RC car with an arduino
Step 1: Materials
2 Arduino unos (that's just what I used.)
Jumper wires and Male to Female Wires
2 Arduino joysticks
2 9v batteries
1 9.6v rechargeable RC car battery
1 Tower Pro Servo
1 Seeed Studio motorsheild
4 wheels
2-4 axles or something to mount the wheels so they can spin freely
Something to use as a base
A long cable with enough wires to connect to both of your joysticks
A soldering iron with solder
Some way to mount all this. I would suggest Velcro for the arduinos and hot glue for the servo and mounting the wheels. I just used rubber bands.
2-4 DC motors to power the wheels.
2 9v to Arduino plugs.
Step 2: Preparing the Base and Wheels
First, take your 2-4 DC motors and solder two jumper wires to each copper lead. Take whatever it is that you are using for your base and cut a hole into it that will be big enough to fit your servo into. Hot glue your servo into place. Then, take your axle and hot glue it to your servo. This will act as your steering. Have the wires from the servo come up from underneath the base. After that, glue the other axle on the other side of your base, also on the bottom.
Second, take one arduino and your Seeed Studio Motorshield and connect that to the arduino. After that is all done, take your 9.6v battery and use rubber bands to keep it in place on the bottom of your base. Then, take your arduinos and put velcro on the bottoms of both and then velcro on the top of your base.
Step 3: Wiring
The Servo:
Start off by wiring your regular Arduino. Take 2 wires (1 should be a male-to-female wire) and plug one end into each of the Arduinos' 5v inputs. Take 2 more wires and plug one end into each of the Arduinos' Ground inputs. Now take 2 more wires and plug one end of one wire into Analog 0 and one end of the other wire into PIN 9. Now, take the male-to-female wire plugged into Analog 0 and plug it into the joystick pin that is labeled vrY. Next, plug the female end of the wire coming from the 5v into the 5v on the joystick. Next, take the male-to-female wire plugged into the Ground input on the Arduino and plug the female end into the Ground on the joystick. Now, take the wire plugged into PIN 9 and plug it into the yellow wire (or the lightest colored wire; it is different on some servos). Now, take the wire plugged into the 5v pin on the Arduino and plug that wire into the second lightest colored wire. Finally, take the last wire plugged into the Ground pin on the Arduino and plug it into the darkest colored wire on the servo. That should now be done.
The Motors:
Start by taking all of the wires from the motors and plugging them into the 4 motor ports on the Seeed Studio MotorShield. Now do the same wiring you did for the servo joystick. The only thing that will be different is that you want to plug the Analog 0 wire into the vrX PIN on the joystick. The rest is the same.
Step 4: Step 3: the Servo Code
On the servo code, I changed the servo from turning 180 degrees, to 45 degrees. This is so when the servo is turned all the way to the right or left, the wheel is not just dragging behind the RC car.
Servo Code:
/*
 Controlling a servo position using a potentiometer (variable resistor) by Michal Rinott
 modified on 8 Nov 2013 by Scott Fitzgerald
*/

#include <Servo.h>

Servo myservo;   // create servo object to control a servo
int potpin = 0;  // analog pin used to connect the joystick
int val;         // variable to read the value from the analog pin

void setup() {
  myservo.attach(9);  // attaches the servo on pin 9 to the servo object
}

void loop() {
  val = analogRead(potpin);        // reads the value of the joystick (value between 0 and 1023)
  val = map(val, 0, 1023, 0, 45);  // scale it to use it with the servo (0 to 45 instead of the full 0 to 180)
  myservo.write(val);              // sets the servo position according to the scaled value
  delay(15);                       // waits for the servo to get there
}
Step 5: Step 4: the Motor Code
On the motor code, I added 1 motor to the code so it can run 2 motors at once. If you want to get rid of this, take out everything that says:

int pinI3=12;
int pinI4=13;
int speedpinB=10;
pinMode(pinI3,OUTPUT);
pinMode(pinI4,OUTPUT);
pinMode(speedpinB,OUTPUT);

and

digitalWrite(pinI4,LOW);
digitalWrite(pinI3,HIGH);
Motor Code:
#include "MotorDriver.h"
const int POT_PIN = A0;
const int POT_PIN2 = A1;
int motorSpeed = 0;
int potVal = 0;
int pinI1=8;      //define I1 interface
int pinI2=11;     //define I2 interface
int pinI3=12;
int pinI4=13;
int speedpinA=9;  //enable motor A
int speedpinB=10;
void setup() {
  Serial.begin(9600);
  TCCR1B = TCCR1B & 0b11111000 | 0x01;
  pinMode(pinI1,OUTPUT);
  pinMode(pinI2,OUTPUT);
  pinMode(pinI3,OUTPUT);
  pinMode(pinI4,OUTPUT);
  pinMode(speedpinA,OUTPUT);
  pinMode(speedpinB,OUTPUT);
}
void loop() {
  potVal = analogRead(POT_PIN);
  potVal = analogRead(POT_PIN2);

  motorSpeed = map(potVal, 0, 1023, 0, 255);

  Serial.print(potVal);
  Serial.print(motorSpeed);
  Serial.println();

  analogWrite(speedpinA, motorSpeed);
  analogWrite(speedpinB, motorSpeed);
  digitalWrite(pinI2,LOW);  //turn DC Motor A move anticlockwise
  digitalWrite(pinI1,HIGH);
  digitalWrite(pinI4,LOW);
  digitalWrite(pinI3,HIGH);
}
|
https://www.instructables.com/How-to-Make-a-Wired-Rc-Car-Using-an-Arduino/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Building a Real-time Short News App using HuggingFace Transformers and Streamlit
This article was published as a part of the Data Science Blogathon.
Introduction
News apps are one of the most downloaded apps and also they have huge traffic. Everyone is interested in knowing about the things happening in the world. But they may not have the time to go through those lengthy news articles and they may like to know the crux of the article without missing details. The latest developments in the field of artificial intelligence have made such a thing reality. Today people can read a summary of an entire news article in just two or three lines and understand all the details about the article.
Text Summarization is one such task in Natural Language Processing that can enable us to build such short news summaries. There are many famous apps like Inshorts that leverage Artificial Intelligence to deliver short news articles in their app.
In this article, we shall see how to build such an app using Streamlit and HuggingFace transformers and we will also deploy that app on stream cloud. To fetch news into our app we will use Free News API by Newscatcher.
Overview
- Free News API
- Newspaper3k
- HuggingFace Transformers
- Streamlit
- Application setup
- Building our Application
- Testing
- Deploying
- Conclusion
Free News API
Free News API is provided by Newscatcher. It helps us fetch live news based on several input parameters. Unlike many news APIs available on the internet, it is free to use. It aggregates news from over 60,000 news websites, with up to 1,500,000 news articles daily. The basic version of this API has limited ability: we cannot fetch news based on country or a given category, but we can make unlimited API calls. In this project, we will use the basic version as it is free.
Newspaper3k for extracting News Articles
Newspaper3k is a Python library for extracting and curating news articles from the internet. It's a very useful library for dealing with news article links and extracting all the metadata of a news article. In this project, we will use this library to get the news article text.
HuggingFace Transformers for Summarizing News Articles
We will use the transformers library of HuggingFace. This library provides a lot of use cases like sentiment analysis, text summarization, text generation, question & answer based on context, speech recognition, etc.
We will utilize the text summarization ability of this transformer library to summarize news articles.
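As a small standalone sketch of how the summarization pipeline is used later in the app (the placeholder text is only illustrative):

from transformers import pipeline

# Downloads a default summarization model the first time it runs
summarizer = pipeline("summarization")

article_text = "..."  # replace with the full text of a news article
summary = summarizer(article_text)[0]["summary_text"]
print(summary)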
Streamlit for deploying the News App
Streamlit is an open-source Python library for quickly building data science and machine learning apps. We can use it to prototype our data science apps rapidly, and it is also very easy to learn. It is good to have this skill to test our app before we take it to production. We can also build an analytics dashboard using this library. Streamlit also offers cloud services where we can deploy our apps. In this project, we will deploy our app on the Streamlit cloud.
Application Setup
Our application works as per the process shown in the below figure
In other words,
- Get news from the internet using Free News API
- Extract the news link and send it to the newspaper3k library to download the news article
- Pass the resulting article through the transformers pipeline to get the summarized version of the article.
- Finally, display the News title and summarized article to the user in a streamlit UI
Let’s start building our app!!
Building our News Application for Discord
First, let’s install all necessary libraries
pip install streamlit
pip install transformers
pip install requests
(json is part of the Python standard library, so it does not need to be installed separately)
pip install newspaper3k
Now we import all the installed libraries as follows
import streamlit as st
from transformers import pipeline
import json
import requests
from newspaper import Article
Before we code our app, we need an API key to fetch news links based on our search from the internet. As discussed earlier, we will be using the Free News API to get those links. You can get an API key from this link; follow the steps given there. If you have doubts, please comment below so that I can clear them up.
Now that we have installed and imported all necessary libraries and also got our API key, we code our Streamlit app in the following way. The final code looks like below –
import streamlit as st
import json
import requests
from newspaper import Article
from transformers import pipeline

st.set_page_config(page_title='Short News App', layout='wide', initial_sidebar_state='expanded')
st.title('Welcome to Short News App \n Tired of reading long articles? This app summarizes news articles for you and gives you short crispy to the point news based on your search \n (This is a demo app and hence is deployed on a platform with limited computational resources. Hence the number of articles this app can fetch is limited to 5)')
summarizer = pipeline("summarization")

article_titles = []
article_texts = []
article_summaries = []

def run():
    with st.sidebar.form(key='form1'):
        search = st.text_input('Search your favorite topic:')
        submitted = st.form_submit_button("Submit")
    if submitted:
        try:
            url = ""  # the Free News API endpoint URL (elided in the original text)
            querystring = {"q": search, "lang": "en", "page": 1, "page_size": 5}
            headers = {'x-rapidapi-host': "free-news.p.rapidapi.com",
                       'x-rapidapi-key': "your_api_key"}
            response = requests.request("GET", url, headers=headers, params=querystring)
            response_dict = json.loads(response.text)
            links = [response_dict['articles'][i]['link'] for i in range(len(response_dict['articles']))]
            for link in links:
                news_article = Article(link, language='en')
                news_article.download()
                news_article.parse()
                article_titles.append(news_article.title)
                article_texts.append(news_article.text)
            for text in article_texts:
                article_summaries.append(summarizer(text)[0]['summary_text'])
        except:
            print("Try with new search")
    for i in range(len(article_texts)):
        st.header(article_titles[i])
        st.subheader('Summary of Article')
        st.markdown(article_summaries[i])
        with st.expander('Full Article'):
            st.markdown(article_texts[i])

if __name__ == '__main__':
    run()
Explanation of the above code
- First, we import all necessary libraries.
- Then we set the page configuration of our app using st.set_page_config(). We give all parameters as shown in the above code.
- Next, we give our app a title using st.title() as shown in the code. You can give your own title and explanation as shown in the above code.
- Now we define three empty lists to store our article titles, article texts, and their corresponding summaries for final display. Next, we create a sidebar in our app for searching the desired topic. We create this search form using st.sidebar.form() of Streamlit as shown in the above code, and we also create a submit button using st.form_submit_button().
- So now if the user enters a search term and clicks submit button we have to fetch news articles and display their summaries. We do that as follows –
- First, we get our API URL.
- We give our query parameters like search term, language, number of pages, and number of articles per page.
- Create headers for API as shown code.
- Using requests library we fetch data from the internet using our query parameters and headers.
- We convert the request object from the above step into a dictionary using JSON.
- Next, we extract the links from the dictionary metadata we obtained from the internet using requests.
- For each link we have, we use the newspaper3k library to get the article title and article text and append them to the corresponding empty lists we defined earlier.
- Now that we have article titles, article texts we get our summaries for each article using transformers summarization pipeline as shown in the above code.
- Now we have article titles, article texts, and their corresponding summaries. So we display the article using st.header() and we display the summary of the article using st.markdown(). Before that we create a heading ‘Summary of Article’ and then we display the news summary under this.
- Finally, if the user wants to read the full article, we also give an option to look at the full article using st.expander(). This expander widget of streamlit hides the content in it and displays the content only when the user clicks to expand.
Testing
We built our app. Save the code in a .py file and open the terminal.
Type streamlit run your_app_name.py in the terminal and press enter.
You will see a new tab in your browser with your app.
Deploying
We built and tested our app. It’s time to deploy it. We will deploy this on Streamlit cloud. Streamlit offers to host three apps for free on its cloud.
Go to this link of Streamlit sharing and create an account if you don’t have one already. After you create an account and sign in you will see a web page with the option ‘New App’. Click on it.
It will ask you to connect to a Github repository. So create a Github repository of your app and upload the .py file we just created. We also need a requirements.txt. So create a file with the name ‘requirements.txt’ and type the following in that file,
streamlit
transformers
tensorflow
requests
newspaper3k
save the file and commit changes to your repository. Now come to the Streamlit sharing website and connect your newly created Github repository and click deploy. Streamlit does the rest of the work for us and deploys our app.
Finally, our app will look like this –
You can search the news for your favorite celebrity and get a summarized version of each article.
Try this app I created here Short News App · Streamlit
Conclusion
We successfully built our short news app leveraging AI capabilities. AI has made our life simple. More features can be added to this app. Try it.
Read my other articles here –
- Live Twitter Sentiment Analyzer with Streamlit, Tweepy and Huggingface (analyticsvidhya.com)
- NeatText Library | Pre-processing textual data with NeatText library (analyticsvidhya.com)
|
https://www.analyticsvidhya.com/blog/2021/11/building-a-real-time-short-news-app-using-huggingface-transformers-and-streamlit/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
XSIM
Description
XSIM provides a near cycle-accurate model of systems built from one or more xCORE devices. Using the simulator, you can output data to VCD files that can be displayed in standard trace viewers such as GTKWave, including a processor’s instruction trace and machine state. Loopbacks can also be configured to model the behavior of components connected to XMOS ports and links.
To run your program on the simulator, enter the following command:
xsim <binary>
To launch the simulator from within the debugger, at the GDB prompt enter the command:
connect -s
You can then load your program onto the simulator in the same way as if using a development board.
Options
Overall Options
--args <xe-file> <arg1> <arg2> ... <argn>
Provides an alternative way of supplying the XE file which also allows command line arguments to be passed to a program.
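For example, assuming an application binary app.xe that reads two command-line arguments, the simulator might be invoked along these lines (the binary name and arguments here are only illustrative):

xsim --args app.xe first-arg second-arg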
--plugin <name> <args>
Loads a plugin DLL. The format of args is determined by the plugin; if args contains any spaces, it must be enclosed in quotes.
--stats
On exit, prints the following:
A breakdown of the instruction counts for each logical core.
The number of data and control tokens sent through the switches.
Warning Options
--warn-resources
Prints (on standard error) warning messages for the following:
A timed input or output operation specifies a time in the past.
The data in a buffered port’s transfer register is overwritten before it is input by the processor.
--warn-stack
Turns on warnings about possible stack corruption.
xSIM prints a warning if one XC task attempts to read or write to another task's workspace. This can happen if the stack space for a task is specified using either #pragma stackfunction or #pragma stackcalls.
Tracing Options
--trace
-t
Turns on instruction tracing for all tiles (see XSIM Trace output).
--vcd-tracing <args>
Enables signal tracing. The trace data is output in the standard VCD file format.
If <args> contains any spaces, it must be enclosed in quotes. Its format is:

[global-options] <-tile name <trace-options>>
The global options are:
-pads
Turns on pad tracing.
-o <file>
Places output in <file>.
The trace options are specific to the tile associated with the XN core declaration name, for example tile[0].
The trace options are:
-ports
Turns on port tracing.
-ports-detailed
Turns on more detailed port tracing.
-cycles
Turns on clock cycle tracing.
-clock-blocks
Turns on clock block tracing.
-cores
Turns on logical core tracing.
-instructions
Turns on instruction tracing.
To output traces from different nodes, tiles or logical cores to different files, this option can be specified multiple times.
For example, the following command configures the simulator to trace the ports on tile[0] to the file trace.vcd.
xsim a.xe --vcd-tracing "-o trace.vcd -start-disabled -tile tile[0] -ports"
Tracing by the VCD plugin can be enabled and disabled using the _traceStart() and _traceStop() syscalls. The -start-disabled argument disables the vcd tracing from the start, allowing the user to enable/disable only those sections of code where tracing is desired. For example:
#include <xs1.h>
#include <syscall.h>

port p1 = XS1_PORT_1A;

int main() {
  p1 <: 1;
  p1 <: 0;
  _traceStart();
  p1 <: 1;
  p1 <: 0;
  _traceStop();
  p1 <: 1;
  p1 <: 0;
  return 0;
}
Loopback Plugin Options
The XMOS Loopback plugin configures any two ports on the target platform to be connected together. The format of the arguments to the plugin is:
-pin <package> <pin>
Specifies the pin by its name on a package datasheet. The value of package must match the Id attribute of a Package node in the XN file used to compile the program.
-port <name> <n> <offset>
Specifies n pins that correspond to a named port. The value of name must match the Name attribute of a Port node in the XN file used to compile the program.
Setting offset to a non-zero value specifies a subset of the available pins.
-port <tile> <p> <n> <offset>
Specifies n pins that are connected to the port p on a tile. The value of tile must match the Reference attribute of a Tile node in the XN file used to compile the program. p can be any of the port identifiers defined in <xs1.h>. Setting offset to a non-zero value specifies a subset of the available pins.
The plugin options are specified in pairs, one for each end of the connection. For example, the following command configures the simulator to loopback the pin connected to port XS1_PORT_1A on tile[0] to the pin defined by the port UART_TX in the program.
xsim uart.xe --plugin LoopbackPort.dll '-port tile[0] XS1_PORT_1A 1 0 -port UART_TX 1 0'
xSCOPE Options
--xscope <args>
Enables xSCOPE tracing.
If <args> contains any spaces, it must be enclosed in quotes. One of the following 2 options is mandatory:
-offline <filename>
Runs with xSCOPE in offline mode, placing the xSCOPE output in the given file.
-realtime <URL:port>
Runs with xSCOPE in realtime mode, sending the xSCOPE output in the given URL:port.
The following argument is optional:
-limit <num records>
Limits the xSCOPE output records to the given number.
For example, the following will run xSIM with xSCOPE enabled in offline mode:
xsim app.xe --xscope "-offline xscope.xmt"
For example, the following will run xSIM with xSCOPE enabled in realtime mode:
xsim app.xe --xscope "-realtime localhost:12345"
|
https://www.xmos.ai/documentation/XM-014363-PC-4/html/tools-guide/tools-ref/cmd-line-tools/xsim-manual/xsim-manual.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
It would be nice if there was a function:
#include <cctk.h>

const cFunctionData * CCTK_QueryScheduledFunction(const cGH * cctkGH);

const cFunctionData * func = CCTK_QueryScheduledFunction(cctkGH);
printf("Currently running: %s::%s\n", func->thorn, func->routine);
to find the function most recently called via CCTK_CallFunction. In the simplest implementation, CCTK_CallFunction() (in main/ScheduleInterface.c) would simply store its attribute argument in a global variable for later retrieval. A more complete implementation might keep a stack of called functions (in case CallFunction can be called recursively) or store the information in cctkGH (though CallFunction does not take cctkGH as an argument).
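A minimal sketch of the global-variable approach described above might look like the following (this is not the actual flesh code; the save/restore of the previous value is just one way of coping with recursive calls):

static const cFunctionData *current_function = NULL;

int CCTK_CallFunction(void *function, cFunctionData *attribute, void *data)
{
  const cFunctionData *previous = current_function;
  current_function = attribute;   /* remember which scheduled function is running */
  /* ... invoke the scheduled function as before ... */
  current_function = previous;    /* restore the previous value */
  return 0;
}

const cFunctionData *CCTK_QueryScheduledFunction(const cGH *cctkGH)
{
  (void)cctkGH;                   /* unused in this minimal version */
  return current_function;
}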
I currently have a hacked version of the first option running to find out who is calling CarpetReduce in local mode. But it might be useful also in eg. the interpolator calls (as in "AEILocalInterpolator: point foo out of bounds").
This function is a very good idea. The suggested implementation (using a single global variable) is fine.
I would call the function CCTK_ScheduleQueryCurrentFunction, which follows more closely the naming scheme used elsewhere in the flesh.
attached please find a patch to the flesh implementing CCTK_ScheduleQueryCurrentFunction, as well as documentation updates for it.
Ok to apply (need two "yes" votes since it is the flesh)?
Yes.
Two comments:
1) trivial typo: there -> their
2) Why does CCTK_ScheduleQueryCurrentFunction take an argument (GH)? It does not use it.
3) Is it really necessary to abort when CCTK_CallFunction is called recursively? This is not the case right now and would be a side-effect of this patch.
I think it should take the GH argument to give us the freedom to store the queried information later elsewhere and not in a global variable. We will then need a data structure such as the GH to find this information.
However: if LangNone "should never happen", then this should lead to an abort, not a fall through.
Frank: I am not sure I understand your comment about it aborting. The warning in CCTK_ScheduleQueryCurrentFunction is level CCTK_WARN_PICKY (level 3), which should not abort as far as I know. The reason for passing cctkGH is as Erik outlined, though I agree that this can be a problem if one is within a function that does not have access to cctkGH. I did not check if there were any such that would be candidates.
The "LangNone" was put there to avoid a compiler warning. It currently reproduces the existing behaviour which is also non-fatal.
Warning with CCTK_WARN_PICKY is probably ok - although calling it recursively is either ok (and then it shouldn't warn at all) or wrong (and then is should probably abort the run).
Concerning GH: I think it would be nice not to have to pass GH to every function where I would like to use this function (on the other hand, using a global for that wouldn't be a big deal either). I do understand the idea of storing the information about the currently executed function within GH though. So, in the end: yes, please commit. However, please use CCTK_ATTRIBUTE_UNUSED for GH until we do use it.
applied.
|
https://bitbucket.org/einsteintoolkit/tickets/issues/817
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Hi
I would like to implement custom functionality based on user clicks/keypresses in the slice views. I would like to observe the events separately for clicks/keypresses in the red and yellow slices. For this, I was implementing the following code. However, the function does not seem to be called when I press the left mouse button or press a key. Could you please let me know the reason? Is there a better way to independently detect key press events on the red and yellow slices?
def testMethod(caller, event):
    print("Event detected")

color = "Red"
sliceNode = slicer.mrmlScene.GetNodeByID("vtkMRMLSliceNode%s" % color)
sliceNode.AddObserver(vtk.vtkCommand.LeftButtonPressEvent, testMethod)
sliceNode.AddObserver(vtk.vtkCommand.KeyPressEvent, testMethod)
sliceNode.AddObserver(vtk.vtkCommand.ModifiedEvent, testMethod)
Thanks
Priya
|
https://discourse.slicer.org/t/vtkmrmlslicenode-observe-events/3480
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
A string is described as a collection of characters. String objects are immutable in Java, which means they can't be modified once they've been created. A popular question asked in Java interviews is how to find a substring of a string. So, I'll show you how substring works in Java.
We’ll be covering the following topics in this tutorial:
Substring in Java: What is a substring in Java?
Programmers often need to retrieve an individual character or a group of characters (substring) from a string. For example, in a word processing program, a part of the string is copied or deleted. While the charAt() method returns only a single character from a string, the substring() method in the String class can be used to obtain a substring from a string.
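To make the distinction concrete, a small example contrasting the two methods:

public class CharVsSubstring {
    public static void main(String[] args) {
        String s = "Welcome";
        char c = s.charAt(0);          // single character: 'W'
        String part = s.substring(3);  // group of characters: "come"
        System.out.println(c);
        System.out.println(part);
    }
}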
Substring in Java: Different methods under substring
There are two separate methods under the substring() method. They are as follows:
• String substring(int begIndex)
• String substring(int beginIndex, int endIndex)
String substring(int beginIndex): This method returns a new string that is a substring of the invoking string. The substring returned contains a copy of the characters beginning at the specified index beginIndex and extending to the end of the string.
Syntax:
public String substring(int begIndex)
Note: The index begins with ‘0,’ which corresponds to String’s first character.
Let’s take a look at an example.
String s1 = "Welcome";
s1.substring(3); // returns substring "come"
String substring(int beginIndex, int endIndex): This method is another version of the previous method. It also returns a substring that begins at the specified beginIndex and extends to the character at index endIndex-1.
Syntax:
public String substring(int begIndex, int endIndex)
Let’s take a look at an example.
String s1 = "Welcome";
s1.substring(3,6); // returns substring "com"
public class SubstringMethods {
    public static void main(String[] args) {
        String s1 = "Welcome";
        System.out.println("s1.substring(3) = " + s1.substring(3));
        System.out.println("s1.substring(3,6) = " + s1.substring(3, 6));
    }
}
class StrSubString {
    public static void main(String args[]) {
        String k = "Hello Dinesh";
        String m = "";
        m = k.substring(6, 12);
        System.out.println(m);
    }
}
|
https://ecomputernotes.com/java/jarray/string-substring
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Thanks to all the folks who showed interest in this little XPath puzzler published here a few weeks ago. Some asked to see the dataset, but I’m not able to release it at this time (but ask me again in 3 months).
Turns out it was a combination of two bugs, one mine, one somebody else’s. Careful observers noted that I wasn’t using any namespace prefixes in the XPath, and since I did specify that it was XPath 1.0, that technically rules out XHTML as the source language. Like nearly all XML I work with these days, the first thing I do is strip off the namespaces to make it easier to work with. Bug #1 was that in a few cases, the namespaces didn’t get stripped.
Bug #2 was in the XPath engine itself. Which one? Uh, whatever one ships with the "XPath" plugin for JEdit. It's hard to tell directly, but I think it might be an older version of Xalan-J. In the case of the expression //meta, it properly located only those elements part of no namespace. But in the case of //meta/@property, it was including all the nodes that would have been selected by //*[local-name(.)='meta']/@property. Hence, a larger number of returned nodes.
Confusing? You bet! -m
P.S. WebPath would not have this problem, since in the default mode it matches local-names only to begin with.
|
http://dubinko.info/blog/2007/12/31/xpath-puzzler-solution/
|
crawl-001
|
en
|
refinedweb
|
mdubinko_afc
That's 'away from country'.
Minimal updates for a while: One week in Southern France, for a W3C meeting. Yes, work can be pretty demanding sometimes. -m
It's beta, but it's here. Will check out soon.
Media coverage with a too-cute headline. The article is pretty blunt in places: "Analysts see InfoPath as part of Microsoft's strategy for locking businesses in to Microsoft enterprise products." Huh. -m
Now online at. Powered by Python, libxml2, and libxslt.
Includes a bookmarklet and several clickable examples, both valid and invalid. -m
This will be announced Monday. You, my faithful seven blog readers, get early notice.
The official home for the GFDL'd O'Reilly XForms Essentials is moving to XForms Institute. Now in "chunked" pages, so you don't have to take a massive megabyte-hit just to find one quick phrase. The huge version, and all the sources, etc. will of course remain available.
If you notice any problems, especially if they look XSLT-inflicted, please let me know. -m
Eric Raymond read Sun's statement that "the open source model is our friend", and seized the opportunity to fire off an open letter to Sun the very next day. He should have run the letter through a few asbestos filters first, though.
Think like a CEO for a minute: How would you react if someone wrote you an open letter that said your strategy is "curiously inconsistent, spotty in ways which suggests that [your company] is confused"? A CEO carefully crafts and values a "strategy" just as much as a hacker carefully crafts and values a favorite piece of code. The key to getting other people to do things is to make them want to do it, not to browbeat them with pompous language. ("Mr. CEO, tear down that wall")
Compared to the ranting, Simon Phipps' (Sun Technology Evangelist) response is positively levelheaded.
Moral: 1. Don't write angry. 2. Get a 2nd opinion before you write a widely-distributed open letter. -m
Described here. Now, to figure out how to always win from a vending machine. -m
Full description here. The source release is up too. Nice. -m
To go with cleaner namespaces, and the bigger issue of multi-namespace documents, one needs a better way to nest XML documents. One proposal uses a "Doctype Instruction", which is technically an XML processing instruction, but fills a similar role to the DOCTYPE declaration.
The Doctype Instruction can appear before the opening tag of the root element, and also at any point where a sub-document is nested inside a document. It is defined as follows:
The target of the PI is the literal "DOCTYPE".
The remainder of the PI is the cleaner namespace prefix of the document, (Note that unlike a DOCTYPE declaration, it does not have to be, and likely will never be, the same as the root element. Determining the root element is easy enough; that information doesn't need to be duplicated), the literal "NS" followed by a quoted string, containing a space-separate list of cleaner namespaces critical to the processing of the document. The namespace of the root element need not be repeated in the string, which may be empty.
For example, here is a document that uses XHTML+XForms+XML Events+SVG, but the SVG is not critical to processing of the document:
<?DOCTYPE org.w3.html NS "org.w3.xforms org.w3.xml-events"?>
<?ns-import org.w3.html?>
<?ns-import org.w3.svg?>
<?ns-import org.w3.xforms?>
<?ns-import org.w3.xml-events?>
...
Another example, here is an RDF document representing an RSS 1.0 feed:
<?DOCTYPE org.w3.rdf NS "org.purl.dc1.1"?>
<?ns-import org.w3.rdf?>
<?ns-import org.purl.dc1.1?>
<?ns-import com.xmlns.foaf0.1?>
...
Unlike a traditional DOCTYPE declaration, the Doctype Instruction can reoccur inside the document.
<p>Refer to figure 1-1 for details</p>
<?DOCTYPE org.w3.svg NS ""?>
<?ns-import org.w3.svg?>
<svg>...</svg>
<p>Figure 1-1</p>
-m
This is a variation on Tom Bradford's clean namespaces pattern.
Instead of a there-or-not PI <?ns clean?>, each namespace gets a separate 'import' statement. For example, author-friendly XHTML+SVG+XForms would look like this:
<?using org.w3.html?>
<?using org.w3.svg?>
<?using org.w3.xforms?>
<?using org.w3.xml-events?>
<?using org.w3.xlink?>
<html>
<head>
<title>Virtual Library</title>
</head>
<body>
<a href=">a link</a>
</body>
</html>
The idea is that an application that has foreknowledge of a namespace, say 'org.w3c.html', can notice the unique string and recognize elements accordingly. A 'generic XML' application, not having foreknowledge of a given namespace, would need to access a machine-readable description of what's in the namespace (details not given here). Namespaces are declared once, at the top of a given document, with no arbitrary scoping rules where things can get redefined under your feet.
Instead of declaring a prefix, it can be spelled out on each use, like <org.w3.html:body>. Since prefixes are based on DNS names, they are persistent and unique, but carry none of the URI baggage. This gives two-part names, and avoids the URI vs. QName/three-part name permathread, as well as the battle for 'what goes at the end of the namespace'.
Look at the example again. Does this look like something you could sit down at a blank screen and bang out without peeking at a reference? Computers are getting more and more powerful. The people behind the keyboards aren't. More work needs to be done in leveraging plain-text data formats for mere mortal human authors. -m
Announcement - Good job Claus!
We are proud to announce that as of today, DENG is opensource software
distributed under GPL. You can now download all Flash Actionscript sources
from sourceforge.net via Anonymous CVS:
cvs -d:pserver:[email protected]:/cvsroot/dengmx login
cvs -z3 -d:pserver:[email protected]:/cvsroot/dengmx co src
DENG is a modular class library written in OOP Actionscript 1, turning the
Macromedia Flash Player 6 into a webbased, zero-install, cross
browser/platform, modular and standards compliant XML/CSS2 Browser.
Supported are subsets of CSS2, CSS3, XHTML, XForms, XFrames and SVG.
Project page on Sourceforge (for a laugh, try /projects/deng and see what has already claimed the more obvious name)
This might be enough to convince me to pick up Flash MX. Are there any open source compilers? I don't care about the GUI (having jEdit), just something to turn a pile of *.as files into a *.swf. Mail me if you know of anything. -m
A kernel of an idea; I need to develop it more.
'Standard practice' of x.y.z versioning, where x is major, y is minor, and z is sub-minor (often build number) is not best practice. If you look at how systems actually evolve over time, a more 'organic' approach is needed.
For example, look at how browser user agent strings have evolved. Take this, for example:
Mozilla/4.0 (compatible; MSIE 6.0; MSIE 5.5; Windows 98) Opera 7.02 [en]
Wow, if detection code is looking for "Mozilla" or "Mozilla/4.0", or "MSIE" or "MSIE 6.0" or "Opera" it will hit. If you look at the kind of code to determine what version of Windows is running, or the exact make and model of processor, you will see a similar pattern.
Since this is the way of nature, don't fight it with artificial major.minor versioning. Embrace organically growing versions.
The first version of anything should be "1." (letters will work in practice too). All sample code, etc. that checks versions must stop at the first dot character; anything beyond that is on a "needs-to-know" basis.
Now, as long as compatible revisions keep coming out, the version string gets longer and longer. This is the key benefit, and why fixed-field version numbers are so inflexible. (and why you get silly things like Samba reporting itself as "Windows 4.9") -m
Thanks to Claus, XForms Institute is now running the latest DENG applet, which supports the final XForms namespace. All the example files there have been updated. (Without a hitch, except that jEdit 4.2 beta insisted on writing silly DOS-style newlines, which the view-source program treated as double-spacing). Formal announcement Monday. -m
I created a GFDL community on Orkut. If you've ever written anything currently released under the GNU Free Documentation License, join up! -m
Find out using the DuCharme method.
[email protected]
|
http://dubinko.info/blog/2004_02_01_archive.html
|
crawl-001
|
en
|
refinedweb
|
The QImage class provides a hardware-independent pixmap representation with direct access to the pixel data. More...
#include <qimage.h>
List of all member functions.
It is one of the two classes Qt provides for dealing with images, the other being QPixmap. QImage is designed and optimized for I/O and for direct pixel access/manipulation. QPixmap is designed and optimized for drawing. There are (slow) functions to convert between QImage and QPixmap: QPixmap::convertToImage() and QPixmap::convertFromImage().
An image has the parameters width, height and depth (bits per pixel, bpp), a color table and the actual pixels. QImage supports 1-bpp, 8-bpp and 32-bpp image data. 1-bpp and 8-bpp images use a color lookup table; the pixel value is a color table index.
32-bpp images encode an RGB value in 24 bits and ignore the color table. The most significant byte is used for the alpha buffer.
An entry in the color table is an RGB triplet encoded as a uint. Use the qRed(), qGreen() and qBlue() functions (qcolor.h) to access the components, and qRgb to make an RGB triplet (see the QColor class documentation).
1-bpp (monochrome) images have a color table with at most two colors. There are two different formats: big endian (MSB first) or little endian (LSB first) bit order. To access a single bit you must do some bit shifting:
QImage image;
// sets bit at (x,y) to 1
if ( image.bitOrder() == QImage::LittleEndian )
    *(image.scanLine(y) + (x >> 3)) |= 1 << (x & 7);
else
    *(image.scanLine(y) + (x >> 3)) |= 1 << (7 - (x & 7));
If this looks complicated, it might be a good idea to convert the 1-bpp image to an 8-bpp image using convertDepth().
8-bpp images are much easier to work with than 1-bpp images because they have a single byte per pixel:
QImage image;
// set entry 19 in the color table to yellow
image.setColor( 19, qRgb(255,255,0) );
// set 8 bit pixel at (x,y) to value yellow (in color table)
*(image.scanLine(y) + x) = 19;
32-bpp images ignore the color table; instead, each pixel contains the RGB triplet. 24 bits contain the RGB value; the most significant byte is reserved for the alpha buffer.
QImage image;
// sets 32 bit pixel at (x,y) to yellow.
uint *p = (uint *)image.scanLine(y) + x;
*p = qRgb(255,255,0);
On Qt/Embedded, scanlines are aligned to the pixel depth and may be padded to any degree, while on all other platforms, the scanlines are 32-bit aligned for all depths. The constructor taking a uchar* argument always expects 32-bit aligned data. On Qt/Embedded, an additional constructor allows the number of bytes-per-line to be specified.
QImage supports a variety of methods for getting information about the image, for example, colorTable(), allGray(), isGrayscale(), bitOrder(), bytesPerLine(), depth(), dotsPerMeterX() and dotsPerMeterY(), hasAlphaBuffer(), numBytes(), numColors(), and width() and height().
Pixel colors are retrieved with pixel() and set with setPixel().
QImage also supports a number of functions for creating a new image that is a transformed version of the original. For example, copy(), convertBitOrder(), convertDepth(), createAlphaMask(), createHeuristicMask(), mirror(), scale(), smoothScale(), swapRGB() and xForm(). There are also functions for changing attributes of an image in-place, for example, setAlphaBuffer(), setColor(), setDotsPerMeterX() and setDotsPerMeterY() and setNumColors().
Images can be loaded and saved in the supported formats. Images are saved to a file with save(). Images are loaded from a file with load() (or in the constructor) or from an array of data with loadFromData(). The lists of supported formats are available from inputFormatList() and outputFormatList().
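As a brief illustration of the loading, pixel access and saving functions mentioned above (this sketch assumes a 32-bpp image and that PNG support is available; for 8-bpp images setPixel() expects a color-table index rather than an RGB value):

QImage image;
if ( image.load( "input.png" ) ) {           // format is guessed from the file header
    QRgb rgb = image.pixel( 0, 0 );          // color of the top-left pixel
    image.setPixel( 0, 0, qRgb(255, 0, 0) ); // overwrite it with red
    image.save( "output.png", "PNG" );       // save in an explicit format
}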
Strings of text may be added to images using setText().
The QImage class uses explicit sharing, similar to that used by QMemArray.
New image formats can be added as plugins.
See also QImageIO, QPixmap, Shared Classes, Graphics Classes, Image Processing Classes, and Implicitly and Explicitly Shared Classes.
This enum type is used to describe the endianness of the CPU and graphics hardware.
The functions scale() and smoothScale() use different modes for scaling the image. The purpose of these modes is to retain the ratio of the image if this is required.
See also isNull().
Using this constructor is the same as first constructing a null image and then calling the create() function.
See also create().
Using this constructor is the same as first constructing a null image and then calling the create() function.
See also create().
If format is specified, the loader attempts to read the image using the specified format. If format is not specified (which is the default), the loader reads a few bytes from the header to guess the file format.
If the loading of the image failed, this object is a null image.
The QImageIO documentation lists the supported image formats and explains how to add extra formats.
See also load(), isNull(), and QImageIO.
If the loading of the image failed, this object is a null image.
See also loadFromData(), isNull(), and imageFormat().
If colortable is 0, a color table sufficient for numColors will be allocated (and destructed later).
Note that yourdata must be 32-bit aligned.
The endianness is given in bitOrder.
If colortable is 0, a color table sufficient for numColors will be allocated (and destructed later).
The endianness is specified by bitOrder.
Warning: This constructor is only available on Qt/Embedded.
This function is slow for large 16-bit (Qt/Embedded only) and 32-bit images.
See also isGrayscale().
Returns the bit order for the image.
If it is a 1-bpp image, this function returns either QImage::BigEndian or QImage::LittleEndian.
If it is not a 1-bpp image, this function returns QImage::IgnoreEndian.
See also depth().
Returns a pointer to the first pixel data. This is equivalent to scanLine(0).
See also numBytes(), scanLine(), and jumpTable().
Example: opengl/texture/gltexobj.cpp.
Returns the number of bytes per image scanline. This is equivalent to numBytes()/height().
See also numBytes() and scanLine().
Returns the color in the color table at index i. The first color is at index 0.
A color value is an RGB triplet. Use the qRed(), qGreen() and qBlue() functions (defined in qcolor.h) to get the color value components.
See also setColor(), numColors(), and QColor.
Example: themes/wood.cpp.
Returns a pointer to the color table.
See also numColors().
Returns *this if the bitOrder is equal to the image bit order, or a null image if this image cannot be converted.
See also bitOrder(), systemBitOrder(), and isNull().
The depth argument must be 1, 8, 16 (Qt/Embedded only) or 32.
Returns *this if depth is equal to the image depth, or a null image if this image cannot be converted.
If the image needs to be modified to fit in a lower-resolution result (e.g. converting from 32-bit to 8-bit), use the conversion_flags to specify how you'd prefer this to happen.
See also Qt::ImageConversionFlags, depth(), and isNull().
If the image needs to be modified to fit in a lower-resolution result (e.g. converting from 32-bit to 8-bit), use the conversion_flags to specify how you'd prefer this to happen.
Note: currently no closest-color search is made. If colors are found that are not in the palette, the palette may not be used at all. This result should not be considered valid because it may change in future implementations.
Currently inefficient for non-32-bit images.
See also Qt::ImageConversionFlags.
See also detach().
Returns a deep copy of a sub-area of the image.
The returned image is always w by h pixels in size, and is copied from position x, y in this image. In areas beyond this image pixels are filled with pixel 0.
If the image needs to be modified to fit in a lower-resolution result (e.g. converting from 32-bit to 8-bit), use the conversion_flags to specify how you'd prefer this to happen.
See also bitBlt() and Qt::ImageConversionFlags.
Returns a deep copy of a sub-area of the image.
The returned image always has the size of the rectangle r. In areas beyond this image pixels are filled with pixel 0.
The width and height is limited to 32767. depth must be 1, 8, or 32. If depth is 1, bitOrder must be set to either QImage::LittleEndian or QImage::BigEndian. For other depths bitOrder must be QImage::IgnoreEndian.
This function allocates a color table and a buffer for the image data. The image data is not initialized.
The image buffer is allocated as a single block that consists of a table of scanline pointers (jumpTable()) and the image data (bits()).
See also fill(), width(), height(), depth(), numColors(), bitOrder(), jumpTable(), scanLine(), bits(), bytesPerLine(), and numBytes().
See QPixmap::convertFromImage() for a description of the conversion_flags argument.
The returned image has little-endian bit order, which you can convert to big-endianness using convertBitOrder().
See also createHeuristicMask(), hasAlphaBuffer(), and setAlphaBuffer().
The four corners vote for which color is to be masked away. In case of a draw (this generally means that this function is not applicable to the image), the result is arbitrary.
The returned image has little-endian bit order, which you can convert to big-endianness using convertBitOrder().
If clipTight is TRUE the mask is just large enough to cover the pixels; otherwise, the mask is larger than the data pixels.
This function disregards the alpha buffer.
See also createAlphaMask().
Returns the depth of the image.
The image depth is the number of bits used to encode a single pixel, also called bits per pixel (bpp) or bit planes of an image.
The supported depths are 1, 8, 16 (Qt/Embedded only) and 32.
See also convertDepth().
If multiple images share common data, this image makes a copy of the data and detaches itself from the sharing mechanism. Nothing is done if there is just a single reference.
See also copy().
Example: themes/wood.cpp.
Returns the number of pixels that fit horizontally in a physical meter. This and dotsPerMeterY() define the intended scale and aspect ratio of the image.
See also setDotsPerMeterX().
Returns the number of pixels that fit vertically in a physical meter. This and dotsPerMeterX() define the intended scale and aspect ratio of the image.
See also setDotsPerMeterY().
If the depth of this image is 1, only the lowest bit is used. If you say fill(0), fill(2), etc., the image is filled with 0s. If you say fill(1), fill(3), etc., the image is filled with 1s. If the depth is 8, the lowest 8 bits are used.
If the depth is 32 and the image has no alpha buffer, the pixel value is written to each pixel in the image. If the image has an alpha buffer, only the 24 RGB bits are set and the upper 8 bits (alpha value) are left unchanged.
Note: QImage::pixel() returns the color of the pixel at the given coordinates; QColor::pixel() returns the pixel value of the underlying window system (essentially an index value), so normally you will want to use QImage::pixel() to use a color from an existing image or QColor::rgb() to use a specific color.
See also invertPixels(), depth(), hasAlphaBuffer(), and create().
See also QMimeSourceFactory, QImage::fromMimeSource(), and QImageDrag::decode().
Returns TRUE if alpha buffer mode is enabled; otherwise returns FALSE.
See also setAlphaBuffer().
Returns the height of the image.
See also width(), size(), and rect().
Examples: canvas/canvas.cpp and opengl/texture/gltexobj.cpp.
The QImageIO documentation lists the guaranteed supported image formats, or use QImage::inputFormats() and QImage::outputFormats() to get lists that include the installed formats.
See also load() and save().
Note that if you want to iterate over the list, you should iterate over a copy, e.g.
QStringList list = myImage.inputFormatList(); QStringList::Iterator it = list.begin(); while( it != list.end() ) { myProcessing( *it ); ++it; }
See also outputFormatList(), inputFormats(), and QImageIO.
Example: showimg/showimg.cpp.
See also outputFormats(), inputFormatList(), and QImageIO.
If the depth is 32: if invertAlpha is TRUE, the alpha bits are also inverted, otherwise they are left unchanged.
If the depth is not 32, the argument invertAlpha has no meaning.
Note that inverting an 8-bit image means to replace all pixels using color index i with a pixel using color index 255 minus i. Similarly for a 1-bit image. The color table is not changed.
See also fill(), depth(), and hasAlphaBuffer().
For 8-bpp images, this function returns TRUE if color(i) is QRgb(i,i,i) for all indices of the color table; otherwise returns FALSE.
See also allGray() and depth().
Returns TRUE if it is a null image; otherwise returns FALSE.
A null image has all parameters set to zero and no allocated data.
Example: showimg/showimg.cpp.
Returns a pointer to the scanline pointer table.
This is the beginning of the data block for the image.
See also bits() and scanLine().
See also loadFromData(), save(), imageFormat(), QPixmap::load(), and QImageIO.
See also load(), save(), imageFormat(), QPixmap::loadFromData(), and QImageIO.
Loads an image from the QByteArray buf.
Returns a mirror of the image, mirrored in the horizontal and/or the vertical direction depending on whether horizontal and vertical are set to TRUE or FALSE. The original image is not changed.
See also smoothScale().
Returns the number of bytes occupied by the image data.
See also bytesPerLine() and bits().
Returns the size of the color table for the image.
Notice that numColors() returns 0 for 16-bpp (Qt/Embedded only) and 32-bpp images because these images do not use color tables, but instead encode pixel values as RGB triplets.
See also setNumColors() and colorTable().
Example: themes/wood.cpp.
Returns the number of pixels by which the image is intended to be offset by when positioning relative to other images.
See also operator=().
See also copy().
Sets the image bits to the pixmap contents and returns a reference to the image.
If the image shares data with other images, it will first dereference the shared data.
Makes a call to QPixmap::convertToImage().
See also operator=().
Note that if you want to iterate over the list, you should iterate over a copy, e.g.
QStringList list = myImage.outputFormatList(); QStringList::Iterator it = list.begin(); while( it != list.end() ) { myProcessing( *it ); ++it; }
See also inputFormatList(), outputFormats(), and QImageIO.
See also inputFormats(), outputFormatList(), and QImageIO.
Example: showimg/showimg.cpp.
If (x, y) is not on the image, the results are undefined.
See also setPixel(), qRed(), qGreen(), qBlue(), and valid().
Examples: canvas/canvas.cpp and qmag/qmag.cpp.
If (x, y) is not valid, or if the image is not a paletted image (depth() > 8), the results are undefined.
See also valid() and depth().
Returns the enclosing rectangle (0, 0, width(), height()) of the image.
See also width(), height(), and size().
Returns TRUE if the image was successfully saved; otherwise returns FALSE.
See also load(), loadFromData(), imageFormat(), QPixmap::save(), and QImageIO.
This function writes a QImage to the QIODevice, device. This can be used, for example, to save an image directly into a QByteArray:
QImage image; QByteArray ba; QBuffer buffer( ba ); buffer.open( IO_WriteOnly ); image.save( &buffer, "PNG" ); // writes image into ba in PNG format
If either the width w or the height h is 0 or negative, this function returns a null image.
This function uses a simple, fast algorithm. If you need better quality, use smoothScale() instead.
See also scaleWidth(), scaleHeight(), smoothScale(), and xForm().
The requested size of the image is s.
If h is 0 or negative a null image is returned.
See also scale(), scaleWidth(), smoothScale(), and xForm().
Example: table/small-table-demo/main.cpp.
If w is 0 or negative a null image is returned.
See also scale(), scaleHeight(), smoothScale(), and xForm().
Returns a pointer to the pixel data at the scanline with index i. The first scanline is at index 0.
The scanline data is aligned on a 32-bit boundary.
Warning: If you are accessing 32-bpp image data, cast the returned pointer to QRgb* (QRgb has a 32-bit size) and use it to read/write the pixel value. You cannot use the uchar* pointer directly, because the pixel format depends on the byte order on the underlying platform. Hint: use qRed(), qGreen() and qBlue(), etc. (qcolor.h) to access the pixels.
Warning: If you are accessing 16-bpp image data, you must handle endianness yourself. (Qt/Embedded only)
See also bytesPerLine(), bits(), and jumpTable().
Example: desktop/desktop.cpp.
An 8-bpp image has 8-bit pixels. A pixel is an index into the color table, which contains 32-bit color values. In a 32-bpp image, the 32-bit pixels are the color values.
This 32-bit value is encoded as follows: The lower 24 bits are used for the red, green, and blue components. The upper 8 bits contain the alpha component.
The alpha component specifies the transparency of a pixel. 0 means completely transparent and 255 means opaque. The alpha component is ignored if you do not enable alpha buffer mode.
The alpha buffer is used to set a mask when a QImage is translated to a QPixmap.
See also hasAlphaBuffer() and createAlphaMask().
Sets a color in the color table at index i to c.
A color value is an RGB triplet. Use the qRgb() function (defined in qcolor.h) to make RGB triplets.
See also color(), setNumColors(), and numColors().
Examples: desktop/desktop.cpp and themes/wood.cpp.
If the color table is expanded all the extra colors will be set to black (RGB 0,0,0).
See also numColors(), color(), setColor(), and colorTable().
If (x, y) is not valid, the result is undefined.
If the image is a paletted image (depth() <= 8) and index_or_rgb >= numColors(), the result is undefined.
See also pixelIndex(), pixel(), qRgb(), qRgba(), and valid().
Returns the size of the image, i.e. its width and height.
See also width(), height(), and rect().
For 32-bpp images and 1-bpp/8-bpp color images the result will be 32-bpp, whereas all-gray images (including black-and-white 1-bpp) will produce 8-bit grayscale images with the palette spanning 256 grays from black to white.
This function uses code based on pnmscale.c by Jef Poskanzer.
pnmscale.c - read a portable anymap and scale it.
See also scale() and mirror().
The requested size of the image is s.
See also systemByteOrder().
See also systemBitOrder().
Returns the string recorded for the keyword and language kl.
Note that if you want to iterate over the list, you should iterate over a copy, e.g.
QStringList list = myImage.textKeys(); QStringList::Iterator it = list.begin(); while( it != list.end() ) { myProcessing( *it ); ++it; }
See also textList(), text(), setText(), and textLanguages().
Note that if you want to iterate over the list, you should iterate over a copy, e.g.
QStringList list = myImage.textLanguages(); QStringList::Iterator it = list.begin(); while( it != list.end() ) { myProcessing( *it ); ++it; }
See also textList(), text(), setText(), and textKeys().
Note that if you want to iterate over the list, you should iterate over a copy, e.g.
QValueList<QImageTextKeyLang> list = myImage.textList(); QValueList<QImageTextKeyLang>::Iterator it = list.begin(); while( it != list.end() ) { myProcessing( *it ); ++it; }
See also width(), height(), and pixelIndex().
Examples: canvas/canvas.cpp and qmag/qmag.cpp.
Returns the width of the image.
See also height(), size(), and rect().
Examples: canvas/canvas.cpp and opengl/texture/gltexobj.cpp.
The transformation matrix is internally adjusted to compensate for unwanted translation, i.e. xForm() returns the smallest image that contains all the transformed points of the original image.
See also scale(), QPixmap::xForm(), QPixmap::trueMatrix(), and QWMatrix.
Copies a block of pixels from src to dst. The pixels copied from source (src) are converted according to conversion_flags if it is incompatible with the destination (dst).
sx, sy is the top-left pixel in src, dx, dy is the top-left position in dst and sw, sh is the size of the copied block.
The copying is clipped if areas outside src or dst are specified.
If sw is -1, it is adjusted to src->width(). Similarly, if sh is -1, it is adjusted to src->height().
Currently inefficient for non 32-bit images.
Writes the image image to the stream s as a PNG image, or as a BMP image if the stream's version is 1.
Note that writing the stream to a file will not produce a valid image file.
See also QImage::save() and Format of the QDataStream operators.
Reads an image from the stream s and stores it in image.
See also QImage::load() and Format of the QDataStream operators.
This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved.
|
http://doc.trolltech.com/3.3/qimage.html
|
crawl-001
|
en
|
refinedweb
|
The AlarmServer class allows alarms to be scheduled and unscheduled. More...
#include <qtopia/alarmserver.h>
List of all member functions.
Applications can schedule alarms with addAlarm() and can unschedule alarms with deleteAlarm(). When the time for an alarm to go off is reached the specified QCop message is sent on the specified channel (optionally with additional data).
Scheduling an alarm using this class is important (rather than just using a QTimer) since the machine may be asleep and needs to be woken up; the Linux kernel implements this at the kernel level to minimize battery usage while asleep.
See also QCopEnvelope and Qtopia Classes.
If this function is called with exactly the same data as a previous call the subsequent call is ignored, so there is only ever one alarm with a given set of parameters.
See also deleteAlarm().
Passing null values for when, channel, or for the QCop message, acts as a wildcard meaning "any". Similarly, passing -1 for data indicates "any".
If there is no matching alarm, nothing happens.
See also addAlarm().
This file is part of the Qtopia platform, copyright © 1995-2005 Trolltech, all rights reserved.
|
http://doc.trolltech.com/qtopia2.2/html/alarmserver.html
|
crawl-001
|
en
|
refinedweb
|
The QKeyEvent class describes a key event. More...
#include <qevent.h>
Inherits QEvent.
List of all member functions.
Sets the accept flag of the key event object.
Setting the accept parameter indicates that the receiver of the event wants the key event. Unwanted key events are sent to the parent widget.
The accept flag is set by default.
See also ignore().
Returns the ASCII code of the key that was pressed or released. We recommend using text() instead.
See also text().
Example: picture/picture.cpp.
Returns the number of single keys for this event. If text() is not empty, this is simply the length of the string.
See also QWidget::setKeyCompression().
Clears the accept flag parameter of the key event object.
Clearing the accept parameter indicates that the event receiver does not want the key event. Unwanted key events are sent to the parent widget.
The accept flag is set by default.
See also accept().
Returns TRUE if the receiver of the event wants to keep the key; otherwise returns FALSE. text() can be null (text().isNull() == TRUE), which is the case when pressing or releasing modifier keys such as Shift, Control, Alt and Meta. In these cases key() will contain a valid value.
See also QWidget::setKeyCompression().
This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved.
|
http://doc.trolltech.com/3.1/qkeyevent.html
|
crawl-001
|
en
|
refinedweb
|
task globally available to the system by specifying an instance of that task within a Jython file (usually called
__init__.py for HIPE internal modules) that is read by HIPE at startup (see My first Java HIPE task). For example:
# __init__.py file
compute = ComputeTask()
Tasks without at least an input cannot be registered and should not be used (create a parameterless function instead).
For naming tasks follow the Java conventions, that is, use camelCase and avoid underscores: reduceLight is valid, reduce_light is not. HIPE displays info log messages for invalid task names. Note that task names are used as variable names (of type Task) automatically created when starting HIPE.
In most cases, the type of the prime input is not enough to determine whether a task is applicable to a variable. Your task may only run on a SpecificProduct if it has certain contents, so you can express the check in a parameter validator:

from herschel.ia.gui.kernel import ParameterValidatorAdapter, ParameterValidationException

class MyVal(ParameterValidatorAdapter):
    def validate(self, val):
        if (val < 0 or val > 10):
            raise ParameterValidationException("Bad value " + str(val) + ", must be between 0 and 10")

And you can assign instances to your parameters:

# Jython
p.parameterValidator = MyVal()

Also, a prebuilt confirming (always returns true) validator is available in TaskParameter:

// Java
parameter.setParameterValidator(TaskParameter.TRUE_VALIDATOR);

Note how the validation logic is now within the parameter validation block, rather than in the preamble or execution block of your task. One advantage is that the execution block of your task can concentrate on the core algorithm. A validator is mandatory for the task to appear in the Applicable category of the Tasks view. If your task really is very general and applies to a given variable type with no exceptions, write a dummy validator that always accepts input values (or use TaskParameter.TRUE_VALIDATOR).
This automatic layout may not suit your task for several reasons:
Below, a hand-made
- A parameter may need several input fields (see for instance the angle parameter of the rotate task, in the figure below).
- You may only want to provide a sub-set of parameters (and leave the full set to expert users on the command line).
- You may want to organise your parameters by types
Boolean,
Integer,
Float,
Long,
Double,
String and a few more, so there is still a lot of room for improvements and contributions. You can find the generally available modifiers in the herschel.ia.gui.apps.modifier package; please consult the Javadoc of your HIPE installation. If no specific modifier fits your task parameter, the default modifier will be used. The default modifier only has a dot to drop variables. Other modifiers include an editor at the right of the dot to enter values. You can implement a custom Modifier and register it to the system to use your own editors. You can also write your specific
Modifier for one of the already available types in Task.getCustomModifiers().
- While you can always write SomeTask(param = null) on the command line, using a task dialog you will just get SomeTask(): modifiers use null to signify that they have no value, and task command generation interprets this as "not using this parameter".
- Modifiers have no notion of the optionality of parameters: if they have a valid value, they will return it. Task command generation for GUIs will not generate a parameter assignment if the value equals the default. See Task Preferences to change this default behaviour.
- Modifiers will mark as erroneous any variables incompatible with the type (dynamically), but will not reject dropping them in the dot.
- If you want your parameter to support dropping a set of variables (multiple selection in the Variables View) your parameter must be of type java.util.List (or Object). [Note that list or PyList will not work.]
- Modifiers have no notion of default values. This implies that there is no way to know, just by looking at the GUI, what command will be generated. See the next section.
The generated command (the "full call" includes all optionals) is shown in the Log View upon task execution and in the Outline View when a task is selected in the Task View.
Note that to fully support editing modifiers, that is, task parameter GUI elements that allow generating and editing values (versus just passing variables), full support for generating Jython values must be provided. Most basic and usual types are supported, but if this is not the case then you need to provide a Jython converter that supports your type.
For example, a JythonConverter for X. Note that the mechanism for showing defaults does not rely on toString() working for all types, so it generates safe Jython to avoid unwanted errors. So, if your task has a parameter with an editable modifier (one that generates values, versus only using already defined variables), the system may not generate the expected values for this type from the modifier. If that is the case, then the module that defines the type (class) of the modifier can register such a converter.
Your modifier should extend AbstractModifier, which in turn implements the Modifier interface. Both reside in the ia.gui.apps.modifier package. The Modifier interface consists of two explicit contracts:
Modifiers must also honour two implicit contracts:
- Support of drag and drop features (by indirectly extending setVariableSelection and getVariableSelection in the ia.gui.kernel package)
- Support object inspection (via the setObject and getObject methods)
- Deriving from
JComponent
- If registered in the Extension Registry, providing an empty constructor
If you want to share your modifier, you can register it via the Extension Registry with the following syntax (please note the name of the factory:
factory.modifier). Note also that the automatic selection in the registries is based on the most specific type. You can also customise the modifiers used by a particular task by overriding Task.getCustomModifiers():

// Customise your modifiers
@Override
public Map<String, Modifier> getCustomModifiers() {
    Map<String, Modifier> map = new LinkedHashMap<String, Modifier>();
    map.put("someInput", new MyModifier());
    return map;
}
The JTaskSignatureComponent class implements the TaskSignatureComponent interface, which consists of four explicit contracts:
Task signature components must also honour two implicit contracts:
- Deriving from JComponent
- If registered in the Extension Registry, providing an empty constructor
For example, you may want to use a custom Signature Component that just wants to use JFilePathModifier for a given parameter. Note that you can use this function to add listeners and special behaviours to your Modifiers.
herschel.ia.gui.kernel.SiteEventHandler, passing a herschel.ia.gui.kernel.event.CommandExecutionRequestEvent.
You can create this event through the herschel.ia.task.gui.dialog.TaskCommandExecutionEventFactory. Note that the factory uses the preference Editors & Viewers/Task Dialog/fullCommand to generate long or short forms for the commands.
- state
herschel.ia.gui.kernel.ToolParameterinitiated
EditorComponent(by extending
AbstractEditorComponent ... }
The example:

    private ArrayData _data = null;
    private boolean _flag = true;

    public SimpleButtonTool() {
        super("simpleButton", new ToolParameter("data", ArrayData.class));
    }

    void setData(ArrayData data) {
        _data = data;
    }

    void updateLabel(JButton button) {
        boolean hasData = _data != null;
        button.setEnabled(hasData);
        if (hasData) {
            int size = _data.getSize();
            int rank = _data.getRank();
            button.setText("Data has " + (_flag ? "size " + size : "rank " + rank));
            _flag = !_flag;
        } else {
            button.setText("No data selected");
        }
    }
}

3. The registration

public class SimpleButtonToolComponent extends AbstractEditorComponent<ToolSelection> {

    private static final long serialVersionUID = 1L;
    private static int _counter = 1;
    private SimpleButtonTool _tool;

    protected Class<ToolSelection> getSelectionType() {
        return ToolSelection.class;
    }

    protected boolean makeEditorContent() {
        final JButton button = new JButton();
        setName("Button Tool " + _counter++);
        ToolSelection toolSelection = getSelection();
        _tool = (SimpleButtonTool) toolSelection.getTool();
        Selection selection = toolSelection.getSelection();
        if (selection != null) {
            _tool.setData((ArrayData)selection.

Changing the name of a parameter (keeping the old name deprecated during a release):
- Rename the parameter to the NEWNAME
- Copy the parameter at the end of the signature (with the old name OLDNAME) as optional
Changing the name of a task
- Rename the task (NEWNAME, both in source and init.py)
- Add another instance in the init.py, changing the name (OLDNAME) and deprecatedWith attributes. Export the new
|
http://herschel.esac.esa.int/twiki/bin/view/Public/DpHipeTools?cover=print;rev=91
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Spring Integration provides a couple of ways of integrating with an HTTP endpoint:
1. Http Outbound adapter - to send the messages to an http endpoint
2. Http Outbound gateway - to send messages to an http endpoint and to collect the response as a message
My first instinct for polling the HTTP endpoint was to use an HTTP Inbound channel adapter; the wrong assumption I made was that the adapter would be responsible for getting the information from an endpoint. What the HTTP Inbound Gateway actually does is expose an HTTP endpoint and wait for requests to come in! This is why I started by saying that it was a little non-intuitive to me that, to poll a URL and collect content from it, I would actually have to use an HTTP Outbound gateway.
With this clarified, consider an example where I want to poll the USGS Earth Quake information feed available at this url -
This is how my sample http Outbound component looks like:
<int:channel id="quakeinfo.channel">
    <int:queue/>
</int:channel>

<int:channel id="quakeinfotrigger.channel"></int:channel>

<int-http:outbound-gateway url=""
        request-channel="quakeinfotrigger.channel"
        reply-channel="quakeinfo.channel"
        http-method="GET"
        expected-response-type="java.lang.String">
</int-http:outbound-gateway>
Here the http outbound gateway waits for messages to come into the quakeinfotrigger channel, sends out a GET request to the "" url, and places the response json string into the "quakeinfo.channel" channel
Testing this is easy:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("httpgateway.xml")
public class TestHttpOutboundGateway {

    @Autowired
    @Qualifier("quakeinfo.channel")
    PollableChannel quakeinfoChannel;

    @Autowired
    @Qualifier("quakeinfotrigger.channel")
    MessageChannel quakeinfoTriggerChannel;

    @Test
    public void testHttpOutbound() {
        quakeinfoTriggerChannel.send(MessageBuilder.withPayload("").build());
        Message<?> message = quakeinfoChannel.receive();
        assertThat(message.getPayload(), is(notNullValue()));
    }
}
What I am doing here is getting a reference to the channel which triggers the outbound gateway to send a message to the http endpoint and reference to another channel where the response from the http endpoint is placed. I am triggering the test flow by placing a dummy empty message in the trigger channel and then waiting on message to be available on the response channel and asserting on the contents.
This works cleanly. However, my original intent was to write a poller that would poll this endpoint once every minute or so. To do this, I essentially have to place a dummy message into the "quakeinfotrigger.channel" channel every minute, which is easily accomplished using a Spring Integration "poller" and a bit of Spring Expression Language:
<int:inbound-channel-adapter channel="quakeinfotrigger.channel" expression="''">
    <int:poller fixed-delay="60000"/>
</int:inbound-channel-adapter>
Here I have a Spring inbound-channel-adapter attached to a poller, with the poller triggering an empty message every minute.
All this looks a little convoluted but works nicely - here is a gist with the working code.
References:
1. Based on a question I had posed at the Spring forum
This was very helpful, thanks
Great, but how do you mock the web service you are calling? In automated tests, you most certainly don't want to call an external service.
Thank you.
|
http://www.java-allandsundry.com/2012/11/polling-http-end-point-using-spring.html?showComment=1478440456899
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Your applications can use Service Accounts to run automated tasks and interact with other Google Cloud APIs. Allowing your applications to manage their own SSH keys and connect to instances can be useful for automating system management processes. This tutorial shows how to configure apps to access your instances over SSH connections. The sample app in this tutorial uses a service account and OS Login for SSH key management.
All code used in this tutorial is hosted on the GoogleCloudPlatform/python-docs-samples GitHub page.
Objectives
The tutorial covers the following tasks:
- Run the app on an instance where the service account is associated.
- Run the app outside of Compute Engine, where you must provide the service account key manually and specify additional SSH parameters.
- On your personal user account, obtain the following IAM roles for your project:
compute.instanceAdmin.v1
compute.networkAdmin
compute.osAdminLogin
iam.serviceAccountAdmin
iam.serviceAccountKeyAdmin
iam.serviceAccountUser
- Learn how to use Cloud Shell to run
gcloud command-line tool commands.
Create and configure the service account and the example instances
Create a service account and two instances to use for this tutorial. You use the service account to grant SSH access to your application, and that application will connect from one instance to the other over SSH.
Use the following steps to configure the test environment:
Open Cloud Shell in the console:
Export an environment variable to set your project ID for future commands:
export PROJECT_ID='[PROJECT_ID]'
Create a new service account named
ssh-account:
gcloud iam service-accounts create ssh-account --project $PROJECT_ID \ --display-name "ssh-account"
Create a network named
ssh-example to use for this tutorial:
gcloud compute networks create ssh-example --project $PROJECT_ID
Create a firewall rule that enables all SSH connections to instances on the
ssh-example network:
gcloud compute firewall-rules create ssh-all --project $PROJECT_ID \ --network ssh-example --allow tcp:22
Create an instance in
us-central1-f named
target. This instance serves as the remote instance that your service account will connect to over SSH. Use the
--metadata flag to enable OS Login on this specific instance. Include the
--no-service-account and
--no-scopes flags because this instance does not need to run any API requests for this specific example:
gcloud compute instances create target --project $PROJECT_ID \ --zone us-central1-f --network ssh-example \ --no-service-account --no-scopes \ --machine-type f1-micro --metadata=enable-oslogin=TRUE
Grant the
compute.osAdminLogin IAM role to the service account, so it can establish SSH connections specifically to the instance named
target. The
compute.osAdminLogin role also grants your service account superuser privileges on the instance. Although you could grant this role at the project level so that it applies to all instances in your project, grant this role at the instance level to control SSH access in a more granular way. You can grant additional permissions to your service account later if you find that your applications require access to other resources in your project:
gcloud compute instances add-iam-policy-binding target \ --project $PROJECT_ID --zone us-central1-f \ --member serviceAccount:ssh-account@$PROJECT_ID.iam.gserviceaccount.com \ --role roles/compute.osAdminLogin
Create an instance in
us-central1-f named
source. Associate the instance with the
ssh-account service account. Also, specify the
cloud-platform scope, which is required for the service account to execute API requests on this instance:
gcloud compute instances create source \ --project $PROJECT_ID --zone us-central1-f \ --service-account ssh-account@$PROJECT_ID.iam.gserviceaccount.com \ --scopes \ --network ssh-example --machine-type f1-micro
The service account can now manage its own SSH key pairs and can use SSH
to connect specifically to the
target instance. Because the
source instance
is associated with the
ssh-account service account that you created, the
Cloud Client Libraries for Python can use
Application Default Credentials
to authenticate as the service account and use the roles that you granted to
that service account earlier.
Next, configure and run an app that can SSH from one instance to another instance.
Run an SSH app on an instance
When apps running on your instances require SSH access to other instances, you can manage the SSH key pairs for your service account and execute SSH commands programmatically. For this example, run a sample app using the following process:
Connect to the
source instance using the
gcloud command-line tool:
gcloud compute ssh source --project $PROJECT_ID --zone us-central1-f
On the
source instance, install
pip and the Python client library:
my-username@source:~$ sudo apt update && sudo apt install python-pip -y && pip install --upgrade google-api-python-client
Download the
service_account_ssh.py sample app from GoogleCloudPlatform/python-docs-samples:
my-username@source:~$ curl -O
Run the sample app, which uses
argparse to accept variables from the command line. In this example, instruct the app to install and run
cowsay on the
target instance. For this command, add your project ID manually:
my-username@source:~$ python service_account_ssh.py \ --cmd 'sudo apt install cowsay -y && cowsay "It works!"' \ --project [PROJECT_ID] --zone us-central1-f --instance target ⋮ ___________ It works! ----------- \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || ||
If the app runs correctly, you receive the output from the
cowsay app. You can modify the
--cmd flag to include any
command that you want. Alternatively, you can write your own app that
imports
service_account_ssh.py and calls it directly.
Run
exit to disconnect from the
source instance and return to
Cloud Shell.
Run an SSH app outside of Compute Engine
In the previous example, you ran the app on a Compute Engine
instance where the Cloud Client Libraries for Python could use
Application Default Credentials to use the
service account that is associated with the
source instance. If you run this
app outside of a Compute Engine instance,
the client library can't access the service account and its permissions unless
you provide the service account key manually.
Obtain the external IP address for the
target instance that you created earlier in this tutorial. You can find this address either in the console on the Instances page or by running the following command from the
gcloud command-line tool:
gcloud compute instances describe target \ --project $PROJECT_ID --zone us-central1-f
Create a service account key for the
ssh-account service account that you used in the previous example, and download the key file to your local workstation.
Copy the service account key to the system where you want to run this example.
Open a terminal on the system where you want to run this example.
Set the
GOOGLE_APPLICATION_CREDENTIALS environment variable to point to the path where your service account key
.json file is located. If your key is in your
Downloads folder, you might set an environment variable like the following example:
$ export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/key.json"
Install the prerequisites on this system:
Download the sample app:
$ curl -O
Run the sample app. When you run the app outside of Compute Engine, the metadata server is not available, so you must specify the service account email manually. You must also specify the external IP address for the
target instance that you obtained earlier.
$ python service_account_ssh.py \ --cmd 'sudo apt install cowsay -y && cowsay "It works!"' \ --account ssh-account@[PROJECT_ID].iam.gserviceaccount.com \ --project [PROJECT_ID] --hostname [TARGET_EXTERNAL_IP] ⋮ ___________ It works! ----------- \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || ||
If the app runs correctly, you receive the output from the
cowsay app.
How the sample app works
The
service_account_ssh.py sample app operates using the following
process:
- Initialize the OS Login API object.
- If you don't provide the service account email address manually, the app reads instance metadata to identify the service account that's associated with the instance. If you run this app outside of Compute Engine, you must provide the service account address manually.
- Call the
create_ssh_key() method to generate a temporary SSH key for the service account on the instance where this example runs and add the public key to the service account with an expiration timer that you can specify.
- Call the
getLoginProfile() method from the OS Login API to get the POSIX user name that the service account uses.
- Call the
run_ssh() method to execute a remote SSH command as the service account.
- Print the response from the remote SSH command.
- Remove the temporary SSH key files.
- OS Login removes the public key files automatically when they pass the expiration time.
def main(cmd, project, instance=None, zone=None,
         oslogin=None, account=None, hostname=None):
    """Run a command on a remote system."""

    # Create the OS Login API object.
    oslogin = oslogin or googleapiclient.discovery.build('oslogin', 'v1')

    # Identify the service account ID if it is not already provided.
    account = account or requests.get(
        SERVICE_ACCOUNT_METADATA_URL, headers=HEADERS).text
    if not account.startswith('users/'):
        account = 'users/' + account

    # Create a new SSH key pair and associate it with the service account.
    private_key_file = create_ssh_key(oslogin, account)

    # Using the OS Login API, get the POSIX user name from the login profile
    # for the service account.
    profile = oslogin.users().getLoginProfile(name=account).execute()
    username = profile.get('posixAccounts')[0].get('username')

    # Create the hostname of the target instance using the instance name,
    # the zone where the instance is located, and the project that owns the
    # instance.
    hostname = hostname or '{instance}.{zone}.c.{project}.internal'.format(
        instance=instance, zone=zone, project=project)

    # Run a command on the remote instance over SSH.
    result = run_ssh(cmd, private_key_file, username, hostname)

    # Print the command line output from the remote instance.
    # Use .rstrip() rather than end='' for Python 2 compatability.
    for line in result:
        print(line.decode('utf-8').rstrip('\n\r'))

    # Shred the private key and delete the pair.
    execute(['shred', private_key_file])
    execute(['rm', private_key_file])
    execute(['rm', private_key_file + '.pub'])


if __name__ == '__main__':

    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument(
        '--cmd', default='uname -a',
        help='The command to run on the remote instance.')
    parser.add_argument(
        '--project', help='Your Google Cloud project ID.')
    parser.add_argument(
        '--zone', help='The zone where the target instance is locted.')
    parser.add_argument(
        '--instance', help='The target instance for the ssh command.')
    parser.add_argument(
        '--account', help='The service account email.')
    parser.add_argument(
        '--hostname',
        help='The external IP address or hostname for the target instance.')
    args = parser.parse_args()

    main(args.cmd, args.project, instance=args.instance, zone=args.zone,
         account=args.account, hostname=args.hostname)
The
create_ssh_key() method generates a new SSH key pair. Then, the method
calls
users().importSshPublicKey() from the OS Login API to associate the
public key with the service account. The
users().importSshPublicKey() method
also accepts an expiration value, which indicates how long the public key
remains valid.
def create_ssh_key(oslogin, account, private_key_file=None, expire_time=300):
    """Generate an SSH key pair and apply it to the specified account."""
    private_key_file = private_key_file or '/tmp/key-' + str(uuid.uuid4())
    execute(['ssh-keygen', '-t', 'rsa', '-N', '', '-f', private_key_file])

    with open(private_key_file + '.pub', 'r') as original:
        public_key = original.read().strip()

    # Expiration time is in microseconds.
    expiration = int((time.time() + expire_time) * 1000000)

    body = {
        'key': public_key,
        'expirationTimeUsec': expiration,
    }
    oslogin.users().importSshPublicKey(parent=account, body=body).execute()

    return private_key_file
As a best practice, configure your service accounts to regularly generate new key pairs for themselves. In this example, the service account creates a new key pair for each SSH connection that it establishes, but you could modify this to run on a schedule that better meets the needs of your app.
The request body for
users().importSshPublicKey() includes the
expirationTimeUsec value, which tells OS Login when the key should expire.
Each account can have only up to 32 KB of
SSH key data, so it is best to configure your public SSH keys to expire
shortly after your service account has completed its operations.
After your service account configures its SSH keys, it can execute remote
commands. In this example, the app uses the
run_ssh() method to
execute a command on a remote instance and return the command output.
def run_ssh(cmd, private_key_file, username, hostname):
    """Run a command on a remote system."""
    ssh_command = [
        'ssh', '-i', private_key_file, '-o', 'StrictHostKeyChecking=no',
        '{username}@{hostname}'.format(username=username, hostname=hostname),
        cmd,
    ]
    ssh = subprocess.Popen(
        ssh_command, shell=False,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    result = ssh.stdout.readlines()
    return result if result else ssh.stderr.readlines()
Cleaning up
To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:
Open Cloud Shell in the console:
Delete the instance named
source:
gcloud compute instances delete source \ --project $PROJECT_ID --zone us-central1-f
Delete the instance named
target:
gcloud compute instances delete target \ --project $PROJECT_ID --zone us-central1-f
Delete the
ssh-account service account:
gcloud iam service-accounts delete ssh-account --project $PROJECT_ID
Delete the network named
ssh-example:
gcloud compute networks delete ssh-example --project $PROJECT_ID
What's next
- Download and view the full code sample. The full sample includes a small example of using all of these methods together. Feel free to download it, change it, and run it to suit your needs.
- Review the Compute Engine API reference and OS Login API reference to learn how to perform other tasks with these APIs.
- Start creating your own apps.
|
https://cloud.google.com/compute/docs/tutorials/service-account-ssh?hl=vi
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Mastering Oracle+Python, Part 8: Python for Oracle DBAs
by Przemyslaw Piotrowski, Published December 2011
Achieve extreme database management productivity with rapid prototyping in Python.
Traditionally, Bash or Perl are the tools of choice when operating systems need some scripting. Given their ease of use, they have become virtually ubiquitous and seeped into other software, including Oracle Database - which relies on them extensively for all kinds of administrative and management tasks.
The Mastering Oracle+Python Series.
Interacting with Filesystems
The core library for interacting with operating systems is the os module, with which you can handle system processes, recognize platforms, deal with OS pipes, and work with environment variables - in the form of over a hundred functions and variables.
Detecting the current platform is as easy as reaching to a predefined string in the os module. The following example illustrates the outcome on Oracle Linux 6.1 and also shows the default path separator for this OS.
A list of all Oracle’s environment variables is accessible through os.environ. The following example makes use of an inline generator expression:
>>> import os
>>> oracle_vars = dict((a,b) for a,b in os.environ.items() if a.find('ORACLE')>=0)
>>> from pprint import pprint
>>> pprint(oracle_vars)
{'ORACLE_HOME': '/u01/app/oracle/product/11.2.0/xe', 'ORACLE_SID': 'XE'}
which would correspond to
if written in SQL.
As we probe further, we begin checking the filesystem and looking at where we are. The table below lists the most common filesystem access functions and their descriptions.
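To make these concrete, here is a minimal sketch that exercises a few of the everyday calls - os.getcwd(), os.listdir() and os.stat(); nothing in it is Oracle-specific:

import os

cwd = os.getcwd()                          # where are we?
print cwd
for entry in os.listdir(cwd):              # list the current directory
    info = os.stat(os.path.join(cwd, entry))
    print entry, info.st_size, info.st_mtime   # size in bytes, last modification time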
Listing 1. walk.py: old log and trace files under the Oracle diagnostic directory
import datetime
import os
import sys
import time
from pprint import pprint

def readable(size):
    si=('B','KB','MB','GB','TB', 'PB', 'EB', 'ZB', 'YB')
    div = [n for n, m in enumerate(si) if pow(1024, n+1)>size][0]
    return "%.1f%s"%(size/float(pow(1024, div)), si[div])

total = {"log":0, "trace":0}

for path, dirs, files in os.walk(sys.argv[1]):
    for f in files:
        filepath = path+os.sep+f
        if os.stat(filepath).st_mtime>time.time()-(3600*24*int(sys.argv[2])):
            size = readable(os.path.getsize(filepath))
            age = datetime.datetime.fromtimestamp(os.stat(filepath).st_mtime)
            if f in ("log.xml", "alert.log", "listener.log"):
                filetype = "log"
            elif f.endswith("trc") or f.endswith("trm"):
                filetype = "trace"
            else:
                filetype = None
            if filetype:
                total[filetype] += os.path.getsize(filepath)

for a, b in total.items():
    total[a] = readable(b)

pprint(total)
Running walk.py gives output similar to:
Within the os namespace there is another module that addresses path name manipulations called os.path. It contains platform-sensitive implementations for different systems, so importing os.path will always get the right version for your operating system.
Commonly used functions from the os.path module include (a short example follows the list):
- basename(path,) for getting the leaf name of given path
- dirname(path), for getting the directory part of a file path; it is supplemented by the split(path) function returning a tuple containing separated directory and file parts
- exists(path), to check if a file under path exists, returning False for unresolvable symbolic links
- getsize(path), for quickly checking the number of bytes under a path
- isfile(path) and isdir(path) to resolve the path type
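For illustration, a short sketch of these helpers follows; the alert log path used here is only a placeholder, not something taken from the listings above:

import os.path

path = '/u01/app/oracle/diag/rdbms/xe/XE/trace/alert_XE.log'   # illustrative path
print os.path.basename(path)      # leaf name: alert_XE.log
print os.path.dirname(path)       # directory part of the path
print os.path.split(path)         # (directory, file) tuple
if os.path.exists(path) and os.path.isfile(path):
    print os.path.getsize(path)   # number of bytes under the path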
Even though we’ve seen some extensive filesystem browsing capabilities so far, we've only scratched the surface as there are multiple other modules available. For example, the filecmp module is capable of comparing both files and directories, tempfile provides easy temporary file management, glob resolves file paths matching a Unix-style pattern (as in ora_pmon_*.trc, log_*.xml, etc.), and the very useful shutil module implements high-level filesystem operations like copying and removing multiple files or whole file trees.
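As a small illustration of glob in this context, the sketch below collects background-process trace files; the diagnostic directory and the exact patterns are assumptions for the example, not values from the article:

import glob
import os

trace_dir = '/u01/app/oracle/diag/rdbms/xe/XE/trace'   # assumed location
for pattern in ('ora_pmon_*.trc', 'log_*.xml'):
    # expand the Unix-style pattern into matching file paths
    for name in glob.glob(os.path.join(trace_dir, pattern)):
        print name, os.path.getsize(name)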
Talking Processes
The os module is not just restricted to file management. It can also be used to interact with and spawn system processes, and to perform system kill and nice calls. The table below lists the most useful process management functions. These are only valid for Unix and Linux platforms, though there is some work under way to get them working on Windows in the Python 3.2 branch.
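For a quick taste, the short sketch below touches os.getpid(), os.getppid(), os.kill() and os.nice(); sending signal 0 is merely a liveness probe and just one possible use of os.kill():

import os

print os.getpid()            # PID of the current interpreter
print os.getppid()           # PID of the parent process
try:
    os.kill(os.getpid(), 0)  # signal 0 only checks that the process exists
    print 'process is alive'
except OSError:
    print 'no such process'
print os.nice(0)             # current niceness; a positive argument lowers priority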
While many of these functions might come in handy on older Python releases, starting with version 2.4, there is a dedicated subprocess module created specifically with process management in mind. Initially submitted to the Python Enhancement Proposal Index (PEP) in 2003, the new module is now the preferred way of communicating with system processes.
Subprocess replaces the os.popen, os.spawn*, and os.system functions with a usable, straightforward interface that is also quite versatile. Listing 2 shows the code for the ps.py program, which executes a
ps aux command and moves the results into a Python dictionary. A pipe is used as the target for stdout to capture all information and suppress output to the screen.
Listing 2. ps.py: moving the system process map into a Python dictionary
import re
import subprocess

args = ['ps', 'aux']
ps = subprocess.Popen(args, stdout=subprocess.PIPE)
processes = ps.stdout.readlines()

header = re.split('\s+', processes.pop(0))[:-1]
header.remove('COMMAND')

PS = {}
for process in processes:
    columns = re.split('\s+', process)
    if columns[0]!='oracle':
        continue
    PS[int(columns[1])] = {}
    for position, column in enumerate(columns[:9]):
        PS[int(columns[1])][header[position].lower()] = column
    PS[int(columns[1])]['command'] = ' '.join(columns[10:])

from pprint import pprint
pprint(PS)
The output is similar to:
... 25892: {'%cpu': '0.0', '%mem': '3.9', 'command': 'xe_w000_XE ', 'pid': '25892', 'rss': '23672', 'start': '16:02', 'stat': 'Ss', 'tty': '?', 'user': 'oracle', 'vsz': '457240'}, 26142: {'%cpu': '2.0', '%mem': '0.9', 'command': 'python proc.py ', 'pid': '26142', 'rss': '5732', 'start': '16:36', 'stat': 'S+', 'tty': 'pts/2', 'user': 'oracle', 'vsz': '160776'}, 26143: {'%cpu': '0.0', '%mem': '0.1', 'command': 'ps aux ', 'pid': '26143', 'rss': '1100', 'start': '16:36', 'stat': 'R+', 'tty': 'pts/2', 'user': 'oracle', 'vsz': '108044'}}
The popen function accepts a number of keyword arguments like stdin/stdout/stderr descriptors, cwd for setting the working directory for a process, or env which sets the environment variables of the child process. To check the status of a command, you just peek at the returncode attribute. The process identifier is available under the pid property.
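To make that concrete, here is a small hedged example of those keyword arguments and attributes; the working directory and the environment variable shown are placeholders, not values from the article:

import os
import subprocess

env = dict(os.environ)
env['NLS_LANG'] = 'AMERICAN_AMERICA.AL32UTF8'      # illustrative environment override
proc = subprocess.Popen(['ls', '-l'],
                        cwd='/tmp',                # working directory for the child
                        env=env,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
print proc.pid, proc.returncode                    # child PID and exit status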
Methods on an already created process include poll() for checking whether it is still running, wait() for resuming upon program completion, send_signal() for sending a particular signal, and terminate() or kill() for sending SIGTERM or SIGKILL signals, respectively. Finally, to fully interact with the spawned child process, we use the communicate() function to send stdin input.
To illustrate this, let's create a simple SQL*Plus wrapper that thrives on a bequeathed SYSDBA connection.
Listing 3. sp.py: communicating with a SQL*Plus process from Python
import os
from subprocess import Popen, PIPE

sqlplus = Popen(["sqlplus", "-S", "/", "as", "sysdba"], stdout=PIPE, stdin=PIPE)
sqlplus.stdin.write("select sysdate from dual;"+os.linesep)
sqlplus.stdin.write("select count(*) from all_objects;"+os.linesep)
out, err = sqlplus.communicate()
print out
This returns output similar to:
A Reporting Service
One of the most daunting tasks that involves stepping out of the database is sending alerts or pushing out recurring reports pulled from a data warehouse. The good news is that not only has Python been used to implement one of the world’s popular mailing list systems - Mailman - but that it also offers a rich email handling library supporting MIME, attachments, message encoding, and literally every aspect related to processing electronic mail. The email module separates the protocol intrinsics from presentation layer to focus purely on constructing messages, while the delivery work is handled via the smtplib module.
The Message class from email.message represents the core class for working with emails. Handlers from the email.mime namespace are used to deal with different attachment types. In this example however, we’ll use the most generic one: MIMEBase from email.mime.base. There will be also some cheating on our part, cashing in on the fact that spreadsheet software will open HTML files in tabular format if they have an .xls extension. We will also take advantage of the help of the tempfile module.
Oracle Linux doesn't have the cx_Oracle module preinstalled, so you'll need to get it from cx-oracle.sourceforge.net. Also, to be able to import cx_Oracle and use network configuration files, ORACLE_HOME and LD_LIBRARY_PATH need to be set up before launching the Python interpreter.
[root@xe ~]# rpm -ivh cx_Oracle-5.1-11g-py26-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:cx_Oracle              ########################################### [100%]
[root@xe ~]#
[root@xe ~]# su - oracle
[oracle@xe ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
[oracle@xe ~]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib
See Listing 4 for a complete program that connects to Oracle Database 11g XE, fetches employee data, and packages it as a spreadsheet attachment sent to an email group.
Listing 4. report.py: an email reporting service
import cx_Oracle
import datetime
import smtplib
import tempfile
from email.message import Message
from email.encoders import encode_base64
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart

today = datetime.datetime.now()

msg = MIMEMultipart()
msg['From'] = 'Reports Service <[email protected]>'
msg['To'] = '[email protected]'
msg['Subject'] = 'Monthly employee report %d/%d ' % (today.month, today.year)

db = cx_Oracle.connect('hr', 'hrpwd', 'localhost/xe')
cursor = db.cursor()
cursor.execute("select * from employees order by 1")

report = tempfile.NamedTemporaryFile()
report.write("<table>")
for row in cursor:
    report.write("<tr>")
    for field in row:
        report.write("<td>%s</td>" % field)
    report.write("</tr>")
report.write("</table>")
report.flush()

cursor.close()
db.close()

attachment = MIMEBase('application', 'vnd.ms-excel')
report.file.seek(0)
attachment.set_payload(report.file.read())
encode_base64(attachment)
attachment.add_header('Content-Disposition',
    'attachment;filename=emp_report_%d_%d.xls' % (today.month, today.year))
msg.attach(attachment)

emailserver = smtplib.SMTP("localhost")
emailserver.sendmail(msg['From'], msg['To'], msg.as_string())
emailserver.quit()
If we were to take this example even further, we could use the Python Imaging Library (PIL) to grab statistic charts, attach thumbnails of BLOBs stored in a database, or generate PDF reports with ReportLab to be sent across groups of interest. The email module is powerful enough to handle every possible scenario.
Wrapping Up
Python's extensive library of cross-platform modules will definitely complement a DBA's portfolio of technologies used for watching over the entire database stack, offering rapid development speed with little upkeep overhead. The ubiquity of Python, shipping in every modern Linux platform, could further increase its adoption rate and over time, help it become the new language of choice for all database administration needs.
Przemyslaw Piotrowski is an information technology specialist working with emerging technologies and dynamic, agile development environments. Having a strong IT background that includes administration, development and design, he finds many paths of software interoperability.
|
https://developer.oracle.com/dsl/mastering-oracle-python-dba.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Path Class
Definition
public ref class Path abstract sealed
[System.Runtime.InteropServices.ComVisible(true)] public static class Path
type Path = class
Public Class Path
- Inheritance
-
- Attributes
-
Examples
The following example demonstrates some of the main members of the
Path class.
using namespace System;
using namespace System::IO;
...
Collections::IEnumerator^ myEnum = Path::InvalidPathChars->GetEnumerator();
while ( myEnum->MoveNext() )
{
    Char c = *safe_cast<Char^>(myEnum->Current);
    Console::WriteLine( ... );
}
Remarks
.NET Core 1.1 and later versions and .NET Framework 4.6.2 and later versions also support access to file system objects that are device names, such as "\?\C:".
For more information on file path formats on Windows, see File path formats on Windows systems. Members of the Path class throw an exception if the string contains characters that are not valid in path strings, as defined in the characters returned from the GetInvalidPathChars method. For a list of common I/O tasks, see Common I/O Tasks.
|
https://docs.microsoft.com/en-us/dotnet/api/system.io.path?view=netframework-1.1
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
table of contents
NAME¶CURLOPT_SSH_PUBLIC_KEYFILE - set public key file for SSH auth
SYNOPSIS¶
#include <curl/curl.h> CURLcode curl_easy_setopt(CURL *handle, CURLOPT_SSH_PUBLIC_KEYFILE, char *filename);
DESCRIPTION¶Pass a char * pointing to a filename for your public key. If not used, libcurl defaults to $HOME/.ssh/id_dsa.pub if the HOME environment variable is set, and just "id_dsa.pub" in the current directory if HOME is not set.
If NULL (or an empty string) is passed, libcurl will pass no public key to libssh2, which then tries to compute it from the private key. This is known to work with libssh2 1.4.0+ linked against OpenSSL.

EXAMPLE¶
curl_easy_setopt(curl, CURLOPT_SSH_PUBLIC_KEYFILE, "/home/clarkkent/.ssh/id_rsa.pub");
ret = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
|
https://manpages.debian.org/unstable/libcurl4-doc/CURLOPT_SSH_PUBLIC_KEYFILE.3.en.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Using Azure Traffic Manager for Private Endpoint Failover – Automation
In a recent blog post, I described that Azure Traffic Manager (ATM) can be useful in failover scenarios for applications with private endpoints, e.g. internal web apps running in an Internal Load Balancer (ILB) App Service Environment (ASE). In the previous post, I described how the failover can be done manually in the portal and in this blog post, I will describe how the failover can be automated. If you have not done so yet, please read 1) the Azure Traffic Manager documentation, and b) my previous blog post.
I have expanded the previous scenario to look something like this:
There is a lot going on here, but the basic principle is that I have added an Azure Function App in each of the ASEs. The function runs on a regularly scheduled trigger and probes the web app to see if it is up. The result of this probe is written as a heart beat (more details below) in a blob storage container. An Azure Logic App is then used to read the heart beat information from the blob storage and make a decision on which of the endpoints to direct traffic to. It then updates the traffic manager endpoints accordingly. Let's take a look at some of the details of each component.
The Azure function code is running in both ASEs to probe each of the sites individually. The code is very simple:
[csharp]
#r "Microsoft.WindowsAzure.Storage"
#r "Newtonsoft.Json"
using System;
using System.Net;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Newtonsoft.Json;
public static async Task<HttpResponseMessage> Run(TimerInfo myTimer, TraceWriter log)
{
string probeEndpoint = GetEnvironmentVariable("ProbeEndpoint");
string connectionString = GetEnvironmentVariable("StorageConnectionString");
string heartBeatContainer = GetEnvironmentVariable("HeartBeatContainer");
log.Info(connectionString);
HttpClient client = new HttpClient();
HttpResponseMessage response = await client.GetAsync(probeEndpoint);
HeartBeat beat = new HeartBeat();
beat.Time = DateTime.Now;
beat.Status = (int)response.StatusCode;
using (HttpContent content = response.Content)
{
string result = await content.ReadAsStringAsync();
beat.Content = result;
}
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference(heartBeatContainer);
container.CreateIfNotExists();
CloudBlockBlob blockBlob = container.GetBlockBlobReference("beat.json");
blockBlob.UploadText(JsonConvert.SerializeObject(beat));
log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
return new HttpResponseMessage(HttpStatusCode.OK);
}
public static string GetEnvironmentVariable(string name)
{
return System.Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);
}
public class HeartBeat
{
public DateTime Time { get; set; }
public int Status { get; set; }
public string Content { get; set; }
}
[/csharp]
It uses an HTTP client to probe the endpoint. It does not make any decisions based on this result; it simply stores a heart beat in a storage account. The result of that would look something like this:
[js]
{
"Time": "2018-05-24T19:40:00.0492292+00:00",
"Status": 200,
"Content": "...RESPONSE..."
}
[/js]
In this scenario, the probe is placed inside the virtual network of the web app. One could argue that this does not test whether the application can be reached from some other point on the network, but the nice thing is that you can really define what "available" means. Specifically, you could place the probing code anywhere on your organization's network and update the blobs accordingly. It could even involve multiple probes if appropriate.
The failover orchestration itself is handled by a Logic App. Here is the overview of the flow:
It may look a little complicated, but it is actually fairly simple. The sequence is:
Set up Boolean variables (one for each region), indicating whether the region is up or not.
(For each region). Read the heart beat blob.
(For each region). Parse the JSON contents according to schema.
(For each region). Determine if the region is up or down. This is where you would have to define what that means, but my definition is that there has to be a heart beat with an HTTP status code of 200 (OK) within the last 5 minutes. You can define such an expression with a combination of dynamic content (from the JSON parsing) and expressions, e.g. for the Texas region, it would look like:
[plain]
and(equals(body('TxParse')?['Status'],200),greater(ticks(addMinutes(body('TxParse')?['Time'],5)),ticks(utcNow())))
[/plain]
The two paths (for each region) are combined with a logic expression. Again, the specific logic will depend on what makes sense for your application, but my logic is that if the Virginia region is up or both regions are down, I will point to Virginia. In other cases (only Texas is up), I will point to Texas.
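As a sketch, the combined condition could be expressed with a Logic App expression such as the one below (the variable names VirginiaUp and TexasUp are illustrative, not from the original post):
[plain]
or(equals(variables('VirginiaUp'), true), equals(variables('TexasUp'), false))
[/plain]
This evaluates to true (route to Virginia) when Virginia is up or when Texas is down, which also covers the case where both regions are down.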
The Traffic Manager endpoints are updated using a "Create or update resource" action:
You can configure the Resource Manager connection with your Azure credentials or use a Service Principal.
You can validate this setup by verifying a) traffic is routed to the primary site as long as it is up, b) if you turn off the primary web app (or the probe), traffic will flow to the secondary site.
There are a number of improvements that one could add to the setup. The logic app should be running in both regions to ensure proper behavior in the event of complete regional failure. There are also some storage account outages that should maybe be handled as edge cases, but with geo redundant storage, they would be exceedingly rare. Things have been kept simple here to make the scenario easier to digest, but you can add all the bells and whistles you would like.
In the Logic App, you can also include steps such as email alerts and/or push data to Log Analytics. This would create the needed visibility and alerts in connection with failover events.
And that's it. Between this blog post and the previous one, we have looked at how to use Azure Traffic Manager to orchestrate failover for private endpoints; either manually or automated. All the services used are managed services that are available in both Azure Government and Azure Commercial.
|
https://docs.microsoft.com/en-us/archive/blogs/mihansen/using-azure-traffic-manager-for-private-endpoint-failover-automation
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Jordan Featherstone
0 Results on .tables
I'm not sure why, but the tables don't seem to be saving in the database. I have scanned through my code and couldn't notice any errors, so I'm guessing it's something very small that I can't put my finger on.
from peewee import *
import datetime

db = SqliteDatabase('dairy.db')

class Entry(Model):
    content = TextField()
    timestamp = DateTimeField(default=datetime.datetime.now)

    class Meta:
        database = db

def init():
    db.connect()
    db.create_tables([Entry], safe=True)

def menu_loop():
    """ loop """

def add_entry():
    """add"""

def view_entries():
    """View pervious entries."""

def delete_entry(entry):
    """Delete an entry."""

if __name__ == '__main__':
    init()
    menu_loop()
Robin Burkett
I noticed 2 typos, but they probably aren't the problem.
What solved my problem when this happened to me was using the command "sqlite3 diary.db" to start sqlite3 instead of just "sqlite3".
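For example, an illustrative session (not from the original answer; note that the posted code spells the database file name "dairy.db"):
$ sqlite3 dairy.db
sqlite> .tables
entry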
|
https://teamtreehouse.com/community/0-results-on-tables
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Now you won't be issued a demand notice from the Income Tax Department automatically if the details given in your income tax return do not match with your Form 26AS, Form 16A or Form 16 details.
Good news for taxpayers. Now you won’t be issued a demand notice from the Income Tax Department automatically if the details given in your income tax return (ITR) do not match with your Form 26AS, Form 16A or Form 16 details. The Union Budget 2018 has, in fact, proposed to withdraw the practice of ITR assessment where an assessee used to get an income tax notice automatically if his/her Form 26AS, Form 16A or Form 16 details did not match with the income furnished in the ITR.
It may be noted that as per the current income tax laws, clause (a) of section 143(1) provides that at the time of processing of income tax return, the total income or loss shall be computed only after making the adjustments which are specified in sub-clauses (i) to (vi) thereof. Sub-clause (vi) of the above-mentioned clause provides for adjustment with respect to mismatch in the income of Form 26AS or Form 16A or Form 16 and the income tax return filed.
In simple words, “the Income Tax Department can compare the income returned in ITR with the income appearing in Form 26AS or Form 16A or Form 16 and provides for addition of income accordingly. Due to this power, the I-T department used to send system-generated intimations to the taxpayers. The assessee was required to respond to it online within 30 days,” says CA Abhishek Soni, Founder, tax2win.in.
Now, with a view to restrict the scope of adjustments, it is proposed to insert a new proviso to the clause (vi) of subsection (1) of Sec 143 to provide that no adjustment in regard to mismatch shall be made in respect of any return furnished on or after the assessment year commencing on the first day of April 2018.
“The earlier provision was to probe the genuine cases where an individual has forgotten to give complete details to his employer or where the TDS return has been filed with error or the like cases. With the insertion of proviso, the unnecessary cost and time which used to be involved in the genuine cases as per the existing provisions shall surely be mitigated,” says Soni.
To sum up, we can say that if this tax proposal of the Budget 2018 is passed by the Parliament, then it will restrict the tax authorities from making any adjustments in regard to the mismatch between ITR and Form 26AS, Form 16 and Form 16A for the returns filed for AY 2018-19 (FY 2017-18) and after. However, “the adjustments for the I-T returns filed up to AY 2017-18 (FY 2016-17) shall continue to be applicable as per the current law,” informs Soni.
|
https://www.financialexpress.com/money/income-tax-return-itr-filing-mismatch-in-form-26as-itr-details-you-wont-get-demand-notice-from-taxmen-now/1053505/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Generics in .NET.
generic<typename T> public ref class Generics { public: T Field; };
public class Generic<T> { public T Field; }
Public Class Generic(Of T) Public Field As T End Class
When you create an instance of a generic class, you specify the actual types to substitute for the type parameters. This establishes a new generic class, referred to as a constructed generic class, with your chosen types substituted everywhere that the type parameters appear. The result is a type-safe class that is tailored to your choice of types, as the following code illustrates.
static void Main()
{
    Generics<String^>^ g = gcnew Generics<String^>();
    g->Field = "A string";
    //...
    Console::WriteLine("Generics.Field = \"{0}\"", g->Field);
    Console::WriteLine("Generics.Field.GetType() = {0}", g->Field->GetType()->FullName);
}
public static void Main()
{
    Generic<string> g = new Generic<string>();
    g.Field = "A string";
    //...
    Console.WriteLine("Generic.Field = \"{0}\"", g.Field);
    Console.WriteLine("Generic.Field.GetType() = {0}", g.Field.GetType().FullName);
}
Public Shared Sub Main()
    Dim g As New Generic(Of String)
    g.Field = "A string"
    '...
    Console.WriteLine("Generic.Field = ""{0}""", g.Field)
    Console.WriteLine("Generic.Field.GetType() = {0}", g.Field.GetType().FullName)
End Sub
Generics terminology
The following terms are used to discuss generics in .NET. For example, the Dictionary<TKey,TValue> generic type has two type parameters, TKey and TValue.
Constraints are limits placed on generic type parameters. For example, you might limit a type parameter to types that implement the System.Collections.Generic.IComparer<T> generic interface, to ensure that instances of the type can be ordered. You can also constrain type parameters to types that have a particular base class or that have a parameterless constructor.
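As a quick, hedged illustration (this example is not from the original article; the class name and members are made up), a constraint is expressed in C# with a where clause:
using System;

// T must implement IComparable<T> (so instances can be ordered) and expose a
// parameterless constructor; a base-class constraint would be written the same
// way, for example "where T : EventArgs".
public class Ordered<T> where T : IComparable<T>, new()
{
    public T Value;

    public Ordered()
    {
        Value = new T();   // allowed because of the new() constraint
    }

    public bool IsGreaterThan(T other)
    {
        return Value.CompareTo(other) > 0;
    }
}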
generic<typename T>
T Generic(T arg)
{
    T temp = arg;
    //...
    return temp;
}
T Generic<T>(T arg)
{
    T temp = arg;
    //...
    return temp;
}
Function Generic(Of T)(ByVal arg As T) As T
    Dim temp As T = arg
    '...
    Return temp
End Function
ref class A
{
    generic<typename T>
    T G(T arg)
    {
        T temp = arg;
        //...
        return temp;
    }
};

generic<typename T>
ref class Generic
{
    T M(T arg)
    {
        T temp = arg;
        //...
        return temp;
    }
};
class A
{
    T G<T>(T arg)
    {
        T temp = arg;
        //...
        return temp;
    }
}

class Generic<T>
{
    T M(T arg)
    {
        T temp = arg;
        //...
        return temp;
    }
}
Class A
    Function G(Of T)(ByVal arg As T) As T
        Dim temp As T = arg
        '...
        Return temp
    End Function
End Class

Class Generic(Of T)
    Function M(ByVal arg As T) As T
        Dim temp As T = arg
        '...
        Return temp
    End Function
End Class
Advantages and disadvantages of generics:
LinkedList<String^>^ llist = gcnew LinkedList<String^>();
LinkedList<string> llist = new LinkedList<string>();
Dim llist As New LinkedList(Of String)()
The generic collection classes also expose strongly typed methods such as FindLast and Find.
Note
A nested type that is defined by emitting code in a dynamic assembly or by using the Ilasm.exe (IL Assembler) is not required to include the type parameters of its enclosing types; however, if it does not include them, the type parameters are not in scope in the nested class.
For more information, see "Nested Types" in MakeGenericType.
Class Library and Language Support
.NET provides a number of generic collection classes in the following namespaces:
The System.Collections.Generic namespace contains most of the generic collection types provided by .NET, such as the List<T> and Dictionary<TKey,TValue> generic classes.
The System.Collections.ObjectModel namespace contains additional generic collection types. For background, see Introduction to Generics.
Related Topics
Reference
System.Collections.Generic
System.Collections.ObjectModel
System.Reflection.Emit.OpCodes
|
https://docs.microsoft.com/en-us/dotnet/standard/generics/index
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
od - Writes the contents of a file to standard output
od [-v] [-Q] [-A address_base] [-j skip] [-N count] [-t
type_string...] [file...]
od [-abBcCdDefFhHiIlLoOpPSvxX] [-s[number]] [-w[number]]
[file...] [+] [offset] [.] [b | B] [label] [.] [b | B]
The od command reads file (standard input by default), and
writes the information stored in file to standard output
using the format specified by the first option. If you do
not specify the first option, the -o option is the
default.
Interfaces documented on this reference page conform to
industry standards as follows:
od: XCU5.0
Refer to the standards(5) reference page for more information
about industry standards and associated tags.
Format characters are as follows: [Tru64 UNIX] Displays
quadwords as hexadecimal values. This option applies only
to the operating system for Alpha AXP systems. [Tru64
UNIX] Displays bytes as characters and displays them with
their ASCII names. If the p character is also given,
bytes with even parity are underlined. The P character
causes bytes with odd parity to be underlined. Otherwise,
parity is ignored. Specifies the input offset base with
the single-character address_base argument. The characters
d, o, and x specify that the offset base be written
in decimal, octal, or hexadecimal, respectively. The
character n specifies that the offset not be written at
all. Displays bytes as octal values. [Tru64 UNIX] Displays
short words as octal values. Displays bytes as
characters using the current setting of the LC_CTYPE variable.
The following nongraphic characters appear as C
escape sequences: Null [Tru64 UNIX] Alarm (or bell)
Backspace Formfeed Newline character Enter Tab [Tru64
UNIX] Vertical tab
Other nongraphic characters appear as 3-digit octal
numbers. Bytes with the parity bit set are displayed
in octal. [Tru64 UNIX] Displays any
extended characters as standard printable ASCII
characters using the appropriate character escape
string. Displays short words as unsigned decimal
values. [Tru64 UNIX] Displays long words as
unsigned decimal values. [Tru64 UNIX] Displays
long words as double-precision, floating-point.
(Same as -F.) [Tru64 UNIX] Displays long words as
single-precision, floating-point. [Tru64
UNIX] Displays long words as double-precision,
floating-point. [Tru64 UNIX] Displays short words
as unsigned hexadecimal values. [Tru64 UNIX] Displays
long words as unsigned hexadecimal values.
[Tru64 UNIX] Displays short words as signed
decimal values. [Tru64 UNIX] Display long words
as signed decimal values. (The three options are
identical.) Jumps over (reading or seeking) skip
bytes from the beginning of the concatenated input
files. If the input is not at least skip bytes
long, od writes a diagnostic message to standard
error and returns a nonzero exit value.
The skip argument is interpreted as a decimal number
by default. If you include a leading offset of
0x or 0X, skip is interpreted as a hexadecimal number.
A leading offset of 0 (zero) causes skip to
be interpreted as an octal number.
If you append the character b, k, or m to skip, the
number is interpreted as a multiple of 512, 1024,
or 1,048,576 bytes, respectively. If b is appended
to a skip interpreted as hexadecimal, it is recognized
as the last digit of the skip, not a block
indicator. Causes od to format no more than count
bytes of input.
The count argument is interpreted as a decimal number
by default. If you include a leading offset of
0x or 0X, count is interpreted as a hexadecimal
number. A leading offset of 0 (zero) causes count
to be interpreted as an octal number. If there are
not count bytes of input available (after successfully
skipping bytes as specified by -j), od formats
the available input. Displays short words as
octal values. [Tru64 UNIX] Displays long words as
unsigned octal values. [Tru64 UNIX] Indicates
even parity on -a conversion. [Tru64 UNIX] Indicates
odd parity on -a conversion. [Tru64
UNIX] Looks for strings of ASCII graphic characters,
terminated with a null byte. The number
argument specifies the minimum length string to be
recognized. By default, the minimum length is 3
characters. Allowable characters are those between
blank (040) and tilde (0176), as well as backspace,
tab, linefeed, formfeed, and carriage-return (010
through 015, except 013). If the environment variable
CMD_ENV is set to svr4, displays signed words
(32-bit or Tru64 UNIX short words) as signed decimal
values.
[Tru64 UNIX] If the environment variable CMD_ENV
is set to xpg4, action is the same as using the -i
option. [Tru64 UNIX] Displays long words as
signed decimal values. Specifies one or more output
types. The type_string argument is a string
that specifies the types to be used when writing
the input data. The type_string argument consists
of the following type specification characters:
Named character Character Signed decimal Floating
point Octal Unsigned decimal Hexadecimal
The type specification characters d, f, o, u, and x
can be followed by an optional unsigned decimal
integer that specifies the number of bytes to be
transformed by each instance of the output type.
The type specification character f can be followed
by one of the following optional characters, which
indicate the type of the item to which the conversion
should be applied. float double long double
The type specification characters d, o, u, and x
can be followed by one of the following optional
characters, which indicate the type of the item to
which the conversion should be applied: char int
long short
You can concatenate multiple types within the same
type_string argument and you can specify multiple
-t arguments. The od command writes the output
lines for each type specified in the order in which
you entered the type specification characters.
Shows all data. By default, display lines that are
identical to the previous line are not output
(except for the byte offsets), but are indicated
with an * (asterisk) in column 1. [Tru64
UNIX] Specifies the number of input bytes to be
interpreted and displayed on each output line. If
-w is not specified, 16 bytes are read for each
display line. If number is not specified, it
defaults to 32. Displays short words as unsigned
hexadecimal values. (Same as -h.) [Tru64
UNIX] Displays long words as unsigned hexadecimal
values. (Same as -H.)
[Tru64 UNIX] An uppercase format character implies the
long or double-precision form of the object.
A path name of a file to be written. If no file operands
are specified, the standard input will be used. If the
first character of file is a plus sign (+) or the first
character of the first file operand is numeric, no more
than two operands are given, and none of the -A, -j, -N,
or -t options is specified, the operand is assumed to be
an offset. Specifies the point in the file at which the
output starts. The offset argument is interpreted as
octal bytes. If a . (dot) is added to offset, it is
interpreted in decimal. If offset begins with x or 0x, it
is interpreted in hexadecimal. If b (B) is appended to a
nonhexadecimal offset, the offset is interpreted as a
block count, where a block is 512 (1024) bytes. If b (B)
is appended to a hexadecimal offset, the b (B) is interpreted
as part of the offset and the offset is not interpreted
as a block count; a block count can be specified
only with a decimal or an octal offset. Interpreted as a
pseudoaddress for the first byte displayed. It is shown
in parentheses following the file offset. It is intended
to be used with core images to indicate the real memory
address. The syntax for label is identical to that for
offset.
The output continues until the end of the file.
When od reads standard input, the offset and label
operands must be preceded by a + (plus sign).
If you omit the file argument and do not specify -A, -j,
-N, or -t, you must precede the offset argument by a +
(plus sign) character.
To be sure that od assumes the argument to be an offset:
Make the first character of file a + sign, or the first
character of the first file argument numeric. Give no
more than two arguments. Specify none of the -A, -j, -N,
or -t options.
The od command has the following restrictions: You cannot
use the command with disks that have a capacity of more
than 4 GB. You cannot specify an offset of more than
(2**32)-1 as a starting point.
[Tru64 UNIX] The -i option displays short words as signed
decimal values. The -i option used to be -s in System V.
The following exit values are returned: All input files
were processed successfully. An error occurred.
To display a file in octal word format, a page at a time,
enter: od a.out | more To translate a file into several
formats at once, enter: od -cx a.out >a.xcd
This writes a.out in hexadecimal format (the -x
option) into the file a.xcd, giving also the ASCII
character equivalent, if any, of each byte (the -c
option). To start in the middle of a file, enter:
od -bcx a.out +100.
This displays a.out in octal-byte, character, and
hexadecimal formats, starting from the 100th byte.
The . (dot) after the offset makes it a decimal
number. Without the (dot), the dump starts from the
64th (100 octal) byte.
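The following additional example is illustrative and not part of the original
page; it simply combines the -A, -t, -j, and -N options described above. To
skip the first 512 bytes of a.out and display the next 16 bytes as hexadecimal
byte values, with offsets shown in hexadecimal, enter: od -A x -t x1 -j 512 -N 16 a.out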
The following environment variables affect the execution
of od.
Commands: sed(1)
Files: locale(4)
Standards: standards(5)
od(1)
|
http://nixdoc.net/man-pages/Tru64/man1/od.1.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Hi,
I have defined a workflow so that it starts asynchronously, and there is a custom activity that implements ExternalActivity. In the execute method, I create a Runnable object and submit it to a custom thread pool, and then call execution.waitForSignal(). The new thread runs its own tasks and signals the execution in "work".
In the new thread, the lookup for the active execution in "work" sometimes throws an exception:
Execution execution = processInstance.findActiveExecutionIn("work");
The reason is because the state of the process instance is STATE_ASYNC and not STATE_ACTIVE_ROOT. Does anyone know why?
This is a web application and we start multiple process instances concurrently. We are using the jBPM JobExecutor with its default configuration. When I start about 10 process instances, I see one or two process instances fail to move on to the next activity because we were not able to signal the execution.
1) Does anyone know what's going wrong here?
2) Any suggestion how to make process instance move from STATE_ASYNC to STATE_ACTIVE_ROOT?
3) Or is there a way to signal process instance in STATE_ASYNC?
public class StartWorkAndWait implements ExternalActivity {
public void execute(ActivityExecution execution) {
// created Runnable and submitted to a custom ThreadPool
execution.waitForSignal();
}
public void signal(ActivityExecution execution,
String signalName,
Map<String, Object> parameters) {
execution.take(signalName);
}
}
Thanks,
Jee
|
https://developer.jboss.org/thread/175943
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Recommended ways to use WELD in JUnit (SE), Tomcat (Servlet) and JBoss
Alain Pannetier May 27, 2016 6:54 AM
I have a project relying on CDI in two places.
The target environment of the project is EAP 6.2 but we're also targeting a "lighter" tomcat 7 environment.
In addition each maven module has its bunch of associated JUnit tests.
All these executions will leverage WELD at some point.
1/ back end adapters are discovered at run time by their adapter manager (named core)
2/ JSF based (stored in jars) GUI contributions are also discovered at run time by their shell GUI (the war end product).
With a bottom up approach we tested and used CDI 1.1 based on WELD 2.3.4 for adapter discovery.
The dependency was on the SE flavour:
<dependency>
    <groupId>org.jboss.weld.se</groupId>
    <artifactId>weld-se</artifactId>
    <version>2.3.4</version>
</dependency>
So far so good, till the day when silly me deployed in tomcat and the weld-se jar triggered a clash with tomcat's own javax.el classes.
I got away with it by playing on the maven dependency exclusions and adding the weld-servlet dependency on the war module
<dependency>
    <groupId>com.acme.proj</groupId>
    <artifactId>core</artifactId>
    <version>1.0</version>
    <scope>runtime</scope>
    <exclusions>
        <exclusion>
            <groupId>org.jboss.weld.se</groupId>
            <artifactId>weld-se</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.jboss.weld.servlet</groupId>
    <artifactId>weld-servlet</artifactId>
    <version>2.3.4</version>
</dependency>
But I forgot that in the code we're explicitly doing things like
import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;
[...]
private AdapterRegister adapterRegister;
private Weld weld;
private WeldContainer container;
[...]
this.weld = new Weld();
this.container = weld.initialize();
adapterRegister = this.getBean(AdapterRegister.class);
So I'm looking for suggestions:
Ideally I would need to still be able to run my unit tests, many of them implicitly rely on CDI.
Yet also be able to write portable code (previous code was obviously not portable) that can be included in a war...
that can run on both JBoss and tomcat...
1. Re: Recommended ways to use WELD in JUnit (SE), Tomcat (Servlet) and JBossTomas Remes May 27, 2016 7:22 AM (in response to Alain Pannetier)
Hi Alain,
I think it would be a good idea to set up an extra Maven profile for execution on Tomcat (you can put this weld-servlet dependency in that profile) and some other profile for execution on EAP (no Weld dependency needed there). I am not sure how the usage of Weld SE fits in, but I think you can try to use the Arquillian Embedded container - GitHub - arquillian/arquillian-container-weld: Arquillian Weld Containers
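A rough sketch of what such profiles might look like (illustrative only, reusing the dependency coordinates already shown in this thread):
<profiles>
    <profile>
        <id>tomcat</id>
        <dependencies>
            <dependency>
                <groupId>org.jboss.weld.servlet</groupId>
                <artifactId>weld-servlet</artifactId>
                <version>2.3.4</version>
            </dependency>
        </dependencies>
    </profile>
    <profile>
        <id>eap</id>
        <!-- no Weld dependency needed; EAP provides CDI -->
    </profile>
</profiles>
One profile could then be activated per environment, for example with mvn package -P tomcat.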
|
https://developer.jboss.org/thread/270037
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
How to configure Exchange Server on-premises to use Hybrid Modern Authentication
Hybrid Modern Authentication (HMA), is a method of identity management that offers more secure user authentication and authorization, and is available for Exchange server on-premises hybrid deployments.
FYI
Before we begin, I call: HMA on.
Adding on-premises web service URLs as Service Principal Names (SPNs) in Azure AD.
Pre-reqs
Since many prerequisites are common for both Skype for Business and Exchange, review Hybrid Modern Authentication overview and prerequisites for using it with on-premises Skype for Business and Exchange servers. Do this before you begin any of the steps in this article. In addition, the web service URLs clients may connect to must be registered in Azure Active Directory (AAD); this includes both internal and external namespaces.
First, gather all the URLs that you need to add in AAD. Run these commands on-premises:
Get-MapiVirtualDirectory | FL server,*url*
Get-WebServicesVirtualDirectory | FL server,*url*
Get-ActiveSyncVirtualDirectory | FL server,*url*
Get-OABVirtualDirectory | FL server,*url*
Ensure the URLs clients may connect to are listed as HTTPS service principal names in AAD.
First, connect to AAD with these instructions. Then compare the output against the https:// URLs from your on-premises environment; any that are missing will need to be added as records to this list.
- If you don't see your internal and external MAPI/HTTP, EWS, ActiveSync, OAB and Autodiscover records in this list, you must add them using the relevant commands (the example URL used below is 'mail.corp.contoso.com'). Then verify the additions by running the command from step 2 again and looking through the output, comparing the earlier list / screenshot of SPNs to the new one; if a URL is still missing, you need to add it using the relevant commands before proceeding.
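The exact commands were lost from this copy of the article. A sketch using the MSOnline module might look like the following; verify the cmdlets and the Exchange Online application ID against the current Microsoft documentation before running them:
# List the SPNs currently registered for the Exchange Online application:
Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000 |
    Select-Object -ExpandProperty ServicePrincipalNames

# Add a missing on-premises URL (example host name) and write the list back:
$sp = Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000
$sp.ServicePrincipalNames.Add("https://mail.corp.contoso.com/")
Set-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000 -ServicePrincipalNames $sp.ServicePrincipalNames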
Confirm the EvoSTS Auth Server Object is Present
Return to the on-premises Exchange Management Shell for this last command. Now you can validate that your on-premises has an entry for the evoSTS authentication provider:
Get-AuthServer | where {$_.Name -eq "EvoSts"}
Your output should show an AuthServer of the Name EvoSts and the 'Enabled' state should be True. If you don't see this, you should download and run the most recent version of the Hybrid Configuration Wizard.
Important If you're running Exchange 2010 in your environment, the EvoSTS authentication provider won't be created.
Enable HMA
Run the following command in the Exchange Management Shell, on-premises:
Set-AuthServer -Identity EvoSTS -IsDefaultAuthorizationEndpoint $true
Set-OrganizationConfig -OAuth2ClientProfileEnabled $true
Related topics
Hybrid Modern Authentication overview and prerequisites for using it with on-premises Skype for Business and Exchange servers
|
https://docs.microsoft.com/en-us/office365/enterprise/configure-exchange-server-for-hybrid-modern-authentication?redirectSourcePath=%252fbg-bg%252farticle%252f%2525D0%2525BA%2525D0%2525B0%2525D0%2525BA-%2525D0%2525B4%2525D0%2525B0-%2525D0%2525BA%2525D0%2525BE%2525D0%2525BD%2525D1%252584%2525D0%2525B8%2525D0%2525B3%2525D1%252583%2525D1%252580%2525D0%2525B8%2525D1%252580%2525D0%2525B0%2525D1%252582%2525D0%2525B5-%2525D1%252581%2525D1%25258A%2525D1%252580%2525D0%2525B2%2525D1%25258A%2525D1%252580-%2525D0%2525BD%2525D0%2525B0-exchange-%2525D0%2525BB%2525D0%2525BE%2525D0%2525BA%2525D0%2525B0%2525D0%2525BB%2525D0%2525BD%2525D0%2525BE-%2525D0%2525B4%2525D0%2525B0-%2525D0%2525B8%2525D0%2525B7%2525D0%2525BF%2525D0%2525BE%2525D0%2525BB%2525D0%2525B7%2525D0%2525B2%2525D0%2525B0%2525D1%252582%2525D0%2525B5-%2525D1%252585%2525D0%2525B8%2525D0%2525B1%2525D1%252580%2525D0%2525B8%2525D0%2525B4%2525D0%2525BD%2525D0%2525B8-%2525D0%2525BC%2525D0%2525BE%2525D0%2525B4%2525D0%2525B5%2525D1%252580%2525D0%2525B5%2525D0%2525BD-%2525D1%252583%2525D0%2525B4%2525D0%2525BE%2525D1%252581%2525D1%252582%2525D0%2525BE%2525D0%2525B2%2525D0%2525B5%2525D1%252580%2525D1%25258F%2525D0%2525B2%2525D0%2525B0%2525D0%2525BD%2525D0%2525B5-cef3044d-d4cb-4586-8e82-ee97bd3b14ad
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
....
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax1 = fig.add_subplot(111, projection='3d')

xpos = [1,2,3,4,5,6,7,8,9,10]
ypos = [2,3,4,5,1,6,2,1,7,2]
num_elements = len(xpos)
zpos = [0,0,0,0,0,0,0,0,0,0]
dx = np.ones(10)
dy = np.ones(10)
dz = [1,2,3,4,5,6,7,8,9,10]

ax1.bar3d(xpos, ypos, zpos, dx, dy, dz, color='#00ceaa')
plt.savefig('test.png')
Asked: 2014-06-21 03:13:52 -0500
Seen: 402 times
Last updated: Jun 21 '14
Error when trying to access MATLAB from Sage
can I create isosurface contours from list data?
Convert a sage.interfaces.matlab.MatlabElement object
Plot Series of 3D Direction Vectors (Not All from Origen)
z-transform and inverse z-transform in SageMath
filling in an area under a function or curve in 3 dimensions
Adding arrows in vector fields
How to install seaborn in sagemath cloud?
plotting a plane section in sage
canvas3d plot does not change.
|
https://ask.sagemath.org/question/10938/is-there-a-bar3-from-matlab-equivalent-in-sage/?answer=16133
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Pass function through the QML ListModel
import QtQuick 2.4
import QtQuick.Controls 1.1

Item {
    Column {
        TextArea {
            id: txt
            height: 20
        }
        Repeater {
            model: ListModel {
                Component.onCompleted: {
                    append({index: 0, f: function() { txt.forceActiveFocus() }})
                }
            }
            Component {
                id: d
                Button {
                    text: "btn"
                }
            }
            delegate: Loader {
                sourceComponent: d
                onLoaded: {
                    item.onClicked.connect(f)
                }
            }
        }
    }
}
The button, generated from a ListModel, should be able to trigger an invokable that is specified in the model. How to do that?
This example should get TextArea focused when the button is clicked. Instead, it gives: 'Error: Function.prototype.connect: target is not a function'.
(In the real example, there is a C++ object with Q_INVOKABLE instead of TextArea. And the ListModel has a couple of additional fields and functions to pass.)
- p3c0 Moderators
Hi @devel,
AFAIK
ListElement roles cannot have scripts as their values.
Values must be simple constants; either strings (quoted and optionally within a call to QT_TR_NOOP), boolean values (true, false), numbers, or enumeration values (such as AlignText.AlignHCenter).
ListModel.append() was a workaround to overcome this limitation: at least the ids can be passed with it.
I'm going to pass the Action objects there instead of plain functions for now.
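As a hypothetical sketch of that id-based workaround (not from the original thread), the model could store a plain string role and the delegate could map it back to the real call at click time:
ListModel {
    id: actions
    Component.onCompleted: append({ actionId: "focusText" })
}
Repeater {
    model: actions
    delegate: Button {
        text: "btn"
        onClicked: {
            // map the string role back to the actual behaviour
            if (actionId === "focusText")
                txt.forceActiveFocus()
        }
    }
}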
|
https://forum.qt.io/topic/67498/pass-function-through-the-qml-listmodel
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
GameFromScratch.com
Picking up where our last section left off, by the end of this chapter you will be able to send a high score across the wire and process it in node. In order to do this, we need to create a common format both our client and server understand. In this particular case we are going to use JSON. If you have done any recent web programming, you are probably familiar with JSON already. Basically JSON is a micro-format designed for transferring data on the web with a lighter footprint than XML. All told it is a pretty simple format, here is the JSON we are going to use for storing high scores:
{
"Scores" :
[
{"Name" : "Mike", "Score" : 2},
{"Name" :"Bob", "Score" : 14},
{"Name" :"Steve", "Score" : 12},
{"Name" :"John", "Score" : 10},
{"Name" :"Henry", "Score" : 8}
]
}
This JSON represents an object named “Scores” composed of an array of 5 objects that in turn are made up of a string field “Name” and an integer field “Score”. Javascript and JSON go together like peanut butter and jam, but what about C++? Well, you could encode your data into a string with very little effort ( one of the big advantages of JSON ), but “little effort” is still effort, and I’m a lazy guy! Therefore we are going to use an existing library. I wanted a light weight and extremely simple JSON library, so I went with the aptly named SimpleJSON. Installation really couldn’t be simpler, just add the 4 files ( 2 headers, 2 source ) to your project and you are done.
Now lets take a look at our SFML client. It is going to be a simple command line utility for now, from a dos prompt simply pass in the name and high score as parameters, and it will send them across to the node server. Lets take a look at Scoreboard.cpp:
#include "SFML/Network.hpp"
#include "JSON.h"
#include <iostream>
int main(int argc, char* argv[])
{
if(argc != 3)
{
std::cout << "Invalid usage, proper format is player name then score, for example:" << std::endl;
std::cout << "Scoreboard \"Player Name\" 42" << std::endl;
return -1;
}
sf::IPAddress ip("127.0.0.1");
sf::SocketUDP socket;
sf::Packet packet;
JSONObject data;
data[L"action"] = new JSONValue(L"AddScore");
data[L"name"] = new JSONValue(std::wstring(argv[1],argv[1] + strlen(argv[1])));
data[L"score"] = new JSONValue(atof(argv[2]));());
unsigned short port = 1000;
if(socket.Send(packet,ip,port) != sf::Socket::Done)
{
std::cout << "An error ocurred sending packet" << std::endl;
}
socket.Close();
return 0;
}
One annoyance of the library I chose for JSON is it works with UTF-8 wide strings, but the string that we send we want encoded as standard ascii, so there is a bit of gunk as we create the JSON object using wide character strings, then after turning it into a JSON string, we encode it back to ascii. Otherwise the code is quite straight forward.
First we verify we got the proper number of command line arguments, declare our various SFML and JSON related variables. We are setting the ip address to 127.0.0.1, which is the loopback address, or the equivalent of saying “this machine”. Next we build up our JSON string. If you have worked with XML before, the process will be very familiar. We create a JSONObject named data, which is essentially a map of key value pairs of other JSONValues. When then populate it with our data, then in turn use it as the parameter in creating a new JSONValue. All the heavy lifting is done in JSONValue’s constructor. Stringify() is the method that does the actual re-encoding returning a std::wstring. Of course, we actually want a std:string, so we create one. Obviously in time sensitive code, we would alter SimpleJSON to use std::string instead. Our end result is a JSON string that looks like this:
{"action":"AddScore","name":"Bob Dole","score":23}
Now that we have our data in JSON encoded string format, it’s time to send it. We simply append our string data to our packet and send it using our Socket. If an error occurs, report it. Otherwise, we are done, close our Socket and exit. If you strip away all the wide character string annoyances, the process is actually quite straight forward.
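The listing above lost the lines that perform the Stringify, wide-to-ascii conversion and packet append described here; a sketch of what they might have looked like is below (the SimpleJSON and SFML calls are assumed from the description, not copied from the original article):
// Wrap the JSONObject in a JSONValue and serialize it to a wide string
JSONValue *value = new JSONValue(data);
std::wstring wide = value->Stringify();

// Narrow the wide string down to plain ascii for transmission
std::string ascii(wide.begin(), wide.end());

// Append the JSON text to the SFML packet before sending it
packet.Append(ascii.c_str(), ascii.length());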
Now lets take a look at the Node side of things. The code is fairly long, so instead of walking through it I have simply commented it. If you have any questions not covered by the comments, fire away. So here is the contents of Server.js
var dgram = require('dgram'),
fileSystem = require('fs'),
highScores,
server;
//Load high scores from file
fileSystem.readFile("HighScores.txt", 'utf8', function(err,data){
if(err){
//Error occurred loading file, spit out error message then die
console.log("Error occurred loading file");
process.exit();
}
console.log("Loading high scores from file");
try{
// use JSON to turn file contents back into a Javascript array object
highScores = JSON.parse(data);
}catch(e)
{
// Exception occurred parsing file contents as JSON, error out and die.
console.log("Exception occured parsing data");
process.exit();
}
// Now sort the high scores by score, high to low
highScores.Scores.sort(function(a,b){
return b.Score - a.Score;
});
// Display the sorted high scores to the console
console.log(highScores);
});
//Alternative way to read file in NodeJS
//file.on("error",function(exception){
// process.exit();
// }
//);
//file.on("data",function(data){
// fileData = data;
// }
//);
//file.on("close",function(){
// highScores = JSON.parse(fileData);
//});
//Create a UDP socket
server = dgram.createSocket('udp4');
console.log("Socket created");
// Add a handler for incoming traffic on the socket. This will be called each time something connects to the socket
server.on("message",function (msg,rinfo) {
//console.log(parseInt(msg).toString());
console.log(rinfo);
// SFML sends two packets, one with the size of the following packet ( as a 4 byte number )
// We don't need it, so as a crude-hack, we ignore any 4 byte packets
if(rinfo.size != 4)
{
console.log("Received message:" + msg.toString());
// Socket data comes in as a JSON encoded array of objects, turn back into a JS object
var jsonData,i;
try{
jsonData = JSON.parse(msg);
}
catch( exception ) {
console.log("Invalid JSON request received");
return; // Non lethal error, just stop processing packet
}
// The action parameter determines what you should do with this packet
switch(jsonData.action)
{
// action==AddScore, add the score to the highscore array if it's higher than an existing score
case "AddScore":
console.log("AddScore called\n");
// Make sure highscore has been initialize... order can be a weird thing in node
if(highScores != undefined){
// Loop through current highScores ( which should be sorted )
// and insert score if a lower match found
for(i=0;i < highScores.Scores.length;++i)
{
if(highScores.Scores[i].Score < jsonData.score){
highScores.Scores.splice(i,0,{"Name" : jsonData.name, "Score" : jsonData.score});
console.log("Inserted highscore by: " + jsonData.name);
break; // match found, stop looping
}
}
}
// Display newly created highscore array
console.log(highScores.Scores);
break;
}
}
//
//
});
// Called when socket starts listening for packets. besides logging, currently serves no purpose
server.on("listening", function () {
var address = server.address();
console.log("server listening " +
address.address + ":" + address.port);
});
// Finally, bind the server to port 1000. 1000 was randomly chosen. Think of this as saying GO!
// Now we are listening for UDP connections on port 1000
server.bind(1000);
Now start the server at the command line ( node server.js ) and run the client from a different command line. It should look like this:
As you can see, we are successfully sending data from SFML over a socket to our node based server. In the next part, we will look at sending data the other way.
You can download the complete project right here. The scripts are in a sub-folder named NodeJS.
Click for part 3
Programming
SFML Node
CryTek’s freely available CryEngine has just had an update released.
Easily the biggest new feature is integration of Autodesk Scaleform, a Flash based UI solution, that was also recently integrated in the competing Unity engine. In addition to Scaleform support, they announced a number of other new features; download it here.
News
CryEngine.
Blender 2.62 was released today. The 2.6 series of releases is all about adding in the various branches that have been in development recently and this one is no exception. Key new features include:
For full details, you can go here with the bug fixes listed here. To download the newest release head on over here. Have some patience though, as always with every new release days, their servers are getting hammered.
The next release (in April) is the one I am really waiting for, as it’s the one that finally adds BMesh support! There is also a new team focusing on improving COLLADA support. The future is looking extremely good!
Nice work Blender team, keep ‘em coming!
Art News
Blender
|
http://www.gamefromscratch.com/2012/02/default.aspx
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Simple tool for interacting with AWS in Python
Project description
FabulAWS began as a tool to create ephemeral EC2 instances using Python, like so:
from fabulaws.ec2 import MicroLucidInstance

with MicroLucidInstance():
    run('uname -a')
FabulAWS is now a fully-featured tool for deploying Python web applications to autoscaling-enabled AWS EC2 environments.
Please refer to the documentation for details.
Development by Caktus Consulting Group.
|
https://pypi.org/project/fabulaws/0.3.0a18/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Introduction
This document provides an overview of EPiServer Full-Text Search Service (FTS Service). The FTS Service is a stand-alone WCF REST Service on top of an unmodified version of the open source search engine Lucene .NET and uses Atom with extensions described in Atom and Extensions as the data protocol. The service can be deployed from EPiServer Deployment Center and is automatically installed as part of an EPiServer Relate installation. Services are hosted by IIS and are invoked by the LuceneService.svc file, which means that multiple services can exist within the same environment. One service can also host multiple indexes as described in Named Indexes and Multi-Search. A typical setup is shown below:
Assemblies and Namespaces
The EPiServer.Search.IndexingService assembly contains the following namespaces:
- EPiServer.Search.IndexingService contains core classes for the EPiServer FTS Service, most notably are the NamedIndex,IndexingService and IndexingServiceHandler classes.
- EPiServer.Search.IndexingService.Configuration contains configuration classes for the EPiServer FTS Service.
- EPiServer.Search.IndexingService.Security contains core classes for client authentication
- EPiServer.Search.IndexingService.IndexFieldSerializers contains core classes for serializing and de-serializing Atom feed items and Lucene Document Fields.
Service REST API
The EPiServer FTS Service defines REST endpoints for updating the index, searching, listing named indexes and resetting a named index.
Feature Highlights
ReferenceId
When the ReferenceId extension contains a value, the EPiServer FTS Service will perceive this as not a standalone item, but an item that belongs to another item already in the index. An example is comments in EPiServer Community. The default behavior there is that when a comment is added to the index, the ReferenceId is set to the commented entity, and when a query expression matches the contents of the comment, the commented entity will be returned as the IndexResponseItem and not the comment itself.
Any IndexRequestItem with IndexAction.Add and with a ReferenceId set will be added to the reference index automatically created for all configured indexes with the suffix “_ref”. The search engine will internally find the parent item corresponding to the ReferenceId (by ID) and update it so that all contents of any items in the “_ref” index for the parent item are added to the parent item's searchable meta data. In this way there is no need to send all data over and over when updating the entity client side. This means that you cannot search a specific field in reference data, since all default searchable fields are chunked together and added to the main item's metadata.
You cannot update the reference id property for an item once it is added. All items with the IndexAction Update or Remove will automatically update their parent in the index if they were originally added with the reference ID set.
Content in files will be indexed; the installed IFilters decide which files will be included.
DataUri
When the DataUri extension contains a value, the EPiServer FTS Service will retrieve the content to index from the provided URI. The behavior may be overridden by overriding the GetFileUriContent(Uri) method and/or GetNonFileUriContent(Uri) in the IndexingServiceHandler.
Named Indexes and Multi-Search
The EPiServer FTS Service allows for configuring multiple indexes, which can be used where there is an obvious separation of indexed content or where there is an existing (Lucene compatible) index that is updated from a different source. Multiple named indexes need to be configured so that the fields for the index documents map to the pre-defined field names in the service, see the Configuring FTS Service document.
When updating the index, the target index (by name) is specified in the NamedIndex attribute extension of the Atom formatted request. When searching the index, multiple indexes may be specified in the request and the EPiServer FTS Service will search each one of the specified index and return a merged result set. If no named index is specified, the default index will always be used.
VirtualPath
The VirtualPath feature enables structuring of indexed content in a tree structure where searches can be scoped to a node: an item indexed under "node1/node2" would be a hit when searching for documents with the path "node1" or "node1/node2". However, it would not be considered if the path only specified "node2". If the IndexRequestItem.AutoUpdateVirtualPath attribute is set when updating or removing an item, the VirtualPath is used to update or remove all items under the provided node.
Paging
Paging of search results can be made at the service or at the client. The default configuration (useIndexingServicePaging=true) states that the paging should be done in the service, hence returning at most a number of items equal to the passed pageSize. Changing this setting to client paging can potentially imply that the maximum number of items returned by the service is equal to the configured maxHitsFromIndexingService (default 500). The client paging option may be considered when filter providers are configured and paging needs to stay intact.
Limitations
The EPiServer FTS Service only handles plain text and does not understand any markup language such as HTML, and thus cannot calculate relevance based on markup (such as <h1>, <b> etc).
The EPiServer FTS Service does not do web crawling nor any automatic updates. All indexed content is pushed into the service and thus handing over all the responsibility of keeping the index updated to the client.
Third Party Search Engines
The loose coupling between the FTS Client and Service allows third party search engines to implement solutions compatible with the FTS Client independent of platform. The only requirement is to comply with the REST service endpoint specifications. For .NET environments, this can be done by overriding the UpdateIndex, GetSearchResults, GetNamedIndexes and ResetNamedIndex methods in the IndexingServiceHandler, thus reusing the existing WCF REST service implementation.
|
https://world.episerver.com/documentation/Items/Developers-Guide/Episerver-Framework/7/Search/About-EPiServer-Full-Text-Search-Service/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
A lot of people add their own context processor but there is a better and safer way to do this in Django.
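For contrast, the custom context processor that many projects add looks roughly like this (names are illustrative, not from the original post):
# myapp/context_processors.py
from django.conf import settings

def debug_flag(request):
    # exposes settings.DEBUG to every template, regardless of who is asking
    return {"DEBUG": settings.DEBUG}

# settings.py: add "myapp.context_processors.debug_flag" to
# TEMPLATES[0]["OPTIONS"]["context_processors"]
The built-in debug variable described below is safer because it is only set when the requesting IP is listed in INTERNAL_IPS.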
The necessary steps
- Set DEBUG = True in your settings.py
- Add your current IP to INTERNAL_IPS in your settings.py
Your settings.py file
# settings.py

# Set DEBUG = True
DEBUG = True

# Add the IP of your computer
INTERNAL_IPS = ["127.0.0.1"]
And in your template
{% if debug %}
    <script src="{{ APP_URL }}js/app.js?_={{ RELEASE_TAG }}"></script>
{% else %}
    <script src="{{ APP_URL }}js/app.min.js?_={{ RELEASE_TAG }}"></script>
{% endif %}
Why is this better than creating a custom context processor?
This will make sure to serve only the non-debug parts of your site if you accidentally were to push to your production environment with DEBUG = True.
You can think of it as an extra safety measure so that your site/app does not leak sensitive data!
Read more in the docs
|
https://tech.willandskill.se/debug-mode-in-django-templates/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Demystify Backbone.js Series
Links of interest:
- BAREBONE.js source code — barebone implementation of Backbone.js
- RSVP App — basic RSVP app built with BAREBONE.js (check it out!)
- RSVP App source code — source code for RSVP app
- Backbone.js source code — awesomely annotated source code by Jeremy Ashkenas
Web Frameworks
Web frameworks exist to support the development of dynamic web applications by abstracting away the details for common activities.
Code Modularity
Many follow the MVC architectural pattern which keeps code modular and promotes separation of concerns. The model (M) contains the data and the logic surrounding it. The view (V) is the output representation of information. The controller (C) accepts user input and translates it into intent for the model or view.
Some popular front-end frameworks are Backbone, Angular, and React. Each framework/library takes a different approach. All are abstractions of the JavaScript language, but the amount to which they look and feel like JavaScript differs greatly.
Web frameworks may seem like these magical black boxes of code, wrapped in a handy API that makes them easy to use. A majority of developers stay at the API level — meaning they learn the syntax required to get the job done and call it a day.
If you’re like me, perhaps you’re able to appreciate a good magic trick here and there, but you’re left feeling curious and/or unsatisfied with an itch to understand what the hell happened!
Backbone.js Under the Hood
Backbone was my first introduction to an MVC framework and I have a deep appreciation for its relative simplicity and light-weight nature.
What’s awesome about it is that if you have a solid understanding of JavaScript fundamentals, you can easily understand how the framework operates. You’ll be able to jump into the Backbone.js source code (which is beautifully annotated) and grasp the inner workings of the framework.
Don’t be deterred by this notion of vast complexity under the hood (as I was initially). The Backbone.js source code is likely more approachable than you might think. In fact, you could probably write your own mini-implementation of Backbone.js no sweat!
If that’s something that interests you, read on! That’s what we’ll do here together :)
Backbone.js Components
These are the core components of the Backbone.js framework:
- BAREBONE.js Models
- BAREBONE.js Collections
- BAREBONE.js Event System
- BAREBONE.js Views (view-controller)
- BAREBONE.js extend method
Barebone Backbone.js framework
Throughout this post, we will cover the basic functionality of Backbone.js (models, collections, and views). Together we will construct pieces of a mini-framework called BAREBONE.js (a barebone reimplementation of Backbone.js). You can view the annotated source for BAREBONE.js here. It’s only about 120 lines of code!
Use it to create an RSVP application
In this blog series, together we will construct an RSVP application using the framework code that we write together (view app here). The annotated source code for the project can be found here.
You will get insight into MVC framework architecture and understand the inner workings of Backbone.js, meanwhile reviewing core features of the JavaScript language and solidifying your understanding and implementation of JS fundamentals.
Goals and takeaways
Demystify Backbone.js
- shed light on what’s going under the hood (it’s not magic)
- understand how the components of the framework work together
Takeaway: Understanding Backbone.js and other frameworks is more approachable than one might think!
Review JavaScript Fundamentals
- scopes & closures
- this keyword
- instantiation
- subclassing
- event system
Takeaway: If you understand these, you can write your own MVC framework!
BAREBONE.js source code:
Posts in this series:
Are you ready to dive in?
We will start off by creating a closure to avoid polluting the global scope by wrapping the framework code in an IIFE (immediately invoked function expression) and placing the BAREBONE object on the window object.
Function Scope and IFFEs
JavaScript (the current version ES5) doesn’t have block scope, it only has function scope. So in order to create private variables (not in the global scope) we need to declare them in a function. Any variables declared in a function are only accessible within that function’s scope.
Since we are only using the function for its scope, there’s no need to name it and invoke it at a later point. We can simply create an anonymous function and immediately invoke it, hence the term IIFE (immediately invoked function expression).
We’ll put all of our classes (Events, Model, Collection, View) on the ‘BAREBONE’ namespace which we will stick on the window object. We do this so that we can access it from the global scope. The BAREBONE object will be the only variable from the entire library that we can access from the global scope.
/* BAREBONE.js source code */
(function() {
var BAREBONE = {};
BAREBONE.Events = function(attributes) { ... };
BAREBONE.Model = function(attributes) { ... };
BAREBONE.Collection = function(models) { ... };
BAREBONE.View = function(options) { ... };
window.BAREBONE = BAREBONE;
})();
Cool, so now let’s take a closer look at BAREBONE.js Models.
Next: BAREBONE.js Models!
Next we will explore BAREBONE.js Models and how they allow us to store our data and the logic that surrounds it.
|
https://medium.com/@katrinauychaco/demystify-backbone-js-series-introduction-736ee355cb08
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Redux Simplified
Flux is a pattern proposed by Facebook to build ReactJS applications. Flux is not a library but a pattern, like MVC. Flux mandates a uni-directional data flow.
A typical flux implementation has three components.
Action : ‘Action’ represents a certain action that the application can perform. For example, a bookmark application can allow users to ‘Add a bookmark’. So ‘Add a bookmark’ is an action.
Dispatcher : A dispatcher is a component which dispatches the actions to the store.
Store : Store holds the state. The store handles various actions and changes the state based on handling these actions. In a typical flux implementation, we could have multiple stores in an application.
Redux
Redux is a flux library which helps to manage application state in JavaScript applications. Even though it’s mostly used with ReactJS, it’s a framework agnostic solution for JavaScript state management. This post aims to introduce redux concepts with a very minimalistic example.
Three fundamental parts of Redux
- The State
- Actions
- Reducers
Redux uses only a single store
Redux doesn’t have a dispatcher component
The State
But why? When you build web applications, multiple components are interested in the same piece of information, for example whether the user is authorized or not. So a global state is an ideal way to store such information so each individual UI component can access it.
Redux maintains state in a single atom (Single object)
Actions
User interface can trigger actions. When an action is triggered, it could change the application state.
An action in Redux has two properties: the type of the action and the data required to change the state. The type of the action is a string value, and this value should be unique among all the actions; two actions can’t have the same type.
An Example redux action. The type of the below action is ‘SET_AUTHORIZE’ and the data is ‘true’.
{ type: 'SET_AUTHORIZE', data: true }
Action Creator
An action creator is the place where an action is created. An action creator is simply a function which returns an action object. SetAuthorization() is an action creator which creates the ‘SET_AUTHORIZE’ action.
function SetAuthorization() { return { type: 'SET_AUTHORIZE', data: true } }
Reducers
So far we have a state and actions with the information on how to change the state. But who is going to change the state? Reducer is.
Despite the fancy name, Reducer is just another function with a switch case.
But the very important part is that a reducer is responsible for managing a branch of your global state. If we go back to our previous global state, we need two reducers to manage it, as follows.
- App Reducer - To manage the state.app branch or sub state
- User Reducer - To manage the state.user branch or sub state
Each reducer will have it’s initial state. This initial state is the part of the global state or sub-state they are responsible of managing.
So what would a reducer look like? Let's look at the App Reducer, which handles the 'SET_AUTHORIZE' action.
const initialStateOfAppReducer = {
  name: 'My Awesome Application',
  version: '1.0',
  authorized: false
};

export default function appReducer(state = initialStateOfAppReducer, action) {
  switch (action.type) {
    case 'SET_AUTHORIZE':
      return Object.assign({}, state, { authorized: action.data });
    default:
      return state;
  }
}
You might ask what
Object.assign({}, state, { authorized: action.data }); is all about. Why can't we just do
state.authorized = action.data;?
Well, we can't, because
state.authorized = action.data; directly mutates the state, which is not allowed in Redux. What we should do instead is return a new state object, which is exactly what
Object.assign({}, state, { authorized: action.data }); does.
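To see how these pieces fit together, here is a minimal sketch of how the single store could be assembled and used. It assumes the appReducer above plus a hypothetical userReducer for the user branch, and the import path for appReducer is made up for the example; combineReducers, createStore, dispatch, and getState are the standard redux package APIs.
import { combineReducers, createStore } from 'redux';
import appReducer from './reducers/appReducer'; // hypothetical path

// A hypothetical reducer for the 'user' branch of the global state.
const initialStateOfUserReducer = { firstName: '', lastName: '' };
function userReducer(state = initialStateOfUserReducer, action) {
  switch (action.type) {
    default:
      return state;
  }
}

// One root reducer, one store: there is no separate dispatcher component,
// actions are dispatched directly on the store.
const rootReducer = combineReducers({ app: appReducer, user: userReducer });
const store = createStore(rootReducer);

store.dispatch({ type: 'SET_AUTHORIZE', data: true });
console.log(store.getState().app.authorized); // true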
Container Component
React Dumb Component
A dumb React component doesn't really care where its data comes from. A dumb component accepts everything through props.
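For illustration, a dumb Home component might look roughly like the sketch below. This is an assumption made for this post: the name and SetAuthorization props are chosen to line up with the container example further below, and the markup is invented.
import React from 'react';

// A hypothetical dumb component: it renders whatever it is handed
// through props and knows nothing about Redux or the store.
export default function Home({ name, SetAuthorization }) {
  return (
    <div>
      <h1>{name}</h1>
      <button onClick={() => SetAuthorization()}>Authorize</button>
    </div>
  );
}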
React Smart Component
A container component is also known as a smart component. A container component provides the information needed by the dumb components.
Redux Container Component
A Redux container component connects the dumb components with actions and the central state. To do this, react-redux provides a 'connect' method. The connect method accepts two functions that tell it how to map the state and the actions onto the dumb component's props.
import { bindActionCreators } from 'redux';
import { connect } from 'react-redux';
import Home from '../components/Home'; // Dumb home component
import { SetAuthorization } from '../actions/actions';

function mapStateToProps(state) {
  return {
    name: state.app.name
  };
}

function mapDispatchToProps(dispatch) {
  return bindActionCreators({ SetAuthorization: SetAuthorization }, dispatch);
}

export default connect(mapStateToProps, mapDispatchToProps)(Home);
In Summary
Redux is an opinionated Flux implementation that is widely adopted by the React community. Redux helps you develop easily maintainable, large-scale React applications. The Redux video tutorials created by Dan Abramov, the mastermind behind Redux, are a great place to start.
Repository With Sample
You can also check out this GitHub repository, which has a very minimal React/Redux sample.
|
http://raathigesh.com/Redux-Simplified/
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Generally, users of regular expressions take one of two approaches: procedural-based or expression-based. We'll now explore the trade-offs in complexity and performance of these two approaches.
A common processing need is to match certain parts of a string and perform some processing. So, here's an example that matches words within a string and capitalizes them:
using System;
using System.Text.RegularExpressions;

class ProceduralFun {
  static void Main( ) {
    string txt = "the quick red fox jumped over the lazy brown dog.";
    Console.WriteLine("text=["+txt+"]");
    string res = "";
    string pat = @"\w+|\W+";
    // Loop through all the matches
    foreach (Match m in Regex.Matches(txt, pat)) {
      string s = m.ToString( );
      // If the first char is lower case, capitalize it
      if (char.IsLower(s[0]))
        s = char.ToUpper(s[0])+s.Substring(1, s.Length-1);
      res += s; // Collect the text
    }
    Console.WriteLine("result=["+res+"]");
  }
}
An alternative way to implement the previous example is by providing a MatchEvaluator, which processes it as a single result set.
So the new sample looks like:
using System;
using System.Text.RegularExpressions;

class ExpressionFun {
  static string CapText(Match m) {
    // Get the matched string
    string s = m.ToString( );
    // If the first char is lower case, capitalize it
    if (char.IsLower(s[0]))
      return char.ToUpper(s[0]) + s.Substring(1, s.Length-1);
    return s;
  }

  static void Main( ) {
    string txt = "the quick red fox jumped over the lazy brown dog.";
    Console.WriteLine("text=[" + txt + "]");
    string pat = @"\w+";
    MatchEvaluator me = new MatchEvaluator(CapText);
    string res = Regex.Replace(txt, pat, me);
    Console.WriteLine("result=[" + res + "]");
  }
}
Also of note is that the pattern is simplified, since we need only to modify the words, not the nonwords.
|
http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+II+Programming+with+the+.NET+Framework/Chapter+6.+String+Handling/6.5+Procedural-+and+Expression-Based+Patterns/
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
A channel for sending messages. More...
#include <sender.hpp>
A channel for sending messages.
Open the sender.
Open the sender.
Unsettled API - Return all unused credit to the receiver in response to a drain request.
Has no effect unless there has been a drain request and there is remaining credit to use or return.
Close the endpoint.
Suspend the link without closing it.
A suspended link may be reopened with the same or different link options if supported by the peer. A suspended durable subscription becomes inactive without cancelling it.
Unsettled API - True for a receiver if a drain cycle has been started and the corresponding
on_receiver_drain_finish event is still pending.
True for a sender if the receiver has requested a drain of credit and the sender has unused credit.
|
http://qpid.apache.org/releases/qpid-proton-0.21.0/proton/cpp/api/classproton_1_1sender.html
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Managed LookupId property of the term in the TaxonomyHiddenList list. The TaxonomyFieldType field XML below contains the GUID of the hidden column in its TextField property (highlighted in yellow in the original post).
<Field Type="TaxonomyFieldType" DisplayName="Language" List="{df9afe98-76ae-45ff-9367-e8c94fgb72c2}" WebId="e6b4333c-617b-4875-bd5d-10e745d5caa3" ShowField="Term1033" Required="FALSE" EnforceUniqueValues="FALSE" ID="{10c90ea3-94e8-453f-b046-ccf3d7b82c3b}" SourceID="{8b16fadf-a200-429d-9f77-b218863efdba}" StaticName="Language" Name="Language" ColName="int1" RowOrdinal="0" Version="1">
<Default />
<Customization>
<ArrayOfProperty>
  <Property>
    <Name>SspId</Name>
    <Value>a3c8fc5c-a9ca-46c5-b0c4-fd3f1d1b82f3</Value>
  </Property>
  <Property>
    <Name>GroupId</Name>
  </Property>
  <Property>
    <Name>TermSetId</Name>
    <Value>23cc4396-6de3-47a7-97a6-111675414126</Value>
  </Property>
  <Property>
    <Name>AnchorId</Name>
    <Value>00000000-0000-0000-0000-000000000000</Value>
  </Property>
  <Property>
    <Name>UserCreated</Name>
    <Value>false</Value>
  </Property>
  <Property>
    <Name>Open</Name>
    <Value>false</Value>
  </Property>
  <Property>
    <Name>TextField</Name>
    <Value>{6a8b96d5-78a1-4e98-875f-52bcd3ebd95a}</Value>
  </Property>
  <Property>
    <Name>IsPathRendered</Name>
    <Value>false</Value>
  </Property>
  <Property>
    <Name>IsKeyword</Name>
    <Value>false</Value>
  </Property>
  <Property>
    <Name>TargetTemplate</Name>
  </Property>
  <Property>
    <Name>CreateValuesInEditForm</Name>
    <Value>false</Value>
  </Property>
  <Property>
    <Name>FilterAssemblyStrongName</Name>
    <Value>Microsoft.SharePoint.Taxonomy, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</Value>
  </Property>
  <Property>
    <Name>FilterClassName</Name>
    <Value>Microsoft.SharePoint.Taxonomy.TaxonomyField</Value>
  </Property>
  <Property>
    <Name>FilterMethodName</Name>
    <Value>GetFilteringHtml</Value>
  </Property>
  <Property>
    <Name>FilterJavascriptProperty</Name>
    <Value>FilteringJavascript</Value>
  </Property>
</ArrayOfProperty>
</Customization>
</Field>
As stated previously, the TextField is of type Note. The XML for the TextField is similar to the following:
<Field Type="Note" DisplayName="Language_0" StaticName="LanguageTaxHTField0" Name="LanguageTaxHTField0" ID="{6a8b96d5-78a1-4e98-875f-52bcd3ebd95a}" ShowInViewForms="FALSE" Required="FALSE" Hidden="TRUE" CanToggleHidden="TRUE" SourceID="{8b13fadf-a200-429d-9f77-b218843efdba}" ColName="ntext2" RowOrdinal="0" />
Taxonomy Field Data
The data in taxonomy fields is stored as a WSS identifier followed by ;# and the term label (the exact format is shown in the code samples later in this post).
The WSS identifier is a 32-bit integer that uniquely identifies the TaxonomyHiddenList list item containing the term. This property behaves similarly to the LookupId property and is used as the lookup identifier on the TaxonomyHiddenList list.
Updating the Manage Metadata Column by Using the Client Object Model
To update the taxonomy column, the two associated columns need to be updated with appropriate values.
To update those columns, perform the following steps:
1. Find the WSS identifier of the term that needs to be set as the value. You can use the following code to get the value of WSS identifier of the term:
Microsoft.SharePoint.Client.List taxonomyList = clientContext.Site.RootWeb.Lists.GetByTitle("TaxonomyHiddenList");
CamlQuery camlQueryForTerm = new CamlQuery();
camlQueryForTerm.ViewXml = @"<View>
<Query>
<Where>
<Eq>
<FieldRef Name='IdForTerm'/>
<Value Type='Text'>" + TermGuidId + @"</Value>
</Eq>
</Where>
</Query>
</View>";
Microsoft.SharePoint.Client.ListItemCollection termItems = taxonomyList.GetItems(camlQueryForTerm);
clientContext.Load(termItems);
clientContext.ExecuteQuery();
TermGuidId is the GUID of the term to be set as the value.
2. If there is an entry in the TaxonomyHiddenList list for the given term then the values can be updated by using the WSS identifier. If the TaxonomyHiddenList list does not contain an entry for the given term then the scenario is different.
If there is no entry for the term, the WSS identifier must be set to -1. The code below shows the sample format in which to save the data:
The following is the code to update the value based on the CAML query results from step 1:
if(termItems.Count > 0)
{
Microsoft.SharePoint.Client.ListItem termItem = termItems[0];
splistItem[ColumnName] = termItem["ID"] + ";#English";
splistItem[TextColumnName] = "English |c61d9028-824f-446e-9389-eb9515813a42";
}
else
{
splistItem[ColumnName] = "-1;#English|c61d9028-824f-446e-9389-eb9515813a42";
splistItem[TextColumnName] = "English|c61d9028-824f-446e-9389-eb9515813a42";
}
splistItem.Update();
context.ExecuteQuery();
For metadata columns with multiple values, the values should be separated by ;#. For example, more than one value can be saved in the format given below, based on whether the terms already exist in the TaxonomyHiddenList list.
- If there is entry in the TaxonomyHiddenList list, use the following:
splistItem[ColumnName] = "2;#English;#3;#French";
splistItem[TextColumnName] = "English|c61d9028-824f-446e-9389-eb9515813a42;#
French|de1d9028-824f-556e-9389-ac9515813a56";
- If there is no entry in the TaxonomyHiddenList list, use the following:
splistItem[ColumnName] = "-1;#English|c61d9028-824f-446e-9389-eb9515813a42;#
-1;#French|de1d9028-824f-556e-9389-ac9515813a56";
splistItem[TextColumnName] = "English|c61d9028-824f-446e-9389-eb9515813a42;#French|de1d9028-824f-556e-9389-ac9515813a56";
In this post, I discussed how managed metadata columns can be handled by using the client object model to update the different columns with appropriate values.
Additional Resources
For more information on the topics covered in this post, see the following resources:
Managing Enterprise Metadata in SharePoint Server 2010 (ECM)
Using the Client Object Model
Metadata and Taxonomy Programming Model in SharePoint Server 2010 (ECM)
Great article. One question though, does the same approach work for Enterprise Keywords also? I see that there are two fields for them: TaxKeyword and TaxKeywordTaxHTField. However, setting both of them does not set the keyword value.
Any clues if setting Enterprise Keywords via the Client Object Model differs in any way from the normal managed metadata keywords?
Incredible article, it was an incredible time-saver for me!
Thanks!
Your post really saves me a lot of time. Thanks a lot!
Great article! Is there a way to discover the name of the hidden note field? The example you use of fieldnameTaxHTField0 only works for programmatically-created columns. If you hand-create the metadata column, the hidden note field has a guid as the name (without the dashes). How can I find out what that field name is?
Chris, you can get the hidden field name from the SchemaXml; the property name is TextField.
In my environment there are absolutely no columns ending with "TaxHTField0". I used Sharepoint Manager 2010 to have a look and I can see that for each Managed Metadata column I added to my list, I have another column with the same name ending with "_0". Whatever I tried though, whatever the format I tried. Nothing worked and I don't even get any exception. Values are simply not assigned.
After looking at that for like 4 hours I decide to post a comment and find my solution right after posting!
Finally figured out that the columns I had, ending with "_0", are actually the columns with the Note column type that you are mentioning, but instead of having a static name ending with "TaxHTField0" mine have a completely random static name like "b2f2926f0257424bac1db0fa8cf8d4ef"
Your post really helped me a lot, Thanks!
Alex, Great that you found the solution. I just wanted to point out one thing, while finding the hidden column, try to find based on guid in TextField (Check the highlighted xml in the post). Finding column using guid is correct way rather than relying on static name.
Its exactly what I ended up doing! Thanks a lot!
Hi ,
This concept doesn't work for a document library managed metadata field.
Will you please kindly suggest on this?
Regards
yes it works
The concept worked for a custom list but was not working for a SharePoint document library. Can someone help me with this?
It should work with a document library. Please let me know the issue you are getting with the document library.
Not sure about a custom list as I am focusing on setting MMS field values in a document library which works, for me at least. However, what I have found is that if you try and set field values in both the taxonomy field and the hidden note field (as prescribed here) then the value is not set i.e. it does not work. It seems that all you have to do is to set the value in the taxonomy field. I presume some magic goes on with event handlers in the background.
So when working with document libraries the process seems to be a bit simpler as now you don't have to deal with the inconsistent naming of the internal name of the hidden note field which is sometimes the taxonomy field internal name with "TaxHTField0" appended and sometimes a Guid like string. As a point of interest it seems that when an internal name is a Guid like string then it is derived from the unique ID of the taxonomy field i.e. get the Guid of the taxonomy field as a string, remove all dashes and then replace the first character of the string (whatever it is) with 'h' (standing for hidden is my guess). Well that's my guess as to what's going on.
Excellent article BTW 🙂
After looking at this again, it appears that just setting the taxonomy field (and ignoring the hidden note field) works for document libraries in most cases, but it does not work consistently. I persevered with setting the prescribed value in the hidden note field and got it working. As with others, the main challenge was getting a reference to the associated hidden note field, as it seems that the field naming convention is inconsistent depending on whether the field was provisioned by code or through the UI.
As Kaushalendra points out the best approach is to get the Id of the associated TextField property which is stored in the SchemaXML property of the field, but how? To make life a bit easier I have abstracted this task out to a couple of extension methods which can be parked in a static utilities class somewhere.
using System.Linq;
using System.Xml.Linq;
using System.Xml.XPath;
using Microsoft.SharePoint.Client;
public static string SchemaXmlPropertyValue(this Field SourceField, string PropertyName)
{
XDocument xDoc = XDocument.Parse(SourceField.SchemaXml);
XElement PropertyElement = (from XElement xElem in xDoc.XPathSelectElements("//Property")
where xElem.Element("Name").Value == PropertyName
select xElem).FirstOrDefault();
if (PropertyElement == default(XElement) || PropertyElement.Element("Value") == null)
return string.Empty;
else
return PropertyElement.Element("Value").Value;
}
The above allows you to get any property value from the SchemaXML, whereas the following uses the above to return the TextField property as a Guid which is the Id of the hidden note field associated with the taxonomy field
public static Guid HiddenTextFieldId(this Field SourceField)
{
//Get the TextField property value as a string as stored in the SchemaXML, trim off the leading
//and trailing brace characters and create a Guid to return from the resulting string
try
{
if (string.IsNullOrEmpty(SourceField.SchemaXmlPropertyValue("TextField")))
return Guid.Empty;
else
return new Guid(SourceField.SchemaXmlPropertyValue("TextField").TrimStart('{').TrimEnd('}'));
}
catch
{
return Guid.Empty;
}
}
Then call the extension method like this:
Guid TextFieldId = TaxField.HiddenTextFieldId();
Field HiddenNoteField = ParentList.Fields.GetById(TextFieldId);
I have an issue with Taxonomy Metadata Columns where the title contains a space in it. For example, I have yet to populate the following column. However, I have been successful in populating the same type of column that contains no space in it.
InternalName: Job_x0020_Title, StaticName: Job_x0020_Title, Type: TaxonomyFieldType, ID: fbda96f1-3c1d-4f9d-b44b-488cfd405581, DisplayName: Job Title
Any ideas? I am using the copy web service CopyIntoItems.
|
https://blogs.msdn.microsoft.com/sharepointdev/2011/11/18/how-to-work-with-managed-metadata-columns-by-using-the-sharepoint-client-object-model-kaushalendra-kumar/
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Trenaman, Adrian wrote:
> First problem: I didn't realise that I needed to have a
> package-info.java in the package. Without this, my Person object (which
> represents the payload of the PUT) has null contents in the serverside
> code. Through a lot of trial and error I discovered that I hadn't
> included package-info.java file in the Java package (it's still not
> clear to me why I should need it...).
>
>
I hate how JAXB defaults to UNQUALIFIED, its rather silly.
Did you file a JIRA for this? Sounds like a bug in the schema parsing code.
> Second problem: for some reason CXF insists that the payload document's
> root element is not prefixed with an XML namespace prefix. For example;
> the following valid XML results in a server-side NullPointerException.
>
Did you file a JIRA for this? Sounds like a bug in an interceptor. I
would like to see the stack trace and see if we can get this fixed for
2.0.2.
>
> Third problem: in the update scenario, if I send the XML to /people/123
> then the ID (123) gets injected into the payload over the existing id
> (42). I think this behaviour, where we override the data in the payload,
> may lead to a lot of confusion: what if someone wants to update the id
> (or is that RESTful heresy)? what if someone has sent the payload to the
> wrong HTTP URL? Would it be better if we simply reject the call if the
> "injecting" parameters don't match the payload?
>
This is part of the problem of trying to fulfill the WSDL2 HTTP Binding.
I should probably look up the details of whats required here.
<pontification>
My original intention was to support the WSDL2 HTTP binding, but its
ended up in soooo many confusing things that I'm beginning to think the
whole HTTP binding as it currently stands is a mistake.
After playing with Jersey (the Sun JSR 311 impl) for a bit, I'm much
more inclined to go down that route. The WSDL model just doesn't work
with REST at all. Right now we have all sorts of weirdness -
wrapped/unwrapped mode being the biggest one. Its a source of
never-ending confusion to users.
Which brings us to the question - how do you properly integrate RESTful
support into a web service framework which operates on a WSDL model :-)
Regarding the future of our REST support, I suppose we have two roads to
go down:
1. Build our own JSR 311 impl. Cons: mapping the internal WSDL like
model to a RESTful one
2. Create bridges to something like Jersey. i.e. we would just proxy the
request. Although at this point, I'm a bit confused as to what value
we're really providing.
Thoughts?
</pontification>
- Dan
--
Dan Diephouse
MuleSource |
|
http://mail-archives.apache.org/mod_mbox/cxf-dev/200708.mbox/%[email protected]%3E
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Seam Types
The types of seams available to us vary among programming languages. The best way to explore them is to look at all of the steps involved in turning the text of a program into running code on a machine. Each identifiable step exposes different kinds of seams.
Preprocessing Seams
In most programming environments, program text is read by a compiler. The compiler then emits object code or bytecode instructions. Depending on the language, there can be later processing steps, but what about earlier steps?
Only a couple of languages have a build stage before compilation. C and C++ are the most common of them.
In C and C++, a macro preprocessor runs before the compiler. Over the years, the macro preprocessor has been cursed and derided incessantly. With it, we can take lines of text as innocuous looking as this:
TEST(getBalance,Account)
{
  Account account;
  LONGS_EQUAL(0, account.getBalance());
}
and have them appear like this to the compiler.
class AccountgetBalanceTest : public Test
{
public:
  AccountgetBalanceTest () : Test ("getBalance" "Test") {}
  void run (TestResult& result_);
} AccountgetBalanceInstance;

void AccountgetBalanceTest::run (TestResult& result_)
{
  Account account;
  {
    result_.countCheck();
    long actualTemp = (account.getBalance());
    long expectedTemp = (0);
    if ((expectedTemp) != (actualTemp))
    {
      result_.addFailure (Failure (name_, "c:\\seamexample.cpp", 24,
        StringFrom(expectedTemp), StringFrom(actualTemp)));
      return;
    }
  }
}
We can also nest code in conditional compilation statements like this to support debugging and different platforms (aarrrgh!):
...
m_pRtg->Adj(2.0);
#ifdef DEBUG
#ifndef WINDOWS
  {
    FILE *fp = fopen(TGLOGNAME,"w");
    if (fp) {
      fprintf(fp,"%s", m_pRtg->pszState);
      fclose(fp);
    }
  }
#endif
  m_pTSRTable->p_nFlush |= GF_FLOT;
#endif
...
It's not a good idea to use excessive preprocessing in production code because it tends to decrease code clarity. The conditional compilation directives (#ifdef, #ifndef, #if, and so on) pretty much force you to maintain several different programs in the same source code. Macros (defined with #define) can be used to do some very good things, but they just do simple text replacement. It is easy to create macros that hide terribly obscure bugs.
These considerations aside, I'm actually glad that C and C++ have a preprocessor because the preprocessor gives us more seams. Here is an example. In a C program, we have dependencies on a library routine named db_update. The db_update function talks directly to a database. Unless we can substitute in another implementation of the routine, we can't sense the behavior of the function.
#include <DFHLItem.h>
#include <DHLSRecord.h>

extern int db_update(int, struct DFHLItem *);

void account_update(
    int account_no, struct DHLSRecord *record, int activated)
{
  if (activated) {
    if (record->dateStamped && record->quantity > MAX_ITEMS) {
      db_update(account_no, record->item);
    } else {
      db_update(account_no, record->backup_item);
    }
  }
  db_update(MASTER_ACCOUNT, record->item);
}
We can use preprocessing seams to replace the calls to db_update. To do this, we can introduce a header file called localdefs.h.
#include <DFHLItem.h>
#include <DHLSRecord.h>

extern int db_update(int, struct DFHLItem *);
#include "localdefs.h"

void account_update(
    int account_no, struct DHLSRecord *record, int activated)
{
  if (activated) {
    if (record->dateStamped && record->quantity > MAX_ITEMS) {
      db_update(account_no, record->item);
    } else {
      db_update(account_no, record->backup_item);
    }
  }
  db_update(MASTER_ACCOUNT, record->item);
}
Within it, we can provide a definition for db_update and some variables that will be helpful for us:
#ifdef TESTING
...
struct DFHLItem *last_item = NULL;
int last_account_no = -1;

#define db_update(account_no,item) \
  {last_item = (item); last_account_no = (account_no);}
...
#endif
With this replacement of db_update in place, we can write tests to verify that db_update was called with the right parameters. We can do it because the #include directive of the C preprocessor gives us a seam that we can use to replace text before it is compiled.
Preprocessing seams are pretty powerful. I don't think I'd really want a preprocessor for Java and other more modern languages, but it is nice to have this tool in C and C++ as compensation for some of the other testing obstacles they present.
I didn't mention it earlier, but there is something else that is important to understand about seams: Every seam has an enabling point. Let's look at the definition of a seam again:
Seam
A seam is a place where you can alter behavior in your program without editing in that place.
When you have a seam, you have a place where behavior can change. We can't really go to that place and change the code just to test it. The source code should be the same in both production and test. In the previous example, we wanted to change the behavior at the text of the db_update call. To exploit that seam, you have to make a change someplace else. In this case, the enabling point is a preprocessor define named TESTING. When TESTING is defined, the localdefs.h file defines macros that replace calls to db_update in the source file.
Enabling Point
Every seam has an enabling point, a place where you can make the decision to use one behavior or another.
Link Seams
In many language systems, compilation isn't the last step of the build process. The compiler produces an intermediate representation of the code, and that representation contains calls to code in other files. Linkers combine these representations. They resolve each of the calls so that you can have a complete program at runtime.
In languages such as C and C++, there really is a separate linker that does the operation I just described. In Java and similar languages, the compiler does the linking process behind the scenes. When a source file contains an import statement, the compiler checks to see if the imported class really has been compiled. If the class hasn't been compiled, it compiles it, if necessary, and then checks to see if all of its calls will really resolve correctly at runtime.
Regardless of which scheme your language uses to resolve references, you can usually exploit it to substitute pieces of a program. Let's look at the Java case. Here is a little class called FitFilter:
package fitnesse;

import fit.Parse;
import fit.Fixture;
import java.io.*;
import java.util.Date;
import java.io.*;
import java.util.*;

public class FitFilter {

  public String input;
  public Parse tables;
  public Fixture fixture = new Fixture();
  public PrintWriter output;

  public static void main (String argv[]) {
    new FitFilter().run(argv);
  }

  public void run (String argv[]) {
    args(argv);
    process();
    exit();
  }

  public void process() {
    try {
      tables = new Parse(input);
      fixture.doTables(tables);
    } catch (Exception e) {
      exception(e);
    }
    tables.print(output);
  }
  ...
}
In this file, we import fit.Parse and fit.Fixture. How do the compiler and the JVM find those classes? In Java, you can use a classpath environment variable to determine where the Java system looks to find those classes. You can actually create classes with the same names, put them into a different directory, and alter the classpath to link to a different fit.Parse and fit.Fixture. Although it would be confusing to use this trick in production code, when you are testing, it can be a pretty handy way of breaking dependencies.
Suppose we wanted to supply a different version of the Parse class for testing. Where would the seam be?
The seam is the new Parse call in the process method.
Where is the enabling point?
The enabling point is the classpath.
This sort of dynamic linking can be done in many languages. In most, there is some way to exploit link seams. But not all linking is dynamic. In many older languages, nearly all linking is static; it happens once after compilation.
Many C and C++ build systems perform static linking to create executables. Often the easiest way to use the link seam is to create a separate library for any classes or functions you want to replace. When you do that, you can alter your build scripts to link to those rather than the production ones when you are testing. This can be a bit of work, but it can pay off if you have a code base that is littered with calls to a third-party library. For instance, imagine a CAD application that contains a lot of embedded calls to a graphics library. Here is an example of some typical code:
void CrossPlaneFigure::rerender()
{
  // draw the label
  drawText(m_nX, m_nY, m_pchLabel, getClipLen());
  drawLine(m_nX, m_nY, m_nX + getClipLen(), m_nY);
  drawLine(m_nX, m_nY, m_nX, m_nY + getDropLen());
  if (!m_bShadowBox) {
    drawLine(m_nX + getClipLen(), m_nY,
             m_nX + getClipLen(), m_nY + getDropLen());
    drawLine(m_nX, m_nY + getDropLen(),
             m_nX + getClipLen(), m_nY + getDropLen());
  }

  // draw the figure
  for (int n = 0; n < edges.size(); n++) {
    ...
  }
  ...
}
This code makes many direct calls to a graphics library. Unfortunately, the only way to really verify that this code is doing what you want it to do is to look at the computer screen when figures are redrawn. In complicated code, that is pretty error prone, not to mention tedious. An alternative is to use link seams. If all of the drawing functions are part of a particular library, you can create stub versions that link to the rest of the application. If you are interested in only separating out the dependency, they can be just empty functions:
void drawText(int x, int y, char *text, int textLength) {
}

void drawLine(int firstX, int firstY, int secondX, int secondY) {
}
If the functions return values, you have to return something. Often a code that indicates success or the default value of a type is a good choice:
int getStatus() { return FLAG_OKAY; }
The case of a graphics library is a little atypical. One reason that it is a good candidate for this technique is that it is almost a pure "tell" interface. You issue calls to functions to tell them to do something, and you aren't asking for much information back. Asking for information is difficult because the defaults often aren't the right thing to return when you are trying to exercise your code.
Separation is often a reason to use a link seam. You can do sensing also; it just requires a little more work. In the case of the graphics library we just faked, we could introduce some additional data structures to record calls:
std::queue<GraphicsAction> actions;

void drawLine(int firstX, int firstY, int secondX, int secondY)
{
  actions.push_back(GraphicsAction(LINE_DRAW,
      firstX, firstY, secondX, secondY));
}
With these data structures, we can sense the effects of a function in a test:
TEST(simpleRender,Figure)
{
  std::string text = "simple";
  Figure figure(text, 0, 0);
  figure.rerender();
  LONGS_EQUAL(5, actions.size());
  GraphicsAction action;
  action = actions.pop_front();
  LONGS_EQUAL(LABEL_DRAW, action.type);
  action = actions.pop_front();
  LONGS_EQUAL(0, action.firstX);
  LONGS_EQUAL(0, action.firstY);
  LONGS_EQUAL(text.size(), action.secondX);
}
The schemes that we can use to sense effects can grow rather complicated, but it is best to start with a very simple scheme and allow it to get only as complicated as it needs to be to solve the current sensing needs.
The enabling point for a link seam is always outside the program text. Sometimes it is in a build or a deployment script. This makes the use of link seams somewhat hard to notice.
Usage Tip
If you use link seams, make sure that the difference between test and production environments is obvious.
Object Seams
Object seams are pretty much the most useful seams available in object-oriented programming languages. The fundamental thing to recognize is that when we look at a call in an object-oriented program, it does not define which method will actually be executed. Let's look at a Java example:
cell.Recalculate();
When we look at this code, it seems that there has to be a method named Recalculate that will execute when we make that call. If the program is going to run, there has to be a method with that name; but the fact is, there can be more than one:
Figure 4.1 Cell hierarchy.
Which method will be called in this line of code?
cell.Recalculate();
Without knowing what object cell points to, we just don't know. It could be the Recalculate method of ValueCell or the Recalculate method of FormulaCell. It could even be the Recalculate method of some other class that doesn't inherit from Cell (if that's the case, cell was a particularly cruel name to use for that variable!). If we can change which Recalculate is called in that line of code without changing the code around it, that call is a seam.
In object-oriented languages, not all method calls are seams. Here is an example of a call that isn't a seam:
public class CustomSpreadsheet extends Spreadsheet {
  public Spreadsheet buildMartSheet() {
    ...
    Cell cell = new FormulaCell(this, "A1", "=A2+A3");
    ...
    cell.Recalculate();
    ...
  }
  ...
}
In this code, we're creating a cell and then using it in the same method. Is the call to Recalculate an object seam? No. There is no enabling point. We can't change which Recalculate method is called because the choice depends on the class of the cell. The class of the cell is decided when the object is created, and we can't change it without modifying the method.
What if the code looked like this?
public class CustomSpreadsheet extends Spreadsheet {
  public Spreadsheet buildMartSheet(Cell cell) {
    ...
    cell.Recalculate();
    ...
  }
  ...
}
Is the call to cell.Recalculate in buildMartSheet a seam now? Yes. We can create a CustomSpreadsheet in a test and call buildMartSheet with whatever kind of Cell we want to use. We'll have ended up varying what the call to cell.Recalculate does without changing the method that calls it.
Where is the enabling point?
In this example, the enabling point is the argument list of buildMartSheet. We can decide what kind of an object to pass and change the behavior of Recalculate any way that we want to for testing.
Okay, most object seams are pretty straightforward. Here is a tricky one. Is there an object seam at the call to Recalculate in this version of buildMartSheet?
public class CustomSpreadsheet extends Spreadsheet {
  public Spreadsheet buildMartSheet(Cell cell) {
    ...
    Recalculate(cell);
    ...
  }

  private static void Recalculate(Cell cell) {
    ...
  }
  ...
}
The Recalculate method is a static method. Is the call to Recalculate in buildMartSheet a seam? Yes. We don't have to edit buildMartSheet to change behavior at that call. If we delete the keyword static on Recalculate and make it a protected method instead of a private method, we can subclass and override it during test:
public class CustomSpreadsheet extends Spreadsheet {
  public Spreadsheet buildMartSheet(Cell cell) {
    ...
    Recalculate(cell);
    ...
  }

  protected void Recalculate(Cell cell) {
    ...
  }
  ...
}

public class TestingCustomSpreadsheet extends CustomSpreadsheet {
  protected void Recalculate(Cell cell) {
    ...
  }
}
Isn't this all rather indirect? If we don't like a dependency, why don't we just go into the code and change it? Sometimes that works, but in particularly nasty legacy code, often the best approach is to do what you can to modify the code as little as possible when you are getting tests in place. If you know the seams that your language offers and how to use them, you can often get tests in place more safely than you could otherwise.
The seam types I've shown are the major ones. You can find them in many programming languages. Let's take a look at the example that led off this chapter again and see what seams we can see:
bool CAsyncSslRec::Init()
{
  if (m_bSslInitialized) {
    return true;
  }
  m_smutex.Unlock();
  m_nSslRefCount++;

  m_bSslInitialized = true;

  FreeLibrary(m_hSslDll1);
  m_hSslDll1=0;
  FreeLibrary(m_hSslDll2);
  m_hSslDll2=0;

  if (!m_bFailureSent) {
    m_bFailureSent=TRUE;
    PostReceiveError(SOCKETCALLBACK, SSL_FAILURE);
  }

  CreateLibrary(m_hSslDll1,"syncesel1.dll");
  CreateLibrary(m_hSslDll2,"syncesel2.dll");

  m_hSslDll1->Init();
  m_hSslDll2->Init();

  return true;
}
What seams are available at the PostReceiveError call? Let's list them.
PostReceiveError is a global function, so we can easily use the link seam there. We can create a library with a stub function and link to it to get rid of the behavior. The enabling point would be our makefile or some setting in our IDE. We'd have to alter our build so that we would link to a testing library when we are testing and a production library when we want to build the real system.
We could add a #include statement to the code and use the preprocessor to define a macro named PostReceiveError when we are testing. So, we have a preprocessing seam there. Where is the enabling point? We can use a preprocessor define to turn the macro definition on or off.
We could also declare a virtual function for PostReceiveError like we did at the beginning of this chapter, so we have an object seam there also. Where is the enabling point? In this case, the enabling point is the place where we decide to create an object. We can create either a CAsyncSslRec object or an object of some testing subclass that overrides PostReceiveError.
It is actually kind of amazing that there are so many ways to replace the behavior at this call without editing the method:
bool CAsyncSslRec::Init()
{
  ...
  if (!m_bFailureSent) {
    m_bFailureSent=TRUE;
    PostReceiveError(SOCKETCALLBACK, SSL_FAILURE);
  }
  ...
  return true;
}
It is important to choose the right type of seam when you want to get pieces of code under test. In general, object seams are the best choice in object-oriented languages. Preprocessing seams and link seams can be useful at times but they are not as explicit as object seams. In addition, tests that depend upon them can be hard to maintain. I like to reserve preprocessing seams and link seams for cases where dependencies are pervasive and there are no better alternatives.
When you get used to seeing code in terms of seams, it is easier to see how to test things and to see how to structure new code to make testing easier.
|
http://www.informit.com/articles/article.aspx?p=359417&seqNum=3
|
CC-MAIN-2018-34
|
en
|
refinedweb
|