text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
---|---|---|---|---|
Unique lock for writing. More...
#include <UniqueWriteLock.hpp>
Unique lock for writing.
A UniqueWriteLock is an object manages a RWMutex with unique ownership in both states.
On construction (or by move-assigning to it), the object locks a RWMutex object on writing side, for whose locking and unlocking operations becomes responsible.
The object supports both states: locked and unlocked. 39 of file UniqueWriteLock.hpp.
Construct from mutex.
Definition at line 62 of file UniqueWriteLock UniqueWriteLock.hpp.
Destructor.
If write lock has done by the UniqueLock, unlock it
Definition at line 105 of file UniqueWriteLock.hpp.
References samchon::library::RWMutex::writeUnlock().
Lock on writing.
Changes writing flag to true.
If another write_lock or read_lock is on a progress, wait until them to be unlocked.
Definition at line 117 of file UniqueWriteLock.hpp.
References samchon::library::RWMutex::writeLock().
Referenced by UniqueWriteLock().
Unlock on writing.
Definition at line 129 of file UniqueWriteLock.hpp.
References samchon::library::RWMutex::writeUnlock().
Referenced by samchon::library::EventDispatcher::EventDispatcher(), and samchon::templates::parallel::ParallelSystemArray< SlaveDriver >::sendPieceData().
Managed mutex.
Definition at line 45 of file UniqueWriteLock.hpp.
Referenced by UniqueWriteLock().
Whether the mutex was locked by UniqueLock.
Definition at line 50 of file UniqueWriteLock.hpp.
|
http://samchon.github.io/framework/api/cpp/d1/dc0/classsamchon_1_1library_1_1UniqueWriteLock.html
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Opened 4 years ago
Closed 3 years ago
#29810 closed Bug (fixed)
Left outer join using FilteredRelation failed on empty result
Description (last modified by )
When I try to join table with using
FilteredRelation clause it failed if the result of a joined table is null.
This error can be reproduce by adding the following test case to the corresponding test module:
tests.filtered_relation.tests.FilteredRelationTests:
def test_select_related_empty_join(self): self.assertFalse( Author.objects.annotate( empty_join=FilteredRelation('book', condition=Q( book__title='not existing book')), ).select_related('empty_join') )
Error:
............E.s................ ====================================================================== ERROR: test_select_related_empty_join (filtered_relation.tests.FilteredRelationTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3.6/unittest/case.py", line 59, in testPartExecutor yield File "/usr/lib/python3.6/unittest/case.py", line 605, in run testMethod() File "/home/adv/projects/github/django/tests/filtered_relation/tests.py", line 46, in test_select_related_empty_join ).select_related('empty_join') File "/usr/lib/python3.6/unittest/case.py", line 674, in assertFalse if expr: File "/home/adv/projects/github/django/django/db/models/query.py", line 271, in __bool__ self._fetch_all() File "/home/adv/projects/github/django/django/db/models/query.py", line 1232, in _fetch_all self._result_cache = list(self._iterable_class(self)) File "/home/adv/projects/github/django/django/db/models/query.py", line 67, in __iter__ rel_populator.populate(row, obj) File "/home/adv/projects/github/django/django/db/models/query.py", line 1876, in populate self.local_setter(from_obj, obj) File "/home/adv/projects/github/django/django/db/models/sql/compiler.py", line 892, in local_setter f.remote_field.set_cached_value(from_obj, obj) File "/home/adv/projects/github/django/django/db/models/fields/mixins.py", line 23, in set_cached_value instance._state.fields_cache[self.get_cache_name()] = value AttributeError: 'NoneType' object has no attribute '_state' ----------------------------------------------------------------------
This bug is typical since version 2.0 and exists until now.
Change History (11)
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
Last edited 4 years ago by (previous) (diff)
comment:3 Changed 3 years ago by
Last edited 3 years ago by (previous) (diff)
comment:4 Changed 3 years ago by
comment:5 Changed 3 years ago by
Tests aren't passing.
comment:6 Changed 3 years ago by
Sorry, I didn't think it was about PR. I'm looking at
comment:7 Changed 3 years ago by
I hope the fails are not about me this time :)
comment:8 Changed 3 years ago by
comment:9 Changed 3 years ago by
comment:10 Changed 3 years ago by
comment:11 Changed 3 years ago by
Note: See TracTickets for help on using tickets.
Reproduced at b3b47bf5156d400595363fa0029e51ce3f974ff0.
|
https://code.djangoproject.com/ticket/29810
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Unmaps and deallocates the region in the current address space at the given address.
#include <sys/types.h> #include <sys/errno.h> #include <sys/vmuser.h> #include <sys/adspace.h>
void io_det (eaddr) caddr_t eaddr;
The io_det kernel service unmaps the region containing the address specified by the eaddr parameter and deallocates the region. This service then adds the region to the free list for the current address space.
The io_det service assumes an address space model of fixed-size I/O objects and address space regions.
The io_det kernel service can be called from either the process or interrupt environment.
The io_det kernel service has no return values.
The io_det kernel service is part of Base Operating System (BOS) Runtime.
The io_att kernel service.
Memory Kernel Services and Understanding Virtual Memory Manager Interfaces in AIX Kernel Extensions and Device Support Programming Concepts.
|
http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/ktechrf1/iodet.htm
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Have you ever wanted to implement a feature for a specific platform or device? Like show a screen or some content to mobile users only or execute a different an action based on the user's device
I usually come across contents on sites that are clearly meant for mobile users only while I browse on desktop.
The User-Agent can be helpful in this scenario.
MDN defines the user agent as
The User-Agent request header is a characteristic string that lets servers and network peers identify the application, operating system, vendor, and/or version of the requesting user agent.
A common format for a user agent string looks like:
Mozilla/5.0 (
<system-information>)
<platform> (
<platform-details>)
<extensions>
Detect User's device
To know if a user is on mobile you need to look for "Mobi" in the user agent string as you can see in the example below
const BUTTON = document.querySelector("button"); const { userAgent } = window.navigator; // Set device to "mobile" if "Mobi" exists in userAgent else set device to "desktop" const device = userAgent.includes("Mobi") ? "mobile 📱" : "desktop 💻"; BUTTON.addEventListener("click", () => alert(`You are on ${device}`));
The example above result in
Desktop Demo
Mobile Demo
Test in your browser
To test this in your browser, open the developer tools and click on the 'toggle device' icon. Refresh your page for changes to apply
Practical Example
Here is a practical example of this in my React application.
I used this in a federated login
// Context API for device import { createContext, useEffect, useState } from "react"; export type TDevice = "mobile" | "desktop"; export const DeviceContext = createContext<TDevice>("mobile"); const DeviceContextProvider: React.FC = ({ children }) => { const [device, setDevice] = useState<TDevice>("mobile"); useEffect(() => { const { userAgent } = navigator; // Set device state userAgent.includes("Mobi") ? setDevice("mobile") : setDevice("desktop"); }, []); return ( <DeviceContext.Provider value={device}>{children}</DeviceContext.Provider> ); }; export default DeviceContextProvider;
// login with provider hook const useLoginWithProvider = (redirect: (path: string) => void) => { const device = useContext(DeviceContext); const [signInAttempt, setSignInAttempt] = useState(false); const login = async (provider: string) => { if (device === "mobile") { // Check if user is mobile firebase.auth().signInWithRedirect(providers[provider]); setSignInAttempt(true); } else { firebase .auth() .signInWithPopup(providers[provider]) .then(handleResult) .catch(error => setError(error.message)); } }; useEffect(() => { if (signInAttempt) { firebase .auth() .getRedirectResult() .then(handleResult) .catch(error => setError(error.message)); } }, []); return login; }; export default useLoginWithProvider;
Conclusion
Obviously, you can tell this is not meant to replace media queries, however, this can be very useful in your projects. Media Queries are mainly used for responsive pages whereas this method is for device-specific features or content. This is mainly useful when you want your mobile app to behave differently from the desktop version. You can use this to give mobile users app-like experience especially when dealing with Progressive Web Apps
Photo by Daniel Korpai on Unsplash
|
https://brains.hashnode.dev/device-detection-with-the-user-agent
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Aspose.Cells for CPP 18.4 Release Notes.
Renames all methods like ‘SetIs’ to ‘Set’ methods**
Renames methods, such as, SetIsOutlineShown to SetIsOutlineShown, SetIsSelected to SetSelected in IWorksheet and so on. For more detail, see API Reference guide.
Changes Color to System::Drawing::Color
For example, it changes Color::GetBlue() to System::Drawing::Color::GetBlue(). Since Color is ambiguous symbol here, it might be Aspose::Cells::System::Drawing::Color or Gdiplus::Color. To use Color in Aspose Cells, you have to add namespace System::Drawing.
Renames ICells::AddRange to AddIRange
Adds a range object reference to cells.
Renames ICells::ApplyColumnStyle to ApplyColumnIStyle
Applies formatting for a whole column.
Renames ICells::ApplyRowStyle to ApplyRowIStyle
Applies formatting for a whole row.
Renames ICells::ApplyStyle to ApplyIStyle
Applies formatting for a whole worksheet.
Renames ICells::CopyColumn to CopyIColumn
Copies data and formatting of a whole column.
Renames ICells::CopyColumns to CopyIColumns
Copies data and formatting of specified columns.
Renames ICells::CopyColumns to CopyIColumns
Copies data and formatting of specified columns.
Renames ICells::CopyRow to CopyIRow
Copies data and formatting of a whole row.
Renames ICells::CopyRows to CopyIRows
Copies data and formatting of specified rows.
Renames ICells::MoveRange to MoveIRange
Moves the range to destination postion.
Renames ICells::InsertRange to InsertIRange
Inserts a range of cells and shifts cells according to the shift option.
Renames IColumn::ApplyStyle to ApplyIStyle
Applies formatting for a whole column.
Renames IErrorCheckOption::AddRange to AddIRange
Adds an influenced range by this setting.
Renames IRange::ApplyStyle to ApplyIStyle
Applies formatting for a whole range.
Renames IRow::ApplyStyle to ApplyIStyle
Applies formatting for a whole row.
Renames IPivotField::GetNumberFormat to Get_NumberFormat
Represents the custom display format of numbers and dates. Since the method name GetNumberFormat conflicts with Windows system function, so we have to rename it.
Renames IStyleFlag::GetNumberFormat to Get_NumberFormat
Since the method name GetNumberFormat conflicts with Windows system function, so we have to rename it which represents to get the Number format setting.
Renames IWorkbook::CopyTheme to CopyITheme
Copies the theme from another workbook.
Renames IWorksheet::SetBackground to SetBackgroundImage
Sets worksheet’s background image.
|
https://docs.aspose.com/cells/cpp/aspose-cells-for-cpp-18-4-release-notes/
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
Monitor Kubernetes Cluster using New Relic
Hi Reader, You have landed right here meaning you’re probably looking to apprehend the strength of New Relic Monitoring and alerting capabilities or interested in learning something new.
Excellent!! What’s better to start with monitoring the Kubernetes cluster and get alerted every time something is going wrong?
Let us get started but what hack is New Relic?
New Relic is an observability platform that helps you build better software. You can bring in data from any digital source so that you can fully understand your system and know how to improve it.
With New Relic, you can:
- Bring all your data together: Instrument everything and import data from across your technology stack using relic agents, integrations, and APIs, and access it from a single UI.
- Analyze your data: Get all your data at your fingertips to find the root causes of problems and optimize your systems. Build dashboards and charts or use the powerful query language NRQL.
- Respond to incidents quickly: New Relic’s machine learning solution proactively detects and explains anomalies and warns you before they become problems.
Cool right?
Let us get this implemented to monitor the Kubernetes cluster today.
Assuming you are new to New Relic, let us start with creating a new relic account however in case you have already got one, feel free to pass this part :)
STEP 1: Follow the link and create a new account. It is free, forever!!
Enter your name, email and click on Start Now.
STEP 2: You will receive an email to verify your email, click on verify email and set your password.
Select one of the two regions where you want your data to store and click Save.
STEP 3: You will be now landed on the installation plan page, select Kubernetes.
now, click on Begin Installation.
STEP 4: Assuming you have Kubernetes cluster available, enter your cluster name in the placeholder and click continue. You may change the namespace in case you desire to install new relic agents in a different namespace.
Check all the required data you want to gather from the Kubernetes cluster according to your use case and click continue.
New Relic offers different ways to install its agents on the k8s cluster, either by using helm or directly by manifest files. I decide on the helm, you may directly deploy manifest files if you wish.
Copy the command and log in to your Kubernetes cluster.
$helm repo add newrelic && helm repo update && \
kubectl create namespace newrelic ; helm upgrade — install newrelic-bundle newrelic/nri-bundle \
— set global.licenseKey= <your license key>\
— set global.cluster=my-cluster \
— namespace=newrelic \
— set newrelic-infrastructure.privileged=true \
— set global.lowDataMode=true \
— set ksm.enabled=true \
— set kubeEvents.enabled=true
once this is successfully executed, run the below command to check if all the agents are installed.
$kubectl get pods -n newrelic
Wait until all the pods are up and running.
Now go back to your New Relic UI and click continue. Wait for 2–3 minutes and then you must see “We are successfully receiving data from your cluster. 🎉”
If you see this, congratulations you have integrated your Kubernetes cluster with your New Relic account. Now click on Kubernetes cluster explorer.
All the information of your cluster is visible as below:
Click on the Control plane to get all the core components monitored and click on events to understand everything happening and recorded as infrastructure events. It is difficult to monitor Cronjobs/Jobs using any conventional approach of monitoring. For example, Prometheus require an additional push gateway setup to scrap those matrices however with Kubernetes integration in New Relic this can be achieved as it is recorded as events from the Kubernetes cluster.
NRQL
NRQL is New Relic's SQL-like query language. You can use NRQL to retrieve detailed New Relic data and get insight into your applications, hosts, and business-important activity.
Click on Explorer -> browse data -> Events
Then switch to Query builder to query the data using New Relic Query Language.
Let's understand by an example to get all the pods that are not in the Running state. Use the below query the get the desired results.
SELECT podName FROM K8sPodSample WHERE clusterName =’my-cluster’ and status != ‘Running’
You can now add this to a dashboard by clicking on the Add to Dashboard button at the bottom.
You can also create an alert and get notified to act upon the pod failure.
Alerts and Notification channel
We need a notification channel to send alerts.
STEP 1: Click on create alert option below the query. Enter condition name and scroll down, you will notice the error as below:
The reason behind this error is that it is expected that the query should return an integer result otherwise it is considered invalid. So, we will modify our query to return an integer value.
$SELECT count(*) FROM K8sPodSample WHERE clusterName =’my-cluster’ and status != ‘Running’
Notice that I have replaced podName with count(*)
threshold states that a violation should open if the query returns a value above 1 for at least 5 minutes.
Note that a violation does not mean that an alert is triggered.
Now, scroll down to “Connect your condition to a policy” and choose the Kubernetes default alert policy from the existing policy. That's it, Click on Save condition.
You will see a popup message stating that your condition is saved.
STEP 2: Now, Create a New channel to receive alert. Click on Explorer -> Alerts & AI -> Alerts(classic) -> Channels
Click on create new notification channel from the top right and select any channel you want to receive notification on, I will choose Email.
Add details and click on Create channel.
Once you click on Create channel you will see an option of Send a test notification, Click on it and you will receive an email notification on the email ID.
STEP 3: Switch to Alert policies from the top and add Kubernetes default alert policy to this notification channel. This means we have linked policy with notification channel, meaning all the alert conditions with this policy now have a channel to send a notification to.
If you’ve made it this far, thank you for reading and congratulations we have just configured New Relic to monitor the Kubernetes cluster and set up alerts and notifications to acknowledge and act on the cause as soon as possible.
Hope you like it, Happy Reading!
|
https://medium.com/@rana.ash1997/monitor-kubernetes-cluster-using-new-relic-53140d41a935?source=read_next_recirc---------0---------------------3f85b62e_abd2_4bd0_a162_38121c76ec0f-------
|
CC-MAIN-2022-27
|
en
|
refinedweb
|
What
JavaScript Notes
Language notes — ECMAScript 5
Compiled by Jeremy Kelly
These are my personal JavaScript notes, covering implementations deriving from ECMAScript 5. They%.
It is possible to end some lines without semicolons, particularly if the next line cannot be parsed as a continuation of the current line. This should be avoided outside of minifiers.
Single-line comments are prefixed with
//. Multi-line comments begin with
/* and end with
*/.
Identifiers must begin with letters, underscores, or dollar signs. After the first letter, digits can also be added.
Along with JavaScript keywords, the following reserved words are forbidden for use as identifiers:
When strict mode is enabled, these words are also reserved:
In ECMAScript 3, all Java keywords were reserved.
Boolean values are represented with
true and
false. When evaluated as booleans, falsy values like these are considered to be
false:
undefined
NaN
null
All other values are considered to be truthy.
All numbers are represented with double-precision IEEE 754 floats. JavaScript does not provide integer types, but all integer values between -253 and 253 retain full precision in the number type. JavaScript defines global variable
Infinity and
NaN to represent infinite values and undefined numeric results. As in other languages,
NaN is unequal to every value, including itself. A value can be checked for
NaN with the global function
isNaN. This function also returns
true if the value is something other than a number or boolean, though type conversion causes
isNaN('') to return
false.
isFinite returns
false if the value is positive or negative infinity, or
NaN.
Numbers can be specified with scientific notation, the exponent being marked with
E or
e:
var gTol = 1E-3;
Hex values are specified by prefixing the number with
0x or
0X. These cannot have fractional components.
Number primitives are wrapped by the
Number class. Its methods include:
toString()
toString(base)
toFixed(ct)
toPrecision(ct)
JavaScript strings are sequences of 16-bit character values. There is no distinct character type. Strings are immutable.
String literals can be surrounded with single or double quotes. Quote characters are escaped with a single backslash. Long strings can be split across lines by ending each line with a backslash inside the string:
var gWarn = "No people ever recognize \ their dictator in advance";
Single backslashes in other positions have no effect, and are excluded from the string.
The usual escape sequences are available:
An arbitrary character can be specified by combining
\x or
\u with a number of hex digits:
If
\0 is followed by a digit, the digit sequence will be interpreted as a octal number, producing an exception if strict mode is enabled, or an unexpected result if it is not. It is safer to specify the null character with
\x00.
String primitives are wrapped by the
String class. Its properties and methods include:
length
charAt(index)
substr(start, len)
Returns the substring that begins at start and has length len. If start is negative, it is considered to wrap once orig with substring new. orig can be a string or a regular expression. If it()
padStart(len)
padStart(len, pad)
padEnd(len)
padEnd(len, pad)
As of ECMAScript 5, array syntax can be used to read characters from a string, just like
charAt. Because strings are immutable, the characters cannot be modified:
var oCh = gWarn[0];
undefined is a unique instance of its own dedicated primitive type, as is
null.
undefined is assigned to variables that have been declared but not initialized, among other things. If an attempt is made to read a variable that has not been declared, the browser will throw a
ReferenceError exception in JavaScript, and, like other objects, these can store properties, including other functions. All object variables are references. Objects are never passed by value.
Global values are members of the global object, which is created when the JavaScript interpreter starts. This includes global properties like
undefined and
NaN, functions like
isNaN, constructors like
String, and custom globals defined in the script. Within a browser, the global object also serves as the
Window object.
An object can be serialized by passing it to
JSON.stringify, which returns a string containing a JSON representation. The object can be deserialized with
JSON.parse. JSON represents data with a subset of the JavaScript object literal syntax.
Property names need not qualify as identifiers. In fact, any quoted string can serve as a name, even the empty string:
var oLook = { "Site E": 108, "Site F": 90, "": 0 };
In this sense, objects are more like maps than traditional class or structure instances. If a particular name is not a valid identifier, it must be dereferenced with the array syntax:
var oCd = oCdsFromName["Site E"];
This syntax also allows the name to be specified with a variable:
var oKey = "Num"; var oNum = oRack[oKey];
When using the array syntax, nested objects:
var gPrefs = { Def: { Name: "New", LenMax: 10 } };
are accessed by concatenating dereference operators, just as a nested array would be:
gPrefs["Def"]["Name"] = "OBSOLETE";
Properties can be added to an object by assigning to them. There is no need to declare the property:
oRack.Num = 10;.
As of ECMAScript 5,: so that it hides the parent value, as usual.
Accessors can be added to existing objects with
Object.defineProperty or
Object.defineProperties.
In ECMAScript 5, properties have attributes that determine whether they are enumerable, whether they can be reconfigured or deleted, and whether their values can be changed. Attributes are also used to create accessor properties.
The attributes for a single property can be set by passing the object, the name of the property, and a property descriptor to
Object.defineProperty. If the property already exists, and if it is configurable, it will be modified. If it does not exist, it will be created:
Object.defineProperty(oRef, "Cd", { get: function () { return "F" + this.Num; }, set: function (a) { this.Num = a.substring(1); }, enumerable: true, configurable: true });
Multiple properties can be configured by passing the object to
Object.defineProperties, along with a second object associating one or more property::
var oCkCd = "Cd" in aParams;
It also accepts a number and an array. It returns
true if the number is a valid index for the array:. In ECMAScript 5, the
Object.keys method also identifies enumerable properties, but it returns an array of names, and it excludes inherited properties. The
Object.getOwnPropertyNames method returns a similar array, but non-enumerable properties are included.
The
delete operator removes a property from an object. The property can be referenced fails silently. Inherited properties cannot be deleted through the child; they must instead be deleted through the parent to which they were added.
Objects can be initialized with object literals, which contain property/value pairs within curly braces. Each property is separated from its value by a colon, and multiple pairs are delimited with commas:
var oRt = { Region: gRegionDef, Zone: 0 };
Omitting the properties produces an empty object:
var oRack = {};
As in other languages, a constructor is a function that initializes new objects, though in JavaScript, the function is not a member of the type. As will be seen, constructors are often used to store class-static variables and methods. They are invoked like functions, but they are preceded by the
new keyword:.
Constructors are not meant to return values. If an object is returned, the first constructed object will be replaced with the returned value. If a non-object value is returned, it will be ignored.
Note that the new object's
constructor property is not necessarily set to reference the function just invoked. The constructor has a
prototype property, that object has a
constructor property, and that value is assigned to the new object's
constructor. By default, this will reference the actual constructor, but it can be changed in the prototype.
In ECMAScript 5, objects can also be instantiated with the
Object.create method. This method accepts an argument that specifies the object's prototype:
var oData = Object.create(tData.prototype);
Note that the new object's
constructor property is set to the
constructor of the prototype, even though that function was relies on prototypal inheritance, which has little in common with traditional OOP inheritance. In JavaScript, inheritance describes a relationship between objects rather than types. Inheritance for a particular object is determined by the private prototype property, which references the parent object, or
null if there is no parent. A class in JavaScript is simply the set of objects that share a given prototype.
When a property is accessed, it is first sought within the referencing object. If that object has not declared the property, it is sought within the object's prototype. The prototype is another object. If it does not declare the property, its prototype is checked, and so on, until a
null prototype is encountered. In this way, every object inherits all the properties of its ancestors, including methods, throughout the prototype chain. Properties that have not been inherited are called own properties.. If the property is an inherited accessor with a setter, that setter will be called, though any property it assigns will again produce a property in the child that hides the parent value.
An object's prototype is assigned when the object is created. Every function has a
prototype property that references a default prototype object. When the function is used as a constructor, the prototype is automatically assigned to the new object:
function tOpts() { ... }; var oOpts = new tOpts();
Because every function has this property, every function can theoretically be used as a constructor, though seldom to useful effect. The default prototype for each function contains a non-enumerable
constructor property that references the original function. Overwriting the function's prototype breaks this link:
tOpts.prototype = { Ct: function () { ... }, ... };
The
constructor property can be restored manually, or the problem can be avoided by adding custom properties to the default, rather than overwriting it entirely:
tOpts.prototype.Ct = function () { ... };
Every object has a
constructor property that references the constructor that was used to create it. In ordinary usage, this matches the
constructor property in the object's prototype.
The
Object.create method accepts an object parameter, and assigns that parameter as the prototype of the returned object:
var oOpts = Object.create(tOpts.prototype);
When this is used, the object's
constructor is set to match the
constructor property in the prototype. An object can also be created with an object literal, which causes
Object.prototype to be assigned as the object's prototype, and
Object as its constructor.
If the prototype's
constructor property is maintained, all objects in the class will also share a common constructor, at least by default. This allows class-static variables and methods to be defined in and accessed through the constructor. These could not be added to the prototype, as that would cause the values to be inherited and shared.
To summarize the standard usage:
When one class subclasses another, the prototype of the subclass prototype is made to reference the superclass prototype. The subclass prototype
constructor is then set to reference the subclass constructor, rather than the superclass constructor. The subclass constructor typically uses
call to invoke the superclass constructor on the subclass
this.
An object's class is determined by its prototype. The
instanceof operator accepts an instance on the left and a constructor on the right. It returns
true if the instance is an object, and if it inherits from the object referenced by the constructor's
prototype property, whether directly or indirectly. Adding properties to the object does not change this result. By extension, the operator typically returns
true if the right operand is
Object, since all objects inherit from
Object.prototype, at least by default. Similarly, arrays can be identified by setting the right operand to
Array. The
isPrototypeOf function also indicates whether one object is the ancestor of another, though it is invoked on the prototype, and receives the child as an argument.
The object's prototype can be retrieved by passing the object to the
Object.getPrototypeOf method, and it can be set by calling
Object.setPrototypeOf, though this is discouraged for performance reasons. Most browsers also support the non-standard accessor property
__proto__, which provides read and write access to the prototype.
An extensible object is one to which new properties can be added. An object's extensibility can be checked by passing it to
Object.isExtensible. It is made non-extensible by passing it to
Object.preventExtensions, and once this is done, the object cannot be made extensible again.
A sealed object is non-extensible, and its properties are non-configurable as well. To determine whether an object is sealed, pass it to
Object.isSealed. To seal the object, make it non-extensible, then make its properties non-configurable by setting their attributes, or pass the object to
Object.seal. A sealed object cannot be unsealed.
A frozen object is sealed, and all its properties are read-only. To determine whether an object is frozen, pass it to
Object.isFrozen. To freeze the object, seal it, and make its properties read-only by setting their attributes, or pass the object to
Object.freeze. A frozen object canot be unfrozen.
JavaScript arrays inherit from
Array.prototype. They are typically created with array literals, which are comma-delimited value sequences inside square braces:
var oInsPend = [ "A11", "B04", "CXX" ];
When commas are listed without intervening values, succeeding values are indexed and the array length set as though values had been provided. The last trailing comma before the closing brace is ignored, however:
var oInsMark = [ , "NUL", , ]; gAssert(oInsMark[1] === "NUL"); gAssert(oInsMark.length === 3);
Though the array length includes them, the missing values do not define actual with the specified length, but again, without defining actual elements. Passing multiple arguments, or a single non-numeric argument assigns those values as elements, much like an array literal:
var oInsPend = new Array("A11", "B04", "CXX");
JavaScript arrays are untyped, so different types can be mixed in the same instance. Arrays are indexed with 32-bit unsigned integers, allowing over four billion elements to be stored. The element count is given from an object.
The
length property can be modified to truncate or extend the array. Assigning to a valid index that is greater than or equal to the current
length also extends the array, and in both cases, the
length is increased without adding enumerable elements.
Multidimensional arrays must be implemented as arrays of arrays.
The
Array class includes methods such as:
join()
join(delim)
slice(start)
slice(start, next)
concat()
splice, if one or more arguments are themselves arrays, their elements are added, rather than the arrays as a whole.
reverse()
sort()
sort(compare)
splice(start)
splice(start, len)
splice(start, len, ...)
concat, array arguments are inserted as arrays.
push(el, ...)
pop()
unshift(el, ...)
shift()
In ECMAScript 5, additional methods are provided. These include:
indexOf(val)
indexOf(val, start)
lastIndexOf(val)
lastIndexOf(val, start)
The following methods accept a call function that itself accepts up to three values: an element, its array index, and the array as a whole. These array methods also accept an optional this parameter. When this is provided, it is referenced wherever
this is used within call:
forEach(call)
forEach(call, this)
map(call)
map(call, this)
filter(call)
filter(call, this)
true.
some(call)
some(call, this)
trueif call returns
truefor any element.
every(call)
every(call, this)
trueif call returns
truefor every element.
The following methods accept a callAcc function that itself accepts up to four values: an accumulator that stores an ongoing calculation, an element, its array index, and the array as a whole:
reduce(callAcc)
reduce(callAcc, init)
reduceRight(callAcc)
reduceRight(callAcc, init)
Some objects, like the
arguments object stored in instances of the
RegExp class. Instances can be created with the
RegExp constructor:
var oRegCd = new RegExp("A[1-3]");
or with regular expression literals, which surround the expression with forward slashes:]. Additionally, a given control character
ctrl-X can be specified as
\c X.
The trailing slash may be followed by one or more letters that set flags:
var oRegCmds = /F\d\d/ig;
The letters can also be passed as a second parameter to the
RegExp constructor. Flags are used to configure the search:
Expressions can also use character classes, each of which matches one of a number of characters. Enclosing characters with square braces produces a character set, which matches any one of the contained characters:
var oRegNumLot = /[123]/;
Prefixing the characters with a caret negates the set, so that it matches one character that is not within the braces:
var oRegCdLot = /[^123]/;
A range of characters is specified by joining the lower and upper limits with a hyphen:
var oRegDigOct = /[0-7]/;
Neither
. nor
* are treated as special characters within a set, so they need not be escaped.
Other classes include:
Entire sequences can be matched against a set of alternatives by delimiting sub-expressions with the pipe character: between the remainder, and the rest of the expression.
Surrounding characters with parentheses produces a sub-expression that can be modified as a whole by a quantifier or another function:
var oReg = / (XO)+ /;
These capturing parentheses also store the target substring matched by the sub-expression. The substring can be recalled in another part of the expression by prefixing the sub-expression number with a backslash:
var oRegChQuot = /(["']).\1/;
The recalled substring is matched only if the target text contains an exact repetition of the substring that matched the referenced sub-expression. The characters inside non-capturing parentheses are prefixed by
?:.:
The
RegExp class offers methods that search for matches within a string:) {.
The arithmetic operators function mostly as expected. However:
Infinityor
-Infinity. Dividing zero by zero produces
NaN;
%also works with non-integer values, and when it produces a remainder, its sign matches that of the first operand;
+can be used to convert non-numeric types to numbers.
The loose equality operators
== and
!= check for general equivalence, so various type conversions are allowed. By contrast, the strict equality operators
=== and
!== check for equivalence and verify that the operands have the same type. When applied to arrays, functions, and other objects, both varieties check for identity, so distinct but otherwise identical instances are not strictly equal. There is no operator that tells whether distinct objects or arrays contain the same properties and values.
The
void operator accepts a single operand, which it evaluates. It then discards the result and returns
undefined.
As in other languages, the sequence operator evaluates both its operands and returns the value on the right:
var oYOrig = (++oX, ++oY);
Because this operator has the lowest possible precedence, a sequence expression must be parenthesized if its result is to be assigned.
The
typeof operator returns a string that gives the general type of its operand, whether "undefined", "boolean", "number", "object", or "function".. Objects always produce
truewhen converted to booleans.
These conversions are used when comparing values with the
== operator, so
undefined is equal to
null,
"1" is equal to one, and
"0" is equal to
false. When a string is added to any other type with the
+ operator, the non-string type is converted to a string.
Explicit conversions are performed by passing values to the
Boolean,
Number,
String, or
Object constructors. For values other than
undefined and
null, a string can also be produced by invoking the value's
toString method.
Strings can also be converted to numbers with the global
parseFloat function, which trims the string of leading whitespace and trailing non-numeric characters, and returns
NaN if the string is not a valid number. The global
parseInt function works similarly, but it only converts integers, and it accepts hexadecimal input if the string begins with
0x or
0X.
parseInt also allows the number base to be specified.
Variables are declared with the
var keyword. Multiple variables can be declared and optionally initialized in the same line by separating them with commas:
var oX = 0.0, oY;
Uninitialized variables have the
undefined value. Redeclaring a variable has no effect. If the new declaration also initializes the variable, it is simply assigned with the new value..
A variable declared within a function is accessible throughout the function, even from points before its declaration. This effect is called hoisting. If the variable is read before the declaration, its value will be
undefined.
Variables have no type, so a value of one type can be overwritten with another type at any time. loop variables defined within the initialization statement of a
for loop to be accessed outside the loop:
for (var o = 0; o < oCt; ++o) { ... } var oIdxMatch = o;
Because they are hoisted, such variables can even be used before they are declared.
The
for/
in loop iterates over an array, and its loop variable is also accessible from outside:
for (var o in oEls) if (Ck(oEls[o])) break; var oIdxMatch = o;
Note that the loop variable iterates array indices rather than array elements.
for/
in can also iterate the enumerable properties of an object, including those that were inherited. In this case too, it iterates property names rather than values:
for (var oIdxBl in oBls) { for it is permissible to provide a function name, these definitions are typically anonymous:
var gReset = function (aPos, aCkSync) { ...
Function instances can also be created with the
Function constructor: a block. Function expressions can appear anywhere. Like variables, nested function declarations are hoisted, allowing them to be called before they are declared. Although hoisted variables are
undefined before they are initialized, hoisted declarations can be used at any point.
A method is a function that has been assigned to a property in some object. Functions assigned to array elements are also treated as methods. In JavaScript, functions are themselves objects that can contain their own properties, including additional methods.
Functions are sometimes used as namespaces. A global function is defined, variables and functions are declared and used within it, and the containing function is called immediately after. This avoids the name conflicts that can occur when objects are added to the global scope. The pattern is typically known as the Immediately-Invoked Function Expression or IIFE:
(function () { ... }());
Without the outer parentheses, the interpreter would read this as a function declaration, which is required to specify a name. Only expressions are allowed within parentheses. ECMAScript 5, ECMAScript 5, targeted included in a closure, or the function can be wrapped in a bound function that defines its own
this.
Bound functions are created with the
bind method, which is inherited by all functions.
bind returns a new function that wraps the original.
"strict mode" directive is an ordinary string that enables strict mode. This mode offers language and security improvements, including:
ReferenceErrorexception;
thisis
undefinedwithin non-class functions;
withstatement is disallowed;
SyntaxErrorexceptions;
argumentsnor
evalare allowed to be assigned to another object or function;
evalare not added to the containing scope.
The directive must be the first non-comment line in the script; if placed anywhere else, it is silently ignored. It is ignored altogether in versions before ECMAScript 5. throws a
SyntaxError exception.
The
debugger statement pauses script execution and shows the debugger, like a breakpoint.
JavaScript Pocket Reference, 3rd Edition
David Flanagan
2012, O'Reilly Media, Inc.
JavaScript: The Definitive Guide, 6rd Edition
David Flanagan
2011, O'Reilly Media, Inc.
JavaScript & JQuery
Jon Duckett
2014, John Wiley & Sons, Inc.
MDN: String
Retrieved January 2018
MDN: Object.defineProperty()
Retrieved February 2018
MDN: Function.prototype.bind()
Retrieved February 2018
MDN: Regular Expressions
Retrieved February 2018
Stack Overflow: Explain the encapsulated anonymous function syntax
Retrieved February 2018
|
http://anthemion.org/js_notes.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
>How arbitrary is this path? Must it be within the DocumentRoot?
This one I can answer I believe. I put a path starting from C:\, and
it worked fine, with the limitation that you can't have spaces in the
path, and you can't use double quotes to get around that. So I had to
figure out how to write things dos style, like MyDocu~1, and that
worked.
I came across a bit of trouble with the importer now, It can't find a
module that exists.
File "C:\Docume~1\Dan\MyDocu~1\PYROOT\pyserver\web.py", line 348, in
import_module
module = apache.import_module(module, 1, 1, path)
File "C:\Program
Files\Python\lib\site-packages\mod_python\importer.py", line 236, in
import_module
return __import__(module_name, {}, {}, ['*'])
ImportError: No module named _config
Now I've confirmed there is a module named _config.py in the path
specified, and I can find it if I add path to sys.path.
There is an import at the top of _config.py that should fail and raise
and exception, but that shouldn't be related?
Any ideas why this is happening?
Thanks,
-Dan
On 4/21/06, Jorey Bump <list at joreybump.com> wrote:
> Graham Dumpleton wrote:
> > Graham Dumpleton wrote ..
> >> The new module importer completely ignores packages as it is practically
> >> impossible to get any form of automatic module reloading to work
> >> correctly with them when they are more than trivial. As such, packages
> >> are handed off to standard Python __import__ to deal with. That it even
> >> finds the package means that you have it installed in sys.path. Even if
> >> it was a file based module, because it is on sys.path and thus likely to
> >> be installed in a standard location, the new module importer would again
> >> ignore it as it leaves all sys.path modules up to Python __import__
> >> as too dangerous to be mixing importing schemes.
> >>
> >> Anyway, that all only applies if you were expecting PyServer.pyserver to
> >> automatically reload upon changes.
>
> Graham, can you enumerate the different ways packages are handled, or is
> it enough to say that packages are never reloaded? In this thread, you
> explain that when a package is imported via PythonHandler, mod_python
> uses the conventional Python __import__, requiring an apache restart to
> reliably reload the package, as in the past.
>
> This also implies that if a published module imports a package, and the
> published module is touched or modified, then the module will be
> reloaded, but not the package. Is this correct?
>
> > BTW, that something outside of the document tree, possibly in sys.path,
> > is dealt with by Python __import__ doesn't mean you can't have module
> > reloading on stuff outside of the document tree. The idea is that if it is
> > part of the web application and needs to be reloadable, that it doesn't
> > really belong in standard Python directories anyway. People only install
> > it there at present because it is convenient.
>
> There are security benefits to not putting your code in the
> DocumentRoot. It's also useful to develop generic utilities that are
> used in multiple apps (not just mod_python), but that you don't want
> available globally on the system. I prefer extremely minimal frontends
> in the DocumentRoot, with most of my code stored elsewhere. Will the new
> importer support reloading modules outside of the DocumentRoot without
> putting them in sys.path?
>
> > The better way of dealing with this with the new module importer is to
> > put your web application modules elsewhere, ie., not on sys.path. You then
> > specify an absolute path to the actual .py file in the handler directive.
> >
> > <Directory />
> > SetHandler mod_python
> > PythonHandler /path/to/web/application/PyServer/pserver.py
> > ...
>
> How arbitrary is this path? Must it be within the DocumentRoot?
>
> > Most cases I have seen is that people use packages purely to create a
> > namespace to group the modules. With the new module importer that
> > doesn't really need to be done anymore. That is because you can
> > directly reference an arbitrary module by its path. When you use the
> > "import" statement in files in that directory, one of the places it will
> > automatically look, without that directory needing to be in sys.path,
> > is the same directory the file is in. This achieves the same result as
> > what people are using packages for now but you can still have module
> > reloading work.
>
> Does it (the initial loading, not the reloading) also apply to packages
> in that directory? Or will it only work with standalone single file
> modules in the root of that directory?
>
> This is all very nifty, because it implies that a mod_python application
> can now be easily distributed by inflating a tarball and specifying the
> PythonHandler accordingly.
>
> If the new importer works outside of the DocumentRoot, and Location is
> used instead of Directory, no files need to be created in the
> DocumentRoot at all. Or is this currently impossible, in regards to
> automatic module reloading? I already do this for some handlers I've
> written, and really like the flexibility provided by the virtualization.
>
>
>
> _______________________________________________
> Mod_python mailing list
> Mod_python at modpython.org
>
>
|
http://modpython.org/pipermail/mod_python/2006-April/020943.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Ideally when making automated tests, you don't have to mock anything. You just test exactly what would be executed in production. Some scenarios make that a challenge, though. What if you're testing a view that relies on an external authentication service, like an LTI server?
It's probably possible to automate an LTI server to run alongside my tests, just like a test database. But that adds a lot of complexity without adding much value. Although that would make the test environment more similar to the production one, right now I just want to be able to detect regressions in my custom view.
I'm making a test for a view that uses
LTIAuthMixin
from
the django-lti-provider
library. If you look at LTIAuthMixin, you can see that
all the authentication is handled in
dispatch(), a method that's called with each
HTTP method of a view object
(
.get(),
.put(),
etc).
Python has
a mock
library that I've never really gotten a handle on. I
thought I could use this library here, and tried to mock
out LTIAuthMixin's
dispatch() method. I was
able to get my test to ignore LTI authentication, but
I couldn't figure out how to tell my
mocked dispatch method to still return the necessary response
data given a request with an authed user.
So I kept thinking of alternatives, keeping in
mind that all I cared about testing here is not even the view's
.get()
method, but just
.get_context_data().
Through a combination of Django's RequestFactory, and directly instantiating the view object, you can just alter its attributes as necessary. So I made a plan to not use Django's test client, and just do something like this:
# LoggedInTestMixin just sets self.u to an authenticated user. class MyLTILandingPageTest(LoggedInTestMixin, TestCase): def setUp(self): super(MyLTILandingPageTest, self).setUp() self.factory = RequestFactory() self.g = GraphFactory(title='Quiz graph', needs_submit=True) self.submission = SubmissionFactory( graph=self.g, user=self.u, choice=3) def test_get(self): request = self.factory.get('/lti/landing/') request.user = self.u view = MyLTILandingPage() ctx = view.get_context_data() self.assertEqual(ctx.get('submissions').count(), 1)
The test above runs into a few
errors.
self.lti needs to be set,
and it also needs to respond to a
course_context()
method call. This turned out to be sufficient:
class MockLTI(object): def course_context(self, request): return None
I also needed to attach the request object to the view I've instantiated, so it knows who the current user is. Here's the completed test:
def test_get(self): request = self.factory.get('/lti/landing/') request.user = self.u view = MyLTILandingPage() view.lti = MockLTI() view.request = request ctx = view.get_context_data() self.assertEqual(ctx.get('submissions').count(), 1) submission = ctx.get('submissions').first() self.assertEqual(submission.user, self.u) self.assertEqual(submission.choice, 3)
Now my test confirms that this view's context data
contains a graph submission connected to the authenticated
user. It doesn't matter to
get_context_data() that
the user wasn't actually authenticated with LTI.
So, if there's a clearer, more idiomatic way to do this using Python's standard mock library, maybe a way to override dispatch() using mock, I'd like see some examples.
|
https://www.columbia.edu/~njn2118/journal/2017/12/8.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
into the chassis, watching as the motherboard cleared the edge of the case by a few centimeters, that I realized I had the wrong form factor. I bought a case that was mini-ITX form factor instead of micro-ATX. I have no use for the case, so I’m giving it away. My folly, your fortune.
I will be giving away this Rosewill Neutron case to one lucky entrant. Unfortunately, I will have to limit the contest to USA due to high international shipping costs for bulkier packages. I will be selecting the winner programmatically (I’m thinking about recording my screen while I do this). I will run this contest from sometime today (7/3) until the 5th. To enter yourself in this contest, please fill out the form below. If you have any ad-blocking plugins, please make sure to temporarily allow javascript as you will need to confirm your email which may require that you complete a Google reCAPTCHA.
A bit on free
Given that this contest falls on my country’s day of independence, and that this is a free and open source themed blog, I would like to talk a little bit about what it is to be free. Hearing or reading the word “free” should invoke a feeling of relief, pleasant surprise, or satisfaction. But in some contexts, when people hear the word “free”, they instead react with skepticism, and caution. Is there truly such a thing as a free lunch? I am inclined to say no, there is not – but that doesn’t necessarily make it a bad thing. For example, if I wanted to buy a computer for say $700 in the United States, that same computer might cost upwards of $1,000 in other countries. But healthcare in those countries might be free. It works both ways. The point being, you end up spending the same amount in different ways. There is a saying “if you’re not paying for the product, you are the product.” Which might be true if you’re using freeware rather than free and open source software. The distinction being a matter of liberty instead of price – “free as in free speech, not as in free beer”. So if there is any thing that can be said to be truly free in our day and age, it is free and open source software. A good example of this is the Linux and UNIX-like (which I fully encourage reading the history of) operating systems. And while I completely understand that not everyone is going to drop what they’re doing and install Ubuntu (Windows is easier and will save the layman time), the important thing is that you have the freedom of choice. The power of open source comes from its openness. Open to people, open to ideas. Open data. Open government. The open design is a frame for something much greater to be built within. And so I will ask the winner of this contest to think about installing a flavor of Linux.
Good luck! And may the 1/n person win.
Update: The contest has ended. I will draw the winner shortly.
Update: Here is a video of the winner being picked with a Python script.
**Unfortunately, I will have to limit the contest to USA due to high international shipping costs for bulkier packages.
***I realize now that having full contact info as required fields for entering isn’t necessary – I can reach out to the winner and get that info once the contest has ended. I changed the form so all you need to submit now is your email and first name.
Source code
Here is the source code of how I elected the winner:
import sys import csv import random def main(): with open(sys.argv[1], 'r') as file: data = csv.reader(file) entrants = [] for row in data: if not row[0] == 'Email Address': entrants.append(row[0]) print("There are %s entrants in this contest. Chances of winning are %f." % (str(len(entrants)), 1/float(len(entrants))) ) print("Winner, winner, chicken dinner! %s has won the contest." % random.choice(entrants)) if __name__ == "__main__": main()
Pass the path to the file in as a parameter like: “script.py /path/to/file.csv” or “python3 script.py /path/to/file.csv”.
|
http://www.adamantine.me/2017/07/03/independence-day-giveaway/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
sorting module¶
Base types¶
- class
whoosh.sorting.
FacetType¶
Base class for “facets”, aspects that can be sorted/faceted.
categorizer(global_searcher)¶
Returns a
Categorizercorresponding to this facet.
- class
whoosh.sorting.
Categorizer¶
Base class for categorizer objects which compute a key value for a document based on certain criteria, for use in sorting/faceting.
Categorizers are created by FacetType objects through the
FacetType.categorizer()method. The
whoosh.searching.Searcherobject passed to the
categorizermethod may be a composite searcher (that is, wrapping a multi-reader), but categorizers are always run per-segment, with segment-relative document numbers.
The collector will call a categorizer’s
set_searchermethod as it searches each segment to let the cateogorizer set up whatever segment- specific data it needs.
Collector.allow_overlapshould be
Trueif the caller can use the
keys_formethod instead of
key_forto group documents into potentially overlapping groups. The default is
False.
If a categorizer subclass can categorize the document using only the document number, it should set
Collector.needs_currentto
False(this is the default) and NOT USE the given matcher in the
key_foror
keys_formethods, since in that case
segment_docnumis not guaranteed to be consistent with the given matcher. If a categorizer subclass needs to access information on the matcher, it should set
needs_currentto
True. This will prevent the caller from using optimizations that might leave the matcher in an inconsistent state.
key_to_name(key)¶
Returns a representation of the key to be used as a dictionary key in faceting. For example, the sorting key for date fields is a large integer; this method translates it into a
datetimeobject to make the groupings clearer.
keys_for(matcher, segment_docnum)¶
Yields a series of keys for the current match.
This method will be called instead of
key_forif
self.allow_overlapis
True.
set_searcher(segment_searcher, docoffset)¶
Called by the collector when the collector moves to a new segment. The
segment_searcherwill be atomic. The
docoffsetis the offset of the segment’s document numbers relative to the entire index. You can use the offset to get absolute index docnums by adding the offset to segment-relative docnums.
Facet types¶
- class
whoosh.sorting.
FieldFacet(fieldname, reverse=False, allow_overlap=False, maptype=None)¶
Sorts/facets by the contents of a field.
For example, to sort by the contents of the “path” field in reverse order, and facet by the contents of the “tag” field:
paths = FieldFacet("path", reverse=True) tags = FieldFacet("tag") results = searcher.search(myquery, sortedby=paths, groupedby=tags)
This facet returns different categorizers based on the field type.
- class
whoosh.sorting.
QueryFacet(querydict, other=None, allow_overlap=False, maptype=None)¶
Sorts/facets based on the results of a series of queries.
- class
whoosh.sorting.
RangeFacet(fieldname, start, end, gap, hardend=False, maptype=None)¶
Sorts/facets based on numeric ranges. For textual ranges, use
QueryFacet.
For example, to facet the “price” field into $100 buckets, up to $1000:
prices = RangeFacet("price", 0, 1000, 100) results = searcher.search(myquery, groupedby=prices)
The ranges/buckets are always inclusive at the start and exclusive at the end.
- class
whoosh.sorting.
DateRangeFacet(fieldname, start, end, gap, hardend=False, maptype=None)¶
Sorts/facets based on date ranges. This is the same as RangeFacet except you are expected to use
daterangeobjects as the start and end of the range, and
timedeltaor
relativedeltaobjects as the gap(s), and it generates
DateRangequeries instead of
TermRangequeries.
For example, to facet a “birthday” range into 5 year buckets:
from datetime import datetime from whoosh.support.relativedelta import relativedelta startdate = datetime(1920, 0, 0) enddate = datetime.now() gap = relativedelta(years=5) bdays = DateRangeFacet("birthday", startdate, enddate, gap) results = searcher.search(myquery, groupedby=bdays)
The ranges/buckets are always inclusive at the start and exclusive at the end.
- class
whoosh.sorting.
ScoreFacet¶
Uses a document’s score as a sorting criterion.
For example, to sort by the
tagfield, and then within that by relative score:
tag_score = MultiFacet(["tag", ScoreFacet()]) results = searcher.search(myquery, sortedby=tag_score)
- class
whoosh.sorting.
FunctionFacet(fn, maptype=None)¶
This facet type is low-level. In most cases you should use
TranslateFacetinstead.
This facet type ets you pass an arbitrary function that will compute the key. This may be easier than subclassing FacetType and Categorizer to set up the desired behavior.
The function is called with the arguments
(searcher, docid), where the
searchermay be a composite searcher, and the
docidis an absolute index document number (not segment-relative).
For example, to use the number of words in the document’s “content” field as the sorting/faceting key:
fn = lambda s, docid: s.doc_field_length(docid, "content") lengths = FunctionFacet(fn)
- class
whoosh.sorting.
MultiFacet(items=None, maptype=None)¶
Sorts/facets by the combination of multiple “sub-facets”.
For example, to sort by the value of the “tag” field, and then (for documents where the tag is the same) by the value of the “path” field:
facet = MultiFacet([FieldFacet("tag"), FieldFacet("path")]) results = searcher.search(myquery, sortedby=facet)
As a shortcut, you can use strings to refer to field names, and they will be assumed to be field names and turned into FieldFacet objects:
facet = MultiFacet(["tag", "path"])
You can also use the
add_*methods to add criteria to the multifacet:
facet = MultiFacet() facet.add_field("tag") facet.add_field("path", reverse=True) facet.add_query({"a-m": TermRange("name", "a", "m"), "n-z": TermRange("name", "n", "z")})
- class
whoosh.sorting.
StoredFieldFacet(fieldname, allow_overlap=False, split_fn=None, maptype=None)¶
Lets you sort/group using the value in an unindexed, stored field (e.g.
whoosh.fields.STORED). This is usually slower than using an indexed field.
For fields where the stored value is a space-separated list of keywords, (e.g.
"tag1 tag2 tag3"), you can use the
allow_overlapkeyword argument to allow overlapped faceting on the result of calling the
split()method on the field value (or calling a custom split function if one is supplied).
Facets object¶
- class
whoosh.sorting.
Facets(x=None)¶
Maps facet names to
FacetTypeobjects, for creating multiple groupings of documents.
For example, to group by tag, and also group by price range:
facets = Facets() facets.add_field("tag") facets.add_facet("price", RangeFacet("price", 0, 1000, 100)) results = searcher.search(myquery, groupedby=facets) tag_groups = results.groups("tag") price_groups = results.groups("price")
(To group by the combination of multiple facets, use
MultiFacet.)
add_facets(facets, replace=True)¶
Adds the contents of the given
Facetsor
dictobject to this object.
add_field(fieldname, **kwargs)¶
Adds a
FieldFacetfor the given field name (the field name is automatically used as the facet name).
add_query(name, querydict, **kwargs)¶
Adds a
QueryFacetunder the given
name.
FacetType objects¶
- class
whoosh.sorting.
FacetMap¶
Base class for objects holding the results of grouping search results by a Facet. Use an object’s
as_dict()method to access the results.
You can pass a subclass of this to the
maptypekeyword argument when creating a
FacetTypeobject to specify what information the facet should record about the group. For example:
# Record each document in each group in its sorted order myfacet = FieldFacet("size", maptype=OrderedList) # Record only the count of documents in each group myfacet = FieldFacet("size", maptype=Count)
- class
whoosh.sorting.
OrderedList¶
Stores a list of document numbers for each group, in the same order as they appear in the search results.
The
as_dictmethod returns a dictionary mapping group names to lists of document numbers.
- class
whoosh.sorting.
UnorderedList¶
Stores a list of document numbers for each group, in arbitrary order. This is slightly faster and uses less memory than
OrderedListResultif you don’t care about the ordering of the documents within groups.
The
as_dictmethod returns a dictionary mapping group names to lists of document numbers.
- class
whoosh.sorting.
Count¶
Stores the number of documents in each group.
The
as_dictmethod returns a dictionary mapping group names to integers.
|
https://whoosh.readthedocs.io/en/latest/api/sorting.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
[ ]
Colm O hEigeartaigh commented on CXF-7703: ------------------------------------------ Agreed, I'll make SwaggerUiResourceLocator a public class. > Public constructor of SwaggerUiService takes package-private argument of > SwaggerUiResourceLocator > ------------------------------------------------------------------------------------------------- > > Key: CXF-7703 > URL: > Project: CXF > Issue Type: Bug > Affects Versions: 3.2.4 > Reporter: Simon Kissane > Assignee: Colm O hEigeartaigh > Priority: Major > Fix For: 3.2.5 > > > In CXF 3.2.4 (and also in master), the class > org.apache.cxf.jaxrs.swagger.SwaggerUiService has a single public constructor, > public SwaggerUiService(SwaggerUiResourceLocator locator, Map<String, String> > mediaTypes) > However, the first argument SwaggerUiResourceLocator is a package-private > class. > So, effectively no one outside the package can use that constructor. > This doesn't make any logical sense. Either the constructor should be > package-private, or the SwaggerUiResourceLocator class should be public. > In CXF 3.1.11, I subclass Swagger2Feature.SwaggerUIService to modify its > behaviour. > I am trying to upgrade from CXF 3.1.11 to CXF 3.2.4. > In CXF 3.2.4, Swagger2Feature.SwaggerUIService has been replaced by > SwaggerUiService class. > But I cannot subclass it because SwaggerUiResourceLocator is package-private. > I think the simplest solution would be to make SwaggerUiResourceLocator a > public class. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
|
https://www.mail-archive.com/[email protected]/msg38833.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
An async GeoJSON client library for VIC Emergency Incidents.
Project description
python-aio-geojson-vicemergency-incidents
This library provides convenient async access to the VIC Emergency Website incidents feed.
This code is based on [] by exxamalte.
Installation
pip install aio-geojson-vicemergency_vicemergency_incidents import VicEmergencyIncidentsFeed async def main() -> None: async with ClientSession() as websession: # Home Coordinates: Latitude: -37.813629, Longitude: 144.963058 (Elizabeth St in the CBD) # Filter radius: 50 km # Filter include categories: '' # Filter exclude categories: 'Burn Advice' # Filter statewide incidents: False feed = VICEmergencyIncidentsFeed(websession, (-37.813629, 144.963058), filter_radius=50, filter_inc_categories=[''], filter_exc_categories=['Burn Advice'], filter_statewide=False).
|
https://pypi.org/project/aio-geojson-vicemergency-incidents/0.5/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
#include <OnixS/CME/MDH/FIX/MultiContainer.h>
Sequence of tag-value pairs preserving order of pairs and allowing presence of multiple tag-value pairs with the same tag value.
Primarily, designed to provide lightweight service to transform serialized FIX messages into structural presentation for furhter field access and other manipulations over stored data.
Definition at line 247 of file MultiContainer.h.
Iterator over container items.
Definition at line 272 of file MultiContainer.h.
Alias for tag component which serves like an entry key.
Definition at line 264 of file MultiContainer.h.
Iterator over items having the same tag value.
Definition at line 277 of file MultiContainer.h.
Alias for value type.
Definition at line 267 of file MultiContainer.h.
Initializes an empty instance.
Definition at line 280 of file MultiContainer.h.
Finalizes the instance.
Definition at line 285 of file MultiContainer.h.
Allows to iterate all items in the container having the given tag value.
Definition at line 321 of file MultiContainer.h.
Returns iterator pointing to the first item of the container.
Definition at line 291 of file MultiContainer.h.
Full-fill collection from the FIX (tag=value) string presentation.
Definition at line 335 of file MultiContainer.h.
Returns iterator pointing to the item beyond the last one.
Definition at line 298 of file MultiContainer.h.
Returns iterator pointing to the first (of possibly multiple) item having the given key (tag) value.
Definition at line 307 of file MultiContainer.h.
|
https://ref.onixs.biz/cpp-cme-mdp3-handler-guide/classOnixS_1_1CME_1_1MDH_1_1FIX_1_1MultiContainer.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
table of contents
NAME¶
cgiGetCookie - Return a cookie
SYNOPSYS¶
#include <cgi.h> s_cookie *cgiGetCookie (s_cgi *parms, const char *name);
DESCRIPTION¶
This routine returns a pointer to a s_cookie structure that contains all values for the cookie as referenced by name. The pointer must not be freed.
The s_cookie structure is declared as follows:
typedef struct cookie_s {
char *version,
*name,
*value,
*path,
*domain; } s_cookie;
Memory allocated by this data structure is automatically freed by the final call to cgiFree(3).¶
On success a pointer to a s_cookie structure is returned. If no cookie was set or no cookie with a given name exists NULL is returned.
AUTHOR¶
This CGI library is written by Martin Schulze <[email protected]>. If you have additions or improvements please get in touch with him.
SEE ALSO¶
cgiGetValue(3), cgiGetVariables(3), cgiGetCookies(3), cgiDebug(3), cgiHeader(3), cgiInit(3), cgiFree(3).
|
https://manpages.debian.org/bullseye/cgilib/cgiGetCookie.3.en.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Hello,
i just found out that
using Windows.Media.SpeechSynthesis;
using Windows.Media.SpeechRecognition;
does only work when you have a connection to the Internet.
It fails when you do this
SpeechRecognitionResult speechRecognitionResult = await speechRecog.RecognizeAsync();
Why does it fail? Is there any Speech Recognizer or Synthesizer which doesn't need a internet connection?
You most probably use a
SpeechRecognitionTopicConstraint in your setup (search or dictation). These indeed rely on a web service
You may want to use one of the 'local' constraint types, like a
grammar file (very powerful) or a
list constraint (very simple).
|
https://social.msdn.microsoft.com/Forums/en-US/1d61fb3a-ef01-44f1-a281-b8940ed3dc1c/speech-recognition-and-speech-synthesis?forum=winappswithcsharp
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Description
An iterative solver based on modified version of ADMM Alternating Direction Method of Multipliers.
See ChSystemDescriptor for more information about the problem formulation and the data structures passed to the solver.
#include <ChSolverADMM.h>
Constructor & Destructor Documentation
◆ ChSolverADMM() [1/2]
Default constructor: uses the SparseQR direct solver from Eigen, slow and non-optimal.
Prefer the other constructor where you can pass a different directo solver, e.g. Pardiso.
◆ ChSolverADMM() [2/2]
Constructor where you can pass a better direct solver than the default SparseQR solver.
The custom direct solver will be used in the factorization and solves in the inner loop.
Member Function Documentation
◆ GetError()
Return the number of iterations performed during the last solve.
Return the tolerance error reached during the last solve (here refer to the ADMM dual residual)
Implements chrono::ChIterativeSolver.
◆ SetStepAdjustThreshold()
Set the step adjust threshold T, ex.
1.5; if new step scaling is in the interval [1/T, T], no adjustment is done anyway, to save CPU effort.
◆ Solve()
Performs the solution of the problem.
- Returns
- the maximum constraint violation after termination, as dual (speed) residual
- Parameters
-
Implements chrono::ChSolver.
The documentation for this class was generated from the following files:
- /builds/uwsbel/chrono/src/chrono/solver/ChSolverADMM.h
- /builds/uwsbel/chrono/src/chrono/solver/ChSolverADMM.cpp
|
https://api.projectchrono.org/6.0.0/classchrono_1_1_ch_solver_a_d_m_m.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
#include <OnixS/CME/MDH/SocketFeedEngine.h>
Represents a collection of settings affecting the behavior of the multi-threaded feed engine while working with network-related services.
Definition at line 32 of file SocketFeedEngine.h.
Initializes the given instance of the network settings with the default values.
Definition at line 44 of file SocketFeedEngine.h.
Cleans everything up.
Definition at line 53 of file SocketFeedEngine.h.
Defines amount of time Feed Engine spends on socket waiting for I/O while running master processing loop.
Time is measured in milliseconds.
Definition at line 121 of file SocketFeedEngine.h.
Sets dataWaitTime.
Definition at line 128 of file SocketFeedEngine.h.
Max size for network packet transmitted by MDP.
Definition at line 60 of file SocketFeedEngine.h.
Max size for network packet transmitted by MDP.
Definition at line 67 of file SocketFeedEngine.h.
Defines size of receiving buffer in bytes for sockets.
Definition at line 98 of file SocketFeedEngine.h.
Sets socketBufferSize.
Definition at line 105 of file SocketFeedEngine.h.
Watch service to be used by Feed Engine.
Watch is used by Feed Engine to assign time points to packets received from the feeds.
Definition at line 79 of file SocketFeedEngine.h.
Watch service to be used by Feed Engine.
If no instance associated, UTC watch is used.
Definition at line 88 of file SocketFeedEngine.h.
|
https://ref.onixs.biz/cpp-cme-mdp3-handler-guide/classOnixS_1_1CME_1_1MDH_1_1SocketFeedEngineSettings.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
VO L 2 2 / I S S U E 2 6 / 02 J U LY 2 02 0 / £ 4 .4 9
DIGITAL PAYMENTS BOOM How to profit from the decline of cash
TAKING STOCK OF MARKETS AND THE MAJOR CHALLENGES AHEAD
HOW GAMES WORKSHOP BECAME A FTSE 350 STAR PERFORMER
HOLIDAY BOOKINGS ‘EXPLODE’ BUT THE STOCK MARKET ISN’T CONVINCED
Pioneering in 1868, pioneering today
Launched in 1868, F&C Investment Trust is the world’s oldest collective investment scheme. Despite ups and downs during this time, F&C Investment Trust has flourished through innovation, adapting to change, and most importantly – never forgetting to take the long-term view.
To find out more: Call BMO Investments on 0800 804 4564* (quoting 19PSM/1) or contact your usual financial adviser
fandcit.com
Past performance should not be seen as an indication of future performance. Capital is at risk and investors may not get back the original amount invested. Please read our Key Features, Key Information Documents and Pre-Sales Cost & Charges Disclosures before you invest. These can be found at bmoinvestments.co.uk/documents.
* Weekdays, 8.30am – 5.30pm. Calls may be recorded. © 2020 BMO Global Asset Management. Financial promotions are issued for marketing and information purposes in the United Kingdom by BMO Asset Management Limited, which is authorised and regulated by the Financial Conduct Authority. 938582_G20-0791 (03/20) UK.
938582_G20-0791_F&C IT Press Ad_Investment Trusts_v4.indd 1
07/04/2020 16:46:06
EDITOR’S VIEW
More companies set to reinstate earnings guidance Tesco and Serco lead the way with clarity on earnings expectations
M
any. By Daniel Coatsworth Editor
02 July 2020 | SHARES |
3
Contents
CLICK ON PAGE NUMBERS TO JUMP TO THE START OF THE RELEVANT SECTION
EDITOR’S 03 VIEW
More companies set to reinstate earnings guidance
06 NEWS
UK GDP figures are set to get worse / Holiday bookings ‘explode’ but the stock market isn’t convinced / Shell follows BP by marking down the value of its assets / Work from home boom spawns specialist ETF / Taking stock of markets and the major challenges ahead
GREAT 12 IDEAS UNDER THE 16 BONNET 22 FEATURE 28 RUSS MOULD INVESTMENT 32 TRUSTS FIRST-TIME 36 INVESTOR 40 ASK TOM MONEY 43 MATTERS READERS’ 45 QUESTIONS 47 INDEX
New: Flutter Entertainment / JPMorgan Japanese Updates: Redrow / Supermarket Income REIT How Games Workshop became a FTSE 350 star performer Digital payments boom Good portfolio planning matters more than good luck Why Scottish Mortgage won’t go the way of Woodford Why do companies join the stock market and which ones can I invest in? Which pension pot should I use first? Drop in investment ISA demand but more women are saving than men
How to tell if your ETF is physical or synthetic | 02 July 2020.
It takes all sorts to achieve long term success. Stock markets have proven to deliver over the long term. At Witan, we’ve been managing money successfully since 1909. We invest in stock markets worldwide by choosing expert fund managers. So you don’t have to. You can buy shares via your investment platform including, Hargreaves Lansdown, AJ Bell Youinvest, Interactive Investor, Fidelity FundsNetwork and Halifax Share Dealing Limited. Or contact your Financial Adviser. witan.com Witan Investment Trust plc is an equity investment. Past performance is not a guide to future performance. Your capital is at risk.
NEWS
UK GDP figures are set to get worse Key news to watch in the third quarter as the UK economy sees worst contraction in 41 years $1.45
$1.35
$1.25
$1.15
STERLING TO US DOLLAR EXCHANGE RATE 2017
T
2018
2019
2020
6
| SHARES | 02 July 2020 • 6 August – MPC meeting • 12 August – First estimate of UK Q2 GDP • 30 September – Apparent deadline set by UK Government on Brexit talks
NEWS
Holiday bookings ‘explode’ but the stock market isn’t convinced Holiday companies say bookings have soared, but the rise in demand is coming off a low base
H
oliday
Are UK holidaymakers planning a staycation this summer?. 02 July 2020 | SHARES |
7
NEWS
Shell follows BP by marking down the value of its assets The second quarter results from UK oil majors are likely to be ugly
T
he £22 billion worth of asset write-downs announced on 30 June by Royal Dutch Shell (RDSB) reflected its lower commodity price assumptions in the wake of coronavirus. They provided a gloomy trailer to its second quarter results but there were some bright spots for investors to take away. The company’s operational performance was a little bit better than expected with quarterly production of between 2.3 billion to 2.4 billion barrels of oil equivalent per day (boepd) comparing with previous guidance of 1.75 billion to 2.25 billion boepd. On 29 June Shell’s peer BP (BP.), which has already announced write-downs of $17.5 billion, unveiled the sale of its petrochemicals business to Ineos for $5 billion subject to regulatory clearance. This potential future cash injection is unlikely to
be enough to prevent it having to follow in Shell’s footsteps by cutting its dividend. The extreme nature of the challenge posed by the Covid-19 pandemic is enabling management at both companies to make hard-nosed strategic decisions. Analysts at Killik & Co comment: ‘We believe that rising costs of capital to the sector could create barriers to entry from which the strongest names could benefit, while the integrated stocks are best placed to transition their businesses from big oil to big energy companies.’ WHEN DO SHELL AND BP REPORT? • Shell Q2 – 30 Jul • BP Q2 – 4 Aug
Work from home boom spawns specialist ETF UK investors won’t be able to buy this product but they can invest in relevant tech funds AS WORKING FROM home takes on a life of its own, the trend has now become an investment strategy after the launch of a specialist exchange-traded fund in the US. The Direxion Work From Home ETF will track the Solactive Remote Work Index, a 40 stock index of work from home-friendly product and service providers. This means areas like remote communications, cyber security, document management and cloud technologies. Communications platform Twilio,
8
| SHARES | 02 July 2020
cloud tech specialist Inseego and Crowdstrike, the cyber security firm, are the top three stakes in the ETF. UK investors won’t be able to buy the ETF due to EU regulations, yet there are some UK funds also tapping into the broader theme. For example, Polar Capital Global Technology Fund (B42W4J8) invests in companies helping to make working from home possible. These include cloud infrastructure, software-as-a-service companies and digital payments providers,
while Crowdstrike and video meetings platform Zoom are among the top 10 stakes of Allianz Technology Trust (ATT). Both funds invest in Microsoft, which owns collaboration platform Teams. The SPDR MSCI World Technology UCITS ETF (WTEC) offers an ETF option on the home-working theme, aiming to capture future growth of the wider technology industry. Disclaimer: The writer Steven Frazer owns shares in Allianz Technology
NEWS
Taking stock of markets and the major challenges ahead If the first half of the year has been fraught, the second half could be equally hard to navigate
A
WINNERS AND LOSERS The best performing part of the UK market has been the Leisure Goods sector with a gain of 32.2%, which seems unlikely until you discover that the sector is dominated by ‘stay at home’ success story Games Workshop (GAW). Its fantasy miniatures and intellectual property have been in strong demand through lockdown. Other winners have been pharmaceuticals, food and drug retailers and personal goods, all made up of large, unglamorous but dependable stocks. The biggest losers so far are oil services and autos, both of which reflect the lack of travel during lockdown. Close behind are banks, which are essentially a
leveraged play on the prospects for the UK economy and the direction of interest rates and as such probably tell us more about the market’s view of what is to come than any of the winners. VALUE AND EARNINGS Unlike the US market, which was trading at stretched multiples before the coronavirus crisis, the FTSE 100 was only trading at or slightly above its long term
average valuation on the basis of cyclically adjusted earnings prior to the March sell-off. On the same basis, today it trades close to the level reached at the bottom of the great financial crisis in 2009 which suggests long-term investors could find value in largecap UK stocks. However, the flip side of the value case is the fact that earnings will take many years to
FTSE 100 TREND PE VALUATION CHART (TREND EPS GROWTH RATE 4.9%) FTSE 100: Trend PE, 1986-2020
Trend PE (x)
s we reach the halfway stage in what has been a year of upheaval, we thought we would look at which markets and sub-areas of those markets have held up best and which are still nursing their wounds. For the first half of the year the UK’s FTSE 100 benchmark of large cap companies had lost 16.9% while the FTSE 250 midcap index had lost 21.9% and the higher-growth AIM 100 market had lost just 8.4%.
Source: M23 Research. SD: Standard Deviation
02 July 2020 | SHARES |
9
NEWS UK 12M FORWARD ROLLING EPS UPGRADE / DOWNGRADE
(%)
Source: M23 Research
recover to their previous levels so reversion back to the mean for the market is going to take some time. Analysts are currently cutting their 12-month earnings forecasts for the UK market as a whole by more than a quarter, approaching the same level as in early 2009. Whether they subsequently raise them as fast as they did over a decade ago is the great unknown. For all the talk of a V-shaped recovery, few firms seemed to have any visibility in terms of earnings in their latest trading updates. First half reports, which will begin rolling in this month, are likely to be a write-off so investors need to concentrate on the outlook statements before coming to any conclusions on particular stocks.
pricing the extent to which the recovery could follow the traditional playbook’. On the other hand, the International Monetary Fund (IMF) has warned that the global economy faces the biggest slump since the Great Depression of the 1930s and that financial markets are ‘disconnected from shifts in underlying economic prospects’. Last week the IMF cut its 2020 global growth forecast to minus 4.9% from minus 3% previously due to lower consumption, demand shocks from social distancing and an increase in the savings rate as people hunker down. It also said that if there was a second wave of infections the global economy could flatline instead of growing 5.4% next year.
BULLS VERSUS BEARS Strategists such as Morgan Stanley’s Andrew Sheets are upbeat, arguing that the global economy was showing typical late-cycle characteristics before the pandemic, and that ‘the market is under-
BREXIT TIGHTROPE Politics is likely to play a large part in how markets behave in the second half of the year, both in the UK and abroad. While there seems to have been little progress so far on a Brexit deal with the EU,
10
| SHARES | 02 July 2020
European chief negotiator Michel Barnier believes an agreement is still within reach as long as the UK adheres to ‘the letter and the spirit’ of its non-binding declaration last year. There are hints the UK would accept tariffs and quotas in areas of its choosing if they allowed it to walk away from other conditions. Ultimately the ‘moment of truth’ will come at the EU summit in October when the bloc will want to see a draft agreement. TRUMP DUMPED? In the US, which is grappling with severe outbreaks of coronavirus across several states, investors will eventually have to switch their focus to the presidential election in November. The latest Financial Times poll calculator puts Joe Biden on 287 electoral college votes against 142 voted for Donald Trump, with support for Biden on both coasts far outweighing Trump’s narrow central southern base. Interestingly, key states such as Florida and Michigan are leaning towards Biden while Texas, the second biggest state in terms of college votes, is considered a toss-up. Despite his erratic performance, Trump is probusiness and pro-markets so if the current projections prove to be correct there could be volatility ahead for US stocks. By Ian Conway Senior Reporter
You won’t find our fund on a buy list. But that hasn’t stopped investors finding us.
For more than a decade, the FP Octopus UK Micro Cap Growth Fund has attracted investors who like to do their own research.
Our fund managers
The kind of investor who looks for a fund manager that does their homework, and for an investment team that has sharpened its skills over many years.
Richard Power
Lead Fund Manager
Our fund managers have over 700 meetings with companies every year. We manage money across the smaller companies spectrum. And often we’ll have met with a company’s management long before they list.
Dominic Weller Fund Manager
Past performance is not a reliable indicator of future returns. The value of your investment, and any income, can fall or rise. You may get back less than you invest. Smaller companies can fluctuate more in value and they may be harder to sell.
Chris McVey Fund Manager
31/05/2019 31/05/2020
31/05/2018 31/05/2019
31/05/2017 31/05/2018
31/05/2016 31/05/2017
31/05/2015 31/05/2016
FP Octopus UK Micro Cap Growth P Acc
3.9%
0.7%
18.6%
39.3%
10.6%
IA UK Smaller Companies TR
-7.0%
-4.0%
12.8%
27.6%
8.6%
Numis Smaller Companies plusAIM (-InvTrust) TR
-12.1%
-7.0%
6.3%
26.4%
12.0%
Past performance is not a guarantee of future returns.009790-2006
* Source: Lipper, 31/05/15 to 31/05/20. Returns are based on published dealing prices, single price mid to mid with net income reinvested,
net of fees, in sterling.
Flutter Entertainment’s shares are a winner The Flutter/Stars tie-up brings huge benefits for the gambling business
W
e believe the pandemic-induced shift towards online gambling and acceleration in the opening up of US states puts Flutter Entertainment (FLTR) in the box seat, stealing a march on the competition and underpinning growth. Flutter, which owns Betfair and Paddy Power, agreed an all share tie-up with Canadian sports betting company Stars Group in October 2019 to create the world’s largest online businessto-consumer betting company with forecast revenues expected to exceed £4 billion this year, according to data compiled by Refinitiv. The combined group has a better balance of sports and gaming revenues and a larger global reach, increasing diversification and growth opportunities. The pandemic-induced lockdown has accelerated the shift towards online, as highlighted by first quarter results to 31 March which showed a 200% increase in US gaming revenues, which offset the 46% fall in sports revenues. In addition to the online tailwind, pressure on public finances resulting from COVID-19 will increase the pace of US states opening to online sports betting and ultimately gaming, according to management. For example, California could
12
| SHARES | 02 July 2020
FLUTTER ENTERTAINMENT
BUY
(FLTR) £108.75 Market cap: £16.2 billion
raise around $400 million a year in taxes assuming a 15% tax rate, helping to ‘cut through’ some of the political obstacles. Using the UK spending per head as a proxy for US spending suggests gross revenues of $19 billion and earnings before interest, tax, depreciation and amortisation (EBITDA) topping $5 billion in sports betting by 2023. While only Jersey, Delaware and Pennsylvania currently permit online casino and poker, with Nevada online poker only, investment bank Jefferies expects around 20 states to pass the necessary legislation by 2023 out of 44 possible. The onset of the pandemic forced all gambling firms to cut non-essential expenses and raise new money to tide them over during lockdown and strengthen finances. Flutter successfully raised £812 million of fresh capital on 29 May which reduced the ratio of net debt-to-EBITDA by 0.9 times towards its one-to-
two times target. Arguably Flutter wouldn’t have moved so quickly to deleverage without the impetus provided by the pandemic. Importantly the company now has extra firepower to access further deals and enhance competitive positioning. Some of the money will be used to retain poker and casino customers it gained through lockdown. Analysts don’t expect a dividend for the 2020 financial year, but the cash reward should resume in 2021, albeit most likely at a lower level. Jefferies forecasts 134p in 2021 and 200p in 2022, the latter putting the dividend back at the level paid for the 2019 financial year. 000'S 11
FLUTTER ENTERTAINMENT
9 7 5
2019
2020
JPMorgan Japanese: buy Japan’s best-in-class innovators on the cheap Specialist Japan trust focuses on quality companies with strong balance sheets and structural growth
A
lthough the Japanese economy and its corporates haven’t escaped the effects of coronavirus, which compounded a hit from 2019’s consumption tax hike, the country has navigated the pandemic fairly well. Japan remains overlooked by many investors which is a shame since the nation is home to bestin-class innovative companies that are growing both at home and overseas. One way to gain exposure is via the JPMorgan Japanese Investment Trust (JFJ), which boasts a five-star Morningstar rating and sells on a 10.2% discount to net asset value (NAV) that belies a strong longrun record as well as a high Morningstar sustainability rating. First and foremost a capital growth-focused fund, JPMorgan Japanese’s investment managers Nicholas Weindling and Miyako Urabe seek the most attractively valued Japanese investment themes and companies in what remains an under-researched stock market. According to Morningstar performance data, the trust has delivered 10-year annualised NAV and share price returns of 13.6% and 14.7% respectively
JPMORGAN JAPANESE INVESTMENT TRUST
BUY
(JFJ) 547.92p Market cap: £877.5 million
versus the MSCI Japan GR USD benchmark’s 8.4% return. Being Tokyo-based gives Weindling, Urabe and the investment team a competitive advantage, as does their ability to leverage JP Morgan Asset Management’s formidable research resource. Unconstrained by sector or market cap, the unwavering focus is on quality companies with strong cash flows. Weindling and Urabe put money to work with innovative Japanese companies that boast strong future growth prospects and bring investors exposure to Japan’s new products, technologies and markets. It is also worth noting that in contrast to the dividend suspensions witnessed in Western markets, in general Japanese companies haven’t been cancelling dividends or buybacks thanks to the strength of corporate balance sheets. Investment themes to which
the portfolio already had exposure, such as automation, video game downloads and e-commerce, have been accelerated by the pandemic. Top holdings in its portfolio include Keyence, a rapidly growing factory automation business that manufactures sensors and has one of the highest operating margins of any company globally at around 50%. It has a stake in Hoya, which has a 100% market share in the glass substrate used in hard disk drives for data centres. JPMorgan Japanese holds stakes in Uniqlo clothing brand owner Fast Retailing, console maker Nintendo and online legal documents company Bengo4.com. It also has a stake in GMO Payment Gateway, an online payment platform set to benefit as cashless payments become more prevalent in Japan amid pandemic-induced health concerns over touching physical notes and coins. 600
JPMORGAN JAPANESE 450
300
2019
2020
02 July 2020 | SHARES |
13
This is an advertising feature
The Royal Mint Physical Gold ETC RMAU For Professional Investors Only
The UK’s Oldest Company – The Royal Mint – Issues first Financial Product Owning Gold Since 2001, the market for gold has increased by an average of 14% a year driven by new ways to invest and growing demand for gold from newly affluent middle classes in emerging markets like China and India. 1 As a relatively scarce resource, it has long been viewed as a ‘safe haven’ asset and a good way for investors to protect themselves against inflation or currency movements. These characteristics can help explain why demand for gold goes into overdrive during times of crisis, whether troubles occur in the equity markets such as CoronaVirus, or in the case of bond market crises like the credit crunch. Investors have many choices when it comes to buying gold, but each come with their own characteristics and risks: Buying Physical Gold Bars or Coins: Small bars and coins enable investors to own physical gold metal. Bullion Bars: At 400 oz, these bars are used by many large institutions as a way to own gold. London ‘Good Delivery’ bars are the global standard but are often too large for individual investors. Gold Mining Stocks: Investors can buy shares in gold mining companies. However, the company’s share price may not track the price of gold and the investor will not own any physical metal. Physical Gold ETCs: Backed by physical gold, these investment products allow investors to track the price of gold, giving them access to the properties and security of owning physical gold without the need to arrange for
storage and insurance separately.
The Royal Mint Founded in 886 AD by King Alfred the Great, The Royal Mint has an unbroken track record of trust and authenticity dating back over 1,100 years. As the UK’s home of gold, The Royal Mint is the leading provider of bullion bars and coins and the Royal Mint’s 35-acre site is home to a purpose-built precious metals storage facility, TheVaultTM - one of the most secure sites in the world.
1
In 2020, the Royal Mint made history again with the launch of its first listed financial product – The Royal Mint Physical Gold ETC (RMAU). Backed 100% by responsibly-sourced gold, The Royal Mint Physical Gold ETC enables investors to own gold securely, without the cost and risks of storing it themselves.
The Royal Mint Physical Gold ETC A Gold exchange traded commodity (ETC) is a financial instrument that tracks the price of gold and trades on a stock exchange in a way similar to a share. It is an efficient way for investors to buy gold securely as they do not have to store the physical gold themselves. Most Gold ETCs usually hold their gold in the vaults of banks in major financial centres such as London. The Royal Mint Physical Gold ETC - RMAU is unique in Europe in that the gold is held at The Royal Mint’s vault near Cardiff, Wales, which provides an attractive option for investors looking to diversify their custody arrangements away from banks. As with other investments, when you trade gold ETCs, your capital is at risk. RMAU can be traded through any stockbroker.
Responsible Gold investing In 2012 the London Bullion Market Association established guidelines for responsible gold sourcing covering environmental impact, responsible supply chain, workforce safety, human and labour rights, community impact, environmental stewardship, land use and water & energy use. As each bullion bar meeting these standards can be identified by a unique serial number, some pioneering companies like The Royal Mint can now offer products that provide responsibly sourced gold on a best endeavours basis, providing a solution for responsible investors. The Royal Mint Physical Gold is 100% backed by these bars.
Learn About Investing in Gold To learn more about investing in gold through The Royal Mint, please visit or contact HANetf via
[email protected]
When you trade ETFs and ETCs, your capital is at risk. For professional investors only. This content is issued by HANetf Limited, an appointed representative of Mirabella Advisers LLP, which is authorised and regulated by the Financial Conduct Authority.
REDROW (RDW) 439.7p
SUPERMARKET INCOME REIT (SUPR) 111.44p.
Gain to date: 6.1%
Original entry point: Buy at 105p, 2 April 2020
AS WE HOPED, groceriesrelated. 115
900
REDROW 700
105
500
95 300
2019
2020
SHARES SAYS: Redrow’s performance has clearly been very disappointing but we hope it can recover some ground in the remainder of 2020. Stick with the shares.
SUPERMARKET INCOME REIT 2019
2020
SHARES SAYS: We continue to see Supermarket Income REIT as a reliable source of dividends at a time when income investors have been hit by a wave of payout cuts and cancellations. Keep buying.
02 July 2020 | SHARES |
15
How Games Workshop became a FTSE 350 star performer Mid-cap star dominates a growing, global niche wait for a more attractive buying opportunity.
GAMES WORKSHOP SHARE PRICE: £81.40 MARKET CAP: £2.6 BN
N
ottingham-based fantasy miniatures manufacturer Games Workshop (GAW) has been the best FTSE 350 performer over the last five years, providing a total return, which includes reinvested dividends, of 1,600%. That means a £1,000 investment in 2015 would have turned into £17,000 today, which works out at an extraordinary compound annual growth rate (CAGR) of 76%. It demonstrates what can happen when strong earnings growth is accompanied by a rising price-to-earnings ratio (PE). We see Games Workshop as a unique business with strong returns which has arguably only
16
| SHARES | 02 July 2020
scratched the surface of potential growth opportunities. However, given the growth expectations already embedded in the share price, we believe it is best to
TURBOCHARGING RETURNS The chart shows that from 2015 onwards the one-year forward price-to-earnings ratio started to rise steeply from around 12 times to the current 29 times. Analysts often use forecast earnings because they are more relevant to investors than historical numbers. This should always be used with some caution but generally a rising PE means investors expect higher future profits. In Games Workshop’s case net profits have grown from £12.3 million in 2015 to around £70 million for the year to 31 May 2020, an increase of 5.7 times. If the PE had remained at 12,
SHARE PRICE AND PE HAVE ADVANCED RAPIDLY
Source: Factset
Share price (p, LHS)
1 year forward PE (x, RHS)
WHO IS GAMES WORKSHOP? Warhammer and Warhammer 40,000 is a miniature wargame based on Warhammer Fantasy Battle, the most popular wargame in the world. The game is set into the distant future where a stagnant human civilisation faces hostile aliens and malevolent supernatural creatures. The company doesn’t sell ready-to-play models but rather it sells boxes of model parts which enthusiasts are expected to assemble and paint. The tools, glue and paints are sold separately. Effectively the company serves and inspires millions of table-top hobbyists across the world. Board gaming is a global market growing at an estimated compound annual growth rate (CAGR) of 9% and is expected to be worth around $12 billion by 2023 according to consultancy Statista. today’s market value would only be £840 million compared with the actual £2.6 billion. A higher PE has increased the perceived value of the company significantly. Before dissecting the financials of the business to understand how the company has achieved such impressive growth, we explain some of the unique aspects of the business that add up to what famed investor Warren Buffett has called an economic moat and which the company calls its ‘Fortress Moat’. ECONOMIC ADVANTAGES Games Workshop is categorised
a retailer inside the FTSE 350 Leisure Goods sector, but in reality it is much more. Traditional retailers are intermediaries between suppliers of products and consumers. They apply a small mark-up for providing this service. However Games Workshop is vertically integrated, which means it designs, manufactures and distributes directly to the consumer via its 529 Warhammer stores, online customers and through over 6,000 trade partners. Effectively it controls all parts of the value chain, from design, manufacturing and distribution.
The company employs a 200 person design studio which creates all of the firm’s miniature designs, artworks, games and publications. That adds up to over 30 years of intellectual property rights. As we will elaborate later, the business is starting to reap some meaningful revenue from selling TV and production rights to its products. Focusing on fantasy characters instead of historical ones means future product innovation is only limited by the designers’ imaginations and provides endurance to the brand, a valuable trait. Investing in the best manufacturing and tooling equipment means the company makes the best quality and most detailed miniatures on the market which protects the business from inferior imitators. In addition the firm is much larger than its nearest rival which means it can manufacture the products more cheaply. Finally, controlling the whole supply chain means the company has flexibility to set prices. Once made all products are distributed from a warehousing facility in Nottingham to stores, trade partners and online customers or via hubs in Sydney and Memphis. STRONG RETURNS The financial consequence of possessing a ‘Fortress Moat’ and having control over the entire value chain is that the company achieves a very high return on capital employed (ROCE). According to company data the ROCE hit 111% last year, one of the highest in the UK market. This means the company can 02 July 2020 | SHARES |
17
RETURN ON CAPITAL EMPLOYED (ROCE) Year
ROCE %
2014
32.1
2015
30.5
2016
31.8
2017
65.0
2018
97.2
2019
82.3
Source: Sharepad
easily self-fund future growth from internally generated cash with the flexibility to pay consistent dividends. Since listing in 1995 the company has grown its dividend by a CAGR of 9.5%. WHAT HAS DRIVEN SUCH STRONG GROWTH? The short answer to what is behind the firm’s rapid expansion was eloquently articulated by small cap fund manager at Aberdeen Standard, Harry Nimmo who told Shares ‘new
management made it fit for the internet age’. Having joined the company in 2008 and served time as chief financial and operating officer, Kevin Rowntree took up his current chief executive role in January 2015. Stronger revenue growth can be traced back to the 2016 launch of online hub WarhammerCommunity.com, which attracted 5 million users and 70 million page views over the first two years.
STRONG GROWTH IN CONTENT ON WARHAMMER TV Number of Games Workshop videos uploaded to YouTube each year
Source: YouTube
18
| SHARES | 02 July 2020
In the 2019 half-year report the company highlighted 48% growth in users accessing its online hub while sessions per user also increased, ‘meaning our fans are visiting more often and are more engaged with the content’. At least one new video is uploaded to Warhammer TV every day across YouTube and Facebook platforms, detailing how to build and paint models as well as unbox the kits. This is a marked improvement from 2016 when only 40 videos were uploaded in a year. Twitch is a streaming service that allows users to watch live and pre-recorded recording of a broadcaster’s video game. Warhammer is featured heavily with a busy programme that includes ‘Hang out and Hobby’ which broadcasts from 4pm to 7pm during weekdays and live content at 3pm every day bar Wednesday. LICENSING DEALS According to some analysts there was a step change in the company’s approach to licensing from around 2015, which has resulted in income increasing from £1.5 million to around £16 million in the year to 31 May 2020. All in all, successful social media engagement has not only resulted in the company selling more products to its existing customer base but has also attracted new, younger hobbyists. Revenue has gone from £119 million in 2015 to around £270 million in 2020. However operating profit has increased five-fold.
The table shows the operating margin has more than doubled over the last few years as more revenue was captured as profit. This is due to management keeping a tight lid on administration expenses which have not kept pace with the expanding size of the business. As a proportion of revenue, they have fallen by over a third, allowing operating margins to expand appreciably. While the company should be applauded for good financial controls, it has also benefited from an increase in volumes going through the manufacturing and warehouse facilities at no marginal cost. One underappreciated aspect of Games Workshop’s business model is the huge fan base that in effect acts as a free sales and marketing team. The move to online has accelerated this effect because of the viral nature of social media, where videos, podcasts, interviews and tutorials spawn yet ever increasing content. In turn, discussion among users drives awareness,
GAMES WORKSHOP PROPORTION OF REVENUES 2014 (%)
2019 (%)
Gross Profit
70.0
67.5
Admin Expenses
56.5
36.0
Operating Margin
13.6
31.6
Source: Sharepad
attracting new hobbyists, creating a virtuous circle. HOW FAR CAN OPERATING MARGINS EXPAND? A key consideration for investors
is to make an assessment of how sustainable the current economic advantages enjoyed by Games Workshop can continue. Analysts at Jefferies believe sales growth momentum will continue for the next decade, eventually fading to around 3% a year by 2030. Recognising the benefits from operational leverage they expect the operating margin to continue to expand to 40.5%. The higher margins are also expected to get a boost from increasing, higher margin license revenues. By Martin Gamble Senior Reporter
02 July 2020 | SHARES |
19
THIS IS AN ADVERTISING PROMOTION
BEYOND OIL: THE WORLD’S CHANGING ENERGY MIX BLACKROCK ENERGY AND RESOURCES INCOME TRUST PLC
The way the world fuels its transport, heats its homes and powers its industry is changing, but the path to renewables will not be linear, says Mark Hume, Co-Manager of the BlackRock Energy and Resources Income Trust plc.
Mark Hume Co-Manager, BlackRock Energy and Resources Income Trust plc Capital at risk. The value of investments and the income from them can fall as well as rise and are not guaranteed. Investors may not get back the amount originally invested. The world’s energy mix is in motion. The most recent data from the International Energy Agency shows that while around a quarter of the world’s electricity is powered by renewables, capacity is expected to expand by 50% by 20241. The path ahead if clear: dominant fossil fuels are being replaced and a new energy infrastructure is emerging. The adoption of renewables has been strongest in the electricity sector, where solar and wind power have seen rapid growth driven by policy initiatives and falling prices1. The size of the global wind power market grew 35% in 2018 and is expected to reach $124.5 billion by 2030 as the AsiaPacific region drives growth2.There is similar growth potential in solar, hydro and biofuels as the world transitions to a less carbon intensive energy system. Any energy strategy needs to reflect this shift and the BlackRock Energy and Resources Income Trust recently moved to incorporate more companies linked to the global energy transition in its portfolio mix. Today’s GDP growth is less energy intensive, which means that there is less growth in core energy markets. As such, this shift is important to build sources of new growth into the portfolio. THE ROLE OF OIL However, electricity accounts for only a fifth of global energy consumption1. There are still areas such as heating where renewables are a relatively small part of the mix. Renewables met only 10% of global heat demand in 2018 and is only expected to reach 12% by 20243. Within transportation,
dependence on oil is partly being addressed through electrification, but although this is growing fast, it is from a small base. As such, the energy mix is likely to include some traditional sources of power for some time. There is also the question over whether the current coronavirus outbreak may put a temporary pause on the move to decarbonise as governments turn their attention elsewhere. It is our view that the pace of the energy transition will evolve and shift over time, which may change the opportunity set at any given point in the cycle. Equally, it is important to note that many ‘traditional’ energy companies – oil majors and so on – are likely to play a key role in the energy transition. Many of them have championed renewable fuels. BP for example, is the UK’s biggest name in electric vehicle (EV) charging, while also holding investments in biofuels, wind and solar. It has committed to becoming a net zero company by 2050 or sooner4. CHANGING BUSINESS MODELS Royal Dutch Shell has developed wind and solar-power projects, encouraged the adoption of hydrogen electric energy and invested in low-carbon start-ups — spanning electric vehicle charging to home energy storage5. In early 2020, Rio Tinto shared plans to invest around $1 billion over the next five years to support the delivery of its climate change targets6. It is also working towards net zero emissions from operations by 20506. Increasingly, it is not a question of a company being on one side or another but all being part of a broader transition. The question is what happens in the meantime. A side effect from the COVID-19 outbreak has been extreme volatility in the oil price including, at one point, a dip to a negative rate. There have been many questions over whether the oil producers can weather the short-term shock. We estimate that oil demand may be as much as 10% lower for the year ahead. While some of the recent price fluctuations have been anomalous, there can be little doubt that the oil price could remain under pressure for some time. VULNERABILITY Certainly, there may be bankruptcies among those companies with the highest costs of production – the US shale companies look particularly vulnerable. However, for larger companies, with lower debt and strong management teams, these low oil prices should not disrupt their longerterm strength and it may even accelerate their adoption of alternative energy sources. Many entered this crisis in a good position with strong balance sheets, advantaged assets and a clear and well-articulated strategy to be part of the solution.
THIS IS AN ADVERTISING PROMOTION
The world’s energy mix is fluid. While the path of travel is clear – towards renewables and away from fossil fuels – it may not be linear. Active management can help direct investment to those companies with the greatest influence at any given point in the cycle. Risk: The specific companies identified and described above do not represent all of the companies purchased or sold, and no assumptions should be made that the companies identified and discussed were or will be profitable.
1
For more information on this Trust and how to access the opportunities presented by the energy and resources markets, please visit Unless otherwise stated all data is sourced from BlackRock as at May 2020. All amounts given in USD.
TO INVEST IN THIS TRUST CLICK HERE
IEA, Feb 2020. 2 Power Technology, Nov 2019. 3 IEA, Feb 2020. 4 BP, Feb 2020. 5 FT, Sep 2019. 6 FT, Feb 2020. Risk Warnings
professional advice prior to investing.
Past performance is not a reliable indicator of current or future results and should not be the sole factor of consideration when selecting a product or strategy.. Emerging markets risk: Emerging market investments are usually associated with higher investment risk than developed market investments. Therefore, the value of these investments may be unpredictable and subject to greater variation. Mining investments risk: Mining shares typically experience above average volatility when compared to other investments. Trends which occur within the general equity market may not be mirrored within mining securities..
Net Asset Value (NAV) performance is not the same as share price performance, and shareholders may realise returns that are lower or higher than NAV performance. The BlackRock Energy and Resources. ID: MKTGH0520E-1196782-4/4
DIGITAL PAYMENTS BOOM How to profit from the decline of cash
By Steven Frazer News Editor
I
n 2020 you are no longer faced with the dilemma of fumbling around in your pocket for change to pay for your coffee as the train readies to leave. Instead you can simply tap your debit or credit card, or even your phone or smartwatch, on the machine and get on your way. This is the modern world of cashless, digital payments. It might not be what we want all of the time, but there is safety and convenience that most people appreciate. In this article we examine this fast growing
22
| SHARES | 02 25 June July 2020 2020
industry and explain why ordinary investors might want to gain exposure to the theme. We also highlight some of the industry’s most significant players, emerging names, and offer a selection of companies and funds that will give you exposure. DIGITAL PAYMENTS LANDSCAPE The rise of digital cashless transactions is hardly new. People have been paying for more stuff
SPENDING THE MOST DIGITAL DOLLARS China
$1.92 trillion
US
$895.79 billion
Japan
$165.21 billion
UK
$164.41 billion
South Korea
$113.52 billion
Source: Statista
electronically for years, usually with debit or credit cards. But the increasing shift to online shopping, technology improvements, such as better smartphones and smartwatches, and the emergence of digital wallets has created a boom in the digital payments industry. This has drawn interest from lots of fund managers, from mainstream global growth funds, like the JPM Global Unconstrained Equity Fund (B235QT6) and the Liontrust Global Equity Fund (3067916), to thematic specialists, such as tech investor Polar Capital Global Technology Fund (B42W4J8) and Robeco Global Consumer Trends Equities (BZ1BV36), which does what it says on the tin and backs consumer growth stocks. The global Covid-19 pandemic has greased the wheels of transition and accelerated the switch for millions, says Stephen Yiu, chief investment officer at Blue Whale Capital and lead manager of the Blue Whale Growth Fund (BD6PG78), with payment platforms and merchants using dirt cheap debt to pay for digital investment. People are using digital payment options to avoid contact and the spread of infection that direct cash handling risks, while it has also made social distancing easier to manage during these testing times. But it is the mass closure of shops during lockdown that really pushed people online and, in many cases, use digital payments for the first time.
The mass closure of shops during lockdown pushed people online and many tried digital payments for the first time
For example, at the end of April Amazon revealed itself as one of the big winners of the coronavirus pandemic. It announced revenue of $75.4 billion in the first three months of the year as millions of consumers used the platform to buy healthcare and antiseptic products, bits and pieces for odd jobs around the home, food and other necessities and much else during lockdown. That meant a better than expected 26% yearon-year revenue jump, and was calculated at more than $33 million of sales an hour. Black Friday sales in November 2019 showed an 82% global increase in purchases made with mobile wallets compared to 2018, according to data compiled by Research & Markets. The preference for cashless payments offline and one-touch payments online is changing the e-commerce landscape. Buyers want to be able to pay anytime, anywhere and with minimum hassle. According to Verdict data, around 720 billion digital transactions will be made in 2020, up from 641 billion in 2019. That could be worth more than $4.4 trillion this year, if market and consumer data firm Statista is right. Compound average growth is forecast to run at 17% a year over the next five years, projecting nearly $8.3 trillion of digital payments by 2024. By then, Verdict estimates that we’ll be making in excess of 1.1 trillion transactions globally, over smartphones and other connected gadgets. PUSHBACK ON CASHLESS Plenty of people still prefer to use cash. It is often the best, and often only, way to shop at street markets or buy items in small stores like your local newsagent or corner shop. There has also been push back from consumers and campaigning organisations that digital payments can be a challenge for vulnerable groups like the elderly, those with poor credit histories or simply the less tech savvy. In Sweden for example, which leads the world in cashless payments, there is a growing feeling that the pace of change to cashless is moving too fast. In Stockholm’s Odenplan square, at the heart of the city centre and a hotspot for visiting tourists, you’ll struggle to buy a cup of coffee and a bun with cash today, while you can no longer use coins or notes to hop on one of the capital’s buses, just like in London. 02 July 2020 | SHARES |
23
According to Riksbank, Sweden’s central bank, cash retail sales transactions in the Scandinavian nation have dropped from around 40% to below 15% over the past decade. Data such as this has led Swedish National Pensioners’ Organisation to lobby the government on behalf of its 350,000 members to force shops, cafes and other businesses to accept cash in Sweden for as long as people have the right to use it. Despite these issues, governments around the world continue their push towards cashless societies. Ostensibly, having a digital paper trail for all transactions would decrease crime, money laundering and tax evasion. This is driving favourable regulation, which seeds greater adoption. According to data from digital payments researcher Mordor Intelligence, electronic payments in the UK have ‘experienced constant and sustained growth,’ with debit cards overtaking cash as the most popular form of payment in recent years. In 2016 about a quarter of in-store payments were made digitally, but that is now well over 50%, says Mordor.
WHAT IS A DIGITAL WALLET? A digital or e-wallet is a way of electronically storing all your payment details in one place. They can make it safer and simpler for you to make cashless purchases online and in-store. As well as storing your payment details for online payment systems, like PayPal, Google Pay and Apple Pay, digital wallets can also connect with traditional bank accounts and store your credit and debit card information. According to Blue Whale’s Stephen Yiu, data shows that having someone’s financial details recorded on a digital wallet increases the chance of a customer completing an online purchase by 50%.
STOCKS TO PLAY A CASHLESS SOCIETY In simple terms there are two types of digital payment provider; the Mastercard/Visa duopoly, or the digital wallet providers, such as PayPal, Apple Pay, Amazon Pay, Google Pay, Shopify, Square, Worldpay and a long list of others. Our top stocks to buy are Mastercard, Visa and Paypal. MASTERCARD (MA) BUY $289.34
VISA (V) BUY $189.27 According to Blue Whale’s Stephen Yiu, Mastercard and Visa are the lower risk ways into this growing space with one or the other appearing on the front of any of the debit or credit cards sat in your wallet or purse. Visa, which has the larger market cap of the two ($403 billion versus $290 billion), has a greater US domestic and debit card slant, while Mastercard has a modest advantage in credit cards. Both make their money by taking tiny percentage charges every time a one of their cards is used. But with transactions in the trillions every year across billions of merchants, that small 220
360
VISA
MASTERCARD 180
280
200
24
140 2019
| SHARES | 02 July 2020
2020
2019
2020
cut on card payments really adds up. Revenue this year is forecast at $15.6 billion (Mastercard) and $20 million (Visa), with net income of $6.5 billion and $11.1 billion estimated respectively. Operating margins run at more than 50%. The stocks trade on 12 month price-to-earnings multiples in the region of 35-times, and while they admitted transaction volumes slowdown during lockdowns (travel spending has been very hard hit), Yiu believes extra online shopping will have offset much of the decline this year.
with new entrants emerging all of the time. For example Stripe, the private digital payments provider popular with start-ups, was only set up 10 years ago by Irish brothers Patrick and John Collison, but was valued last year at $35 billion. THREE GOLDEN RULES FOR DIGITAL WALLETS
•Easy to use •Competitively priced •Trusted by consumers
Source: Stephen Yiu, Blue Whale Capital
PAYPAL (PYPL) BUY $170.87 PayPal is the dominant digital wallet provider, and our top pick, with more than 20 million online merchants signed up. To put that into perspective, Facebook has only half that number of merchants advertising across its ecosystem, Square says it has about 2 million merchants while Worldpay has 400,000. PayPal was the first digital payment company of scale, and it is probably the most familiar to readers. This is an intensely competitive space
PAYPAL ACTIVE USERS HAVE GROWN RAPIDLY
PayPal has used its first mover advantage to good effect by continuing to evolve by investing in the business. This has created a network effect, a virtuous circle where the company gets bigger by attracting more merchants to its platform, which makes it more attractive to new merchants, and so on. Yiu believes there are three factors more important than any others when considering digital payment platforms; ease of use for consumers, cost to merchants and, above all else, trust. When it’s your bank and credit card details on the line, you need to be sure that your information will be held securely and used responsibly. That’s another advantage for PayPal. But competition does cap profit margins, at 23.2% for PayPal last year. They are creeping higher, largely thanks to its current dominant position, with 25% predicted over the coming year or two, but PayPal is never likely to match the 50%-plus of the card giants. Still, the business has the capacity to grow much faster as the share of cardless digital transactions escalates.
Period
Users
Q1 2020
325 million
Q1 2019
277 million
Q1 2018
237 million
Q1 2017
205 million
Year
Amount spent
Q1 2016
184 million
Q1 2020
$190.6 billion
Q1 2015
165.2 million
Q1 2019
$161.5 billion
Q1 2014
148.4 million
Q1 2018
$132.4 billion
Q1 2013
127.7 million
Q1 2017
$100.6 billion
Q1 2012
109.8 million
Q1 2016
$81.1 billion
Q1 2011
97.7 million
Q1 2015
$63.0 billion
Q1 2010
84.3 million
Q1 2014
$53.7 billion
Source: Statista, company accounts
MONEY SPENT OVER PAYPAL HAS JUMPED 255% SINCE 2014
Source: Statista, company accounts
02 July 2020 | SHARES |
25
FUNDS TO PLAY A CASHLESS SOCIETY ALLIANZ US EQUITY (B4N1GS7) % OF PORTFOLIO: PAYPAL 2.3%, MASTERCARD 2.3% The Allianz fund prefers large companies with good prospects for increasing profits in the years ahead and which trade on attractive valuations. That valuation caution possibly explains its lukewarm performance in rcent years, where it has struggled to beat its IA US benchmark. The fund draws from healthcare and other consumer sectors but is currently most exposed to finance and technology, with stakes in Alphabet, Amazon, and Apple alongside several of the big digital payments specialists. For investors looking to use funds to get exposure to the rapidly growing digital payments space, here are three good options: TROJAN GLOBAL EQUITY (B0ZJ5S4) % OF PORTFOLIO: VISA 5.6%, PAYPAL 6.8% Invests globally with a longer-term remit of at least five years, is tech heavy and includes American Express, Google Pay-parent Alphabet and UKlisted credit checker Experian (EXPN). Run by joint managers Gabrielle Boyle and George Viney, the Trojan fund has outstripped its Investment Association Global benchmark over one, three and five years by some margin, 98.7% versus 59.3% on the five year measure. 420
370
TROJAN GLOBAL EQUITY O ACC 320
2019
2020
380
320
ALLIANZ US EQUITY C ACC 260
2019
2020
BLUE WHALE GROWTH FUND (BD6PG78) STAKES: MASTERCARD, PAYPAL, VISA (ALL IN TOP 10 HOLDINGS 31 MAY) This is a portfolio of technology-based growth companies where manager Stephen Yiu hopes to pick out best-in-class ‘winners’. This is a very concentrated fund of about 25 stocks in the US and Europe which it follows daily, although that makes it fairly high risk. It’s a young fund, set-up in 2017 but it has smashed its IA Global benchmark in each of the past two years, with 18.2% and 19.9% returns respectively. 170
LF BLUE WHALE GROWTH R GBP ACC
155 140 125 2019
26
| SHARES | 02 25 June July 2020 2020
2020
Good portfolio planning matters more than good luck The VIX index helps investors measure levels of market volatility
T
he twentieth-century American journalist Edward R. Murrow may be best known for his ground-breaking reports from Buchenwald in 1945 and the manner in which he signed off each of his broadcasts by saying ‘Good night and good luck’. That phrase was ultimately used in 2005 as the title of a film that depicted his battle with communism-obsessed Senator Joseph McCarthy but this column’s favourite pearl from Murrow is his comment that: ‘Anyone who isn’t confused doesn’t really understand the situation.’ In the narrow context of financial markets, this seems particularly apposite. Equity markets switched from blind panic in late February and early March to what felt like exuberance by midJune. investors, and the range of emotions can be tracked, albeit in a rather shorthand form, by the VIX index. PANIC TO OPTIMISM AND BACK The so-called ‘fear index’, which can be tracked for free via the internet, measures expected future volatility in the US equity market (there used to
28
| SHARES | 02 July 2020 this year of 82.7 in March, when it surpassed the 80.9 peak seen in November 2008 as the Great Financial Crisis reached its zenith. A long-term chart shows that the VIX tends to peak as share prices bottom and vice-versa. This makes sense in that valuations will be at their cheapest (and therefore most rewarding) during panics and sentiment is washed out, while valuations will be at their highest (and therefore least rewarding, at least over the long-term) when animal spirits are rampant and confidence is high.
RUSS MOULD
AJ Bell Investment Director THE VIX CAN BE A HELPFUL CONTRARIAN INDICATOR, AT LEAST AT THE EXTREMES
2020
2018
2016
2014
2012
2010
2008
2006
2004
2002
2000
1998
1996
1994
1992
1990
Source: Refinitiv
What is eye-catching about the rally in the S&P 500 US equity benchmark since March is how
2019 more than twice the long-run average) are pretty rare – the ‘fear index’ has only got there in nine years out of 30 and for just over 200 days in total (barely 2.5% of the time). hits 40 is pretty mixed but gold can point to seven gains and just two losses so the precious metal could be considered as a possible diversifier, especially if investors fear a sustained economic downturn.
2020
Source: Refinitiv
02 July 2020 | SHARES |
29
RUSS MOULD
AJ Bell Investment Director Admittedly, an argument in favour of portfolio diversification is hardly new but in its defence this column will reach for a third and final quote from
Ed Murrow, by way of conclusion: ‘The obscure we see eventually. The completely obvious, it seems, takes longer.’
GOLD HAS HISTORICALLY DONE BETTER THAN THE S&P 500 IN YEARS WHEN THE VIX HITS 40 Number of VIX readings of 40+
S&P 500 annual return (%)
Gold annual return in $ (%)
1998
15
26.7%
-0.5%
2001
4
-13.0%
1.4%
2002
11
-23.4%
24.0%
2008
64
-38.5%
3.1%
2009
67
23.5%
27.1%
2010
3
12.8%
29.3%
2011
11
0.0%
11.1%
2015
1
-0.7%
-10.4%
2020
34
-6.9%
16.5%
Source: Refinitiv
READ MORE STORIES ON OUR WEBSITE
Shares publishes news and features on its website in addition to content that appears in the weekly digital magazine. THE LATEST STORIES INCLUDE: RIO TINTO REACHES POWER DEAL FOR HUGE COPPER AND GOLD MINE
OMEGA DIAGNOSTICS READIES ANTIBODY HOME TEST KIT
SHARESMAGAZINE.CO.UK SIGN UP FOR OUR DAILY EMAIL 30 | For SHARES |the 25 June latest 2020 investment news delivered to your inbox
SCOTTISH MORTGAGE INVESTMENT TRUST
We invest in profound change, disruptive technologies and big ideas. Not just the stock market. We don’t see Scottish Mortgage Investment Trust as simply trading stocks on the world’s markets. We see our role as seeking out those genuinely innovative businesses that are shaping the future. We call it investing in progress. And by using our skills as actual investors, not simply stock traders, we believe we can deliver strong returns for your portfolio. Company. Baillie Giffford & Co Limited is authorised and regulated by the Financial Conduct Authority. The investment trusts managed by Baillie Gifford & Co Limited are listed UK companies and are not authorised and regulated by the Financial Conduct Authority.
Why Scottish Mortgage won’t go the way of Woodford There are good reasons why shareholders have given the investment trust permission to invest in a greater proportion of privately-owned companies
S
hareholders in FTSE 100 member Scottish Mortgage (SMT) have just voted in favour of the popular investment trust increasing the maximum percentage of privately-owned businesses in its portfolio from 25% to 30%. This is an important development for the trust for several reasons, one of which is the heightened risks associated with investing in unquoted companies – businesses that do not trade on a stock market. Disgraced fund manager Neil Woodford fell out of favour partly because he deviated from his tried and trusted strategy of buying mainly large caps stocks whose income potential had been undervalued by the market. Woodford moved away from what had made him so successful at his previous employer Invesco Perpetual and instead invested in lots of tricky-to-value unquoted assets in sectors in which he 900
Scottish Mortgage’s portfolio includes Zoom
wasn’t an expert. These illiquid investments were then difficult to sell when he needed to hand investors back their cash. Given that this style drift was at the heart of the Woodford debacle, should risk-averse investors be worried by the developments at Scottish Mortgage? We think not.
SCOTTISH MORTGAGE FTSE ALL SHARE
600
300 2011
32
2012
| SHARES | 02 July 2020
2013
2014
2015
2016
2017
2018
2019
DIFFERENTIATED PROPOSITION The rapid rise of some of Scottish Mortgage’s most successful investments over the past couple of decades, and their apparent resilience during economic hardship, means some of its biggest stakes increasingly feature in passive tracker funds. One of the ways Scottish Mortgage can diverge from more mainstream funds is to unearth opportunities among privatelyowned companies not listed on stock markets. GREAT TRACK RECORD Shares is a long-run admirer of Scottish Mortgage, which gives
investors a way to access the world’s most exciting growth companies. Co-managers James Anderson and Tom Slater identify companies, enabled by technology, which they believe have the potential to be much greater in size in the future thanks to having a proposition which is scalable and could be market-leading in time. They will hold on to these investments once investee companies become market leaders, thereby turbo-charging returns for shareholders. The investment trust performed strongly during the market weakness surrounding the pandemic and economic shutdown and has also benefited during the equity market rebound thanks to its focus on tech companies, disruptive businesses and a relatively high weighting in Chinese domiciled companies. For instance, the trust owns shares in the likes of video conferencing star turn Zoom, online shopping-to-cloud services colossus Amazon, Google-parent Alphabet and Illumina, which is building advanced equipment to unlock the power of genetic science. GOING WHERE THE GROWTH IS The exposure to unquoted companies in Scottish Mortgage’s portfolio continues to grow rapidly. It has risen from only 4% of net asset value (NAV) in 2015 to 22% as of 31 March 2020. For Scottish Mortgage to crystalise any value creation from its unquoted investments, it would need to find a buyer
Illumina, another company featured in Scottish Mortgage’s portfolio, works in the field of genetic science
for its holdings privately or wait until one of these holdings lists on a stock market so it can freely trade the shares. However, its team are masters at finding excellent growth companies and we have great faith in them picking the right ones. Analysts at investment bank Stifel say the level of disclosure on unquoted companies in Scottish Mortgage’s accounts has improved and a number of these companies have delivered strong returns for investors in recent years. Fund managers Anderson and Slater argued for the 5% increase in the unlisted exposure to 30% in the belief that ‘it is just as important to ensure that further investments may be made in those private companies showing real progress, as it is to ensure that all new opportunities may be judged equally on their fundamental merits’. The unquoted exposure increase will enable them to continue to invest in the best opportunities available, whether they be public or private companies, without changing
the nature of the investment proposition. They say that if Scottish Mortgage hadn’t made this amendment, this valuable flexibility would have become severely constrained and largely dependent on the timings of stock market flotations of existing unquoted companies, to the clear detriment of shareholders. As Scottish Mortgage clearly articulated in its full year results (15 May), ‘equity investing is all about capturing long run compounding returns’ and one of the important advantages the trust has when investing in established private companies is the ability to continue owning such businesses as and when they become public companies. ‘This means that it is possible to capture the benefits from the long run compounding of returns as they grow from a lower starting value.’ GOOD ACCESS TO GROWTH COMPANIES Stifel points out that investment manager Baillie Gifford has unparalleled access to many of 02 July 2020 | SHARES |
33
VARIOUS UNQUOTED INVESTMENTS IN SCOTTISH MORTGAGE’S PORTFOLIO Company
Total assets (%)
Country
Industry
Ant International
2.3
China
Financial Services
Ginko BioWorks
1.8
US
Synthetic Biology
Tempus Labs
1.5
US
Healthcare AI
TransferWise
1.1
UK
Financial Services
Grail
0.8
US
Healthcare
Bytedance
0.7
China
Social Media
Space Exploration Technologies
0.6
US
Satellite Comms & Aerospace
Tanium
0.5
US
IT
Affirm
0.3
US
Financial Services
Full Truck Alliance
0.3
China
Logistics
Palantir Technologies
0.3
US
IT
Stripe
0.3
US
Financial Services
Airbnb
0.2
US
Travel & Leisure
AUTO1
0.2
Germany
Automotives
Snowflake
0.1
US
IT
Source: Scottish Mortgage
these unquoted opportunities. It says: ‘The unlisted portfolio continues to become a more significant part of Scottish Mortgage’s portfolio and potentially returns. The performance of the unlisted segment of the portfolio, as a whole, appears to have been relatively good in the past year at around +11% and this is similar to the performance of the NAV of the rest of the portfolio over the year at +13.7%.’ Admittedly, there have been disappointments – par for the course when investing in unquoted growth hopefuls – but there have also been some significant successes, with five initial unquoted investments delivering annualised returns in excess of 40% per year in recent years. 34
| SHARES | 02 July 2020
SEVERAL ADVANTAGES Shares believes Scottish Mortgage and Baillie Gifford are well positioned to invest in unquoted companies for two key reasons. Firstly, Anderson and Slater take a long-term view which suits growing unquoted companies – a number of the listed holdings have been owned in excess of 10 years – and they like to hold on to investments for a long time, meaning that a stock market listing is not necessarily the point at which Scottish Mortgage will exit an investment. Secondly, the closed-end structure of an investment trust works well with an unquoted strategy given there is no requirement for immediate liquidity, unlike an open-ended
fund such as the ill-starred Woodford Equity Income. PORTFOLIO EXCITEMENT Unquoted companies are growing in importance to the returns delivered by Scottish Mortgage too and add a sprinkle of diversification to a trust with a concentrated portfolio including only 43 listed companies as of the end of May; the largest investment is electric vehicle maker Tesla, followed by Amazon. Within the trust’s largest 30 investments there are six unquoted companies, the largest being online financial services platform Ant, a subsidiary of Chinese internet titan Alibaba. While there have been some disappointments, Stifel says there have also been numerous
© Official SpaceX Photos
significant successes. It points out that of the 59 investments that Scottish Mortgage originally backed as unquoted companies, including music and podcast streaming platform Spotify and Alibaba, 21 have delivered annualised returns north of 10% per year, with five of those names, including Alibaba and Slack Technologies, returning over 40% on an annualised basis. The best performer is Vir Biotechnology whose stake was initially purchased in 2017 and it has returned 115% annualised. The top unquoted holdings by scale include Elon Musk’s rocket and spacecraft designer SpaceX. Unquoted portfolio investments expected to float on a stock market in the future
SCOTTISH MORTGAGE: TOTAL ASSETS IN UNQUOTED COMPANIES
20%
17% 15% 12%
4%
4%
2014
2015
2016
13%
2017
2018
2019
2020
Source: Scottish Mortgage
include accommodation platform Airbnb; Ant Financial; Bytedance, which owns the social media phenomenon TikTok; and data integration software play Palantir Technologies. Other private companies in Scottish Mortgage’s portfolio that should excite investors include online payments platform Stripe, cloud data warehousing platform Snowflake and Indigo Agriculture, a company that analyses plant microbiomes to increase crop yields. It is also worth noting stakes in electric aircraft hopeful Joby Aero and Recursion Pharmaceuticals, which uses machine learning to improve drug discovery. James Crux, Funds & Investment Trusts
02 July 2020 | SHARES |
35
FIRST-TIME INVESTOR
Why do companies join the stock market and which ones can I invest in? We examine the ‘menu’ of shares from which an investor makes their selection
I
n the previous part of this series we defined what a share was, how you can generate a return by investing in them and how this return can be made up both of dividends and capital gains. In this article we turn to the menu from which you can pick shares. After all, shares come in different sizes, shapes and flavours just like the dishes you choose in a restaurant or from a takeaway menu. Before we stretch the analogy too far it is worth clarifying that we are talking about the wider stock market and the different exchanges, sectors and stocks it encompasses. It is this market of buyers and sellers which enables you to buy and sell shares. In the UK nearly all shares are traded on the London Stock Exchange.
Did you know? Investment trusts and exchange-traded funds are also traded on the stock market
36
| SHARES | 02 July 2020
WHY DO BUSINESSES JOIN THE STOCK MARKET? There are a variety of reasons why a company joins the stock market. These include raising the profile of the business, increasing its credibility with customers and prospective lenders, and potentially to use shares for acquisition purposes. A key motivation is to gain access to investors’ capital and as a way for the founders of a business or staff to profit from a successful venture by selling some of their interest to new shareholders. Firms typically raise money
when they first join the stock market and may well follow up with further issues of shares when they need more cash to finance their growth ambitions. Typically, but not always, a business will wait until it has reached a certain level of maturity before joining the stock market, perhaps already generating a profit. After all, this is not a cost-free equation, there are significant fees and responsibilities associated with being a public company. Engaging with the market will take up a significant chunk of management’s time.
FIRST-TIME INVESTOR global standard which has four tiers with 11 different industries, 20 supersectors, 45 sectors and 173 subsectors. As a snapshot of how this works, the accompanying table shows the consumer staples industry classification and the supersectors, sectors and subsectors within it. This separation into different sectors makes it easier for an investor to identify potential investment opportunities.
You can’t buy shares in any company you want. Some companies like Lego are privately-owned and so the general public cannot buy shares in the business
Not all companies have a stock market listing. Confectionery giant Mars and toy maker Lego are just two examples of highprofile brands which have no stock market presence as they are privately owned. THE LIST OF COMPANIES ON THE STOCK MARKET You can download a full list of all the companies on the stock market from the London Stock
Exchange website. According to the World Federation of Exchanges, as of May 2020 there were 2,359 companies listed on the London Stock Exchange with a total market value of $3.16 trillion. Shares are divided into different sectors based on the industry in which they operate. There are different layers of sector – the Industry Classification Benchmark is a
SHARE PRICE INFORMATION The UK stock market is open from 8am until 4.30pm. During this time share prices will probably move up and down depending on demand from people wanting to buy and sell. Share prices will be heavily influenced by news either specific to a company or to sectors or even broader factors such as central bank interest rates, economic activity and political events.
FROM CONSUMER STAPLES TO BREWERS Industry (Level 1)
Supersector (Level 2) Sector (Level 3) Beverages Food, beverage and tobacco
Food producers
Consumer staples Tobacco
Personal care, drug and grocery stores
Personal care, drug and grocery stores
Subsector (Level 4) •
Brewers
•
Distillers and vintners
•
Soft drinks
•
Farming, fishing, ranching and plantations
•
Food products
•
Fruit and grain processing
•
Sugar
•
Tobacco
•
Food retailers and wholesalers
•
Drug retailers
•
Personal products
•
Nondurable household products
•
Miscellaneous consumer staples
Source: FTSE Russell
02 July 2020 | SHARES |
37
FIRST-TIME INVESTOR You can find the latest share prices on a variety of websites, ranging from Shares’ own website and that of your ISA or SIPP (self-invested personal pension) provider to those run by the London Stock Exchange and more specialist finance data sites such as Morningstar. FOCUSING ON WHAT YOU UNDERSTAND If you are new to investing it might make sense to concentrate on companies whose products and services you recognise, understand and can easily research. This might include a supermarket like Tesco (TSCO) or a property website such as Rightmove (RMV). You might, for example, see your local streets busy with Tesco delivery vehicles and make the educated guess that this local phenomenon was being replicated across the country. That doesn’t mean Tesco shares are automatically worth buying; you would need to do further research into areas like valuation (is the stock cheap or expensive?) which we will cover in future parts of this series, but it does illustrate how you can apply what you see with your own eyes to your investing. Alternatively, if you work in a specific industry you might feel qualified to invest in other participants in the same industry because you are familiar with how it works. THE DIFFERENT INDICES As well as sitting in different sectors, stocks also trade on a variety of exchanges and belong to various indices, the latter being specific baskets of shares 38
| SHARES | 02 July 2020
REDROW’S STOCK MARKET EXPERIENCE Registered by a 21-year-old Steve Morgan in the 1970s after taking over a failing civil engineering business with the help of a £5,000 loan from his dad, Redrow (RDW) got into housebuilding in 1982. Having navigated some ups and downs in the housing market, it joined the London Stock Exchange in 1994 raising around £60 million to invest for future growth. Having sold much of his stake and stepped away Morgan subsequently returned to lead the business in the wake of the financial crisis when the group also raised £150 million from shareholders to shore up its balance sheet. Morgan abandoned an attempt to take the group private in 2013 and eventually retired in 2018. As of June 2020, Redrow was worth £1.7 billion. relating to characteristics such as the size of a company. The London Stock Exchange encompasses the Main Market and AIM, the latter principally aimed at smaller growthfocused businesses and has looser regulation than the Main Market. There is also a rival market called NEX Exchange which is aimed at junior companies. The flagship index for the UK is the FTSE 100 which is made up of the largest firms listed on the London Stock Exchange. There is also the FTSE 250 which is the venue for medium-sized firms and the
FTSE All-Share which includes pretty much everything on the Main Market. Many FTSE 100 businesses have international horizons, but some areas like technology are under-represented on the UK stock market and this is one of the potential motivations for more experienced investors to broaden their search to other stock markets around the world and look at indices such as the S&P 500 or Nasdaq in the US. By Tom Sieber Deputy Editor
082020 JULY
Presentations: 18:00 BST
WEBINAR
Sponsored by
Join Shares in our next Spotlight Investor Evening webinar on Wednesday 8 July 2020 at 18:00 CLICK HERE TO REGISTER COMPANIES PRESENTING:
GB GROUP (GBG) Dave Wilson, Finance Director
ONCIMMUNE (ONC) Adam Hill, CEO.
The webinar can be accessed on any device by registering using the link above
Event details
Presentations to start at 18:00 BST
Becca Smith [email protected]
Register for free now
Which pension pot should I use first? AJ Bell pensions expert Tom Selby weighs up the options for a reader? 40
| SHARES | 02 July 2020
All these questions and more will help you decide whether taking an income through drawdown or buying an annuity, or a combination of the two, is right for you. It should also help guide any decision to access a defined benefit pension.
Do you want a secure income every month? Is the personal allowance enough to cover your spending needs? Do you have the time to manage your fund?.
ShareSoc investor events - Come and Join Us!
ShareSoc is a not-for-profit membership organisation, created by and for individual investors. Our aims are to help improve your investment experience and to represent your interests wherever this is needed.
Visit for more
Webinar 2 July 2020
Webinar 7 July 2020
Webinar 8 July 2020
PCF Bank (PCF)
Avation plc (AVAP)
HarbourVest (HVPE)
register now
register now
register now
All you need do is complete the registration form for your chosen event on our website. Go to for more about ShareSoc.
SIPPs | ISAs | Funds | Shares
The freedom fighter If your inner investor demands financial freedom, take control of your pension pot with our SIPP. Discover your inner investor youinvest.co.uk
The value of your investments can go down as well as up and you may get back less than you originally invested.
Drop in investment ISA demand but more women are saving than men New figures show how savings patterns are changing
T
Women have consistently been bigger users of ISAsterm.. 02 July 2020 | SHARES |
43
Junior ISA usage is up 5% year-on-year
JUNIOR ISAS BECOMING MORE POPULAR
LIFETIME ISAS ALSO IN DEMAND
44
| SHARES | 02 July 2020. By Laura Suter AJ Bell Personal Finance Analyst
YOUR QUESTIONS ANSWERED
How to tell if your ETF is physical or synthetic We help with a query on the differences between types of passive product Some ETFs have physical underlying holdings, which is safer for the investor, whereas some are derivative instruments where you take the credit risk of the underlying issuer (eg UBS, Citibank). Is there an easy way to differentiate between the two types; for instance, is it clearly stated on their factsheet or is there some way to tell from their identifying code? Kearia Yau Reporter Yoosof Farah replies
You are correct to say there are two types of ETF – physical ones and synthetic ones. Physical ETFs buy the underlying assets to physically replicate the index they are tracking. A commonly used method of replicating an index is full replication, where for example an ETF tracking the FTSE 100 would buy shares in all the companies in the index. But where this is a bit difficult, for example with an ETF tracking the MSCI World Index and its 1,000plus constituents, ETFs will a technique called ‘optimisation’, whereby they will obtain the desired exposure by matching things like sector and country weights, the dividend yield, etc,
without needing to buy all of the stocks in the index. Synthetic ETFs on the other hand use complex financial tools like derivatives to replicate the returns of the index, without actually buying the holdings at all. These are more commonly used for leveraged ETFs, more illiquid stock markets like some emerging markets, and ETFs linked to commodities, where investors wouldn’t actually want to take physical delivery of things like oil for example. There are advantages and disadvantages to both types. Synthetic ETFs tend to have a lower tracking error, meaning their returns stick closer to the index they’re tracking. Yet there is counterparty risk, namely the risk of the counterparty going bust and not being able to fulfil its commitments, which would wipe
out the ETF’s return. The best way to see if an ETF is physical or synthetic is to look at the ETF’s literature, namely the factsheet and key investor information document (KIID). On the factsheet, this information should be detailed in the fund’s fact box, where among other details like cost, benchmark and rebalancing frequency, it should also mention ‘product structure’, and this will say whether the ETF is physical or synthetic. More detail on this will be given in the KIID, on the first page under a heading like ‘Objectives and Investment Policy’. For example, in the KIID for iShares Core FTSE 100 (ISF), which is a physical ETF, it says: ‘The fund intends to replicate the index by holding the equity securities, which make up the index, in similar proportions to it.’
DO YOU HAVE ANY QUESTIONS ABOUT MARKETS AND INVESTING? Let us know if we can help explain how something works or any other question relating to markets and investing. We’ll do our best to answer your question in a future edition of Shares. Email [email protected] with ‘Reader question’ in the subject line. Please note, we only provide information and we do not provide financial advice. We cannot comment on individual stocks, bonds, investment trusts, ETFs or funds. If you’re unsure please consult a suitably qualified financial adviser.
02 July 2020 | SHARES |
45
WEBINAR
WATCH RECENT PRESENTATION
WEBINARS
Christian Hoyer Millar, Executive Director, Oxford Biodynamics (OBD) Oxford Biodynamics is a biotechnology company. It is focused on the discovery and development of novel epigenetic biomarkers for use within the pharmaceutical and biotechnology industry.
Rob Shepherd, Group Finance Director, President Energy (PPC) - President Energy is an oil and gas
exploration and production company. Its principal activity is oil and gas exploration, development and production and the sale of hydrocarbons and related activities.
Visit the Shares website for the latest company presentations, market commentary, fund manager interviews and explore our extensive video archive.
CLICK TO PLAY EACH VIDEO
SPOTLIGHT
INDEX KEY • Main Market • AIM • Investment Trust • Fund • ETF • Overseas share Allianz Technology Trust (ATT) Allianz US Equity (B4N1GS7) Blue Whale Growth Fund (BD6PG78) BP (BP.)
Liontrust Global Equity Fund (3067916)
23
Mastercard
24
On The Beach (OTB) PayPal
TUI (TUI)
7
Visa
24
3, 38
7 25
26
Trojan Global Equity (B0ZJ5S4)
23, 26 8
Redrow (RDW) Rightmove (RMV)
8, 23 15, 38 38
26
Flutter Entertainment (FLTR)
12
26
KEY ANNOUNCEMENTS OVER THE NEXT WEEK Full year results 3 July: Fuller, Smith & Turner. 7 July: Halfords, JD Sports Fashion. 8 July: Liontrust, U&I. 9 July: Ilika, Superdry. Half year results 7 July: Micro Focus, Ocado, RM. Trading statements 7 July: Electrocomponents, Reach. 9 July: PageGroup, Persimmon.
8
Experian (EXPN)
Games Workshop (GAW)
Tesco (TSCO)
15
8
Polar Capital Global Technology Fund (B42W4J8)
Direxion Work From Home ETF
Supermarket Income REIT (SUPR)
WHO WE ARE
16, 9 Robeco Global Consumer Trends (BZ1BV36) Royal Dutch Shell (RDSB)
23 8
Daniel Coatsworth @Dan_Coatsworth FUNDS AND INVESTMENT TRUSTS EDITOR:
James Crux @SharesMagJames
iShares Core FTSE 100 (ISF)
43
JPM Global Unconstrained Equity Fund (B235QT6)
23
Serco (SRP)
3
JPMorgan Japanese Investment Trust (JFJ)
8
13
SPDR MSCI World Technology UCITS ETF (WTEC)
Scottish Mortgage (SMT)
32
DEPUTY EDITOR:
NEWS EDITOR:
Tom Sieber @SharesMagTom
Steven Frazer @SharesMagSteve
EDITOR:
SENIOR REPORTERS:
REPORTER:
Yoosof Farah @YoosofShares
Martin Gamble @Chilligg Ian Conway @SharesMagIan
ADVERTISING Senior Sales Executive Nick Frankland 020 7378 4592 [email protected]
CONTRIBUTORS
Russ Mould Tom Selby Laura Suter.
02 July 2020 | SHARES |
47
|
https://issuu.com/shares-magazine/docs/aj_bell_youinvest_shares_020720_web?fr=sNzJiMzE2MDI4NzM
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
README
eivoreivor
🎭 Library for seamless transition animations of images.
View demo page or the sample app from the gif on the right.
What & HowWhat & How
- You give it two images
<img>tag or element with
background-image
Could be different scales, cropped,
- It calculates how A and B overlap
Handles physical image size, crop and different aspect ratio; sizing (
object-fit,
background-size:
cover,
containand percentages), positioning
background-position,
clip-path, etc...
- Creates intermediary element for animation
Or picks the suitable one of the A and B. (Either the higher quality one, or one that contains the other)
- Runs smooth animation
Transitioning from the position and location of the first image, to the second one.
- ???
- Profit
InstallationInstallation
npm install eivor
<script type="importmap"> {"imports": {"eivor": "./node_modules/eivor/src/ImageTransition.js"}} </script>
UsageUsage
class ImageTransition
source and
target can be
<img> tag or any html element with
background-image css property. Third and optional argument is an
options object.
import {ImageTransition} from 'eivor' let transition = new ImageTransition(source, target, {duration: 1000}) await transition.play()
Optionally you can await
transition.ready
Promise if you need to wait for images to load.
let transition = new ImageTransition(source, target) // Wait for source and target images to load. Calculating position delta hasn't yet begun. await transition.ready // Images are now loaded, here you can do something await transition.play() // animation is over, temporary files are removed from DOM, source and target have returned to their original positions, any additional CSS props are removed.
object options
options.easing:
string
options.duration:
number
options.fill:
string
options.mode:
'crop'|'recreate'|'clone'automatically determined. Can be overriden with caution.
Eivor figures out which image to animate (depending on crop, size and image quality) wheter a) source to target's position while the target is hidden or b) target from source's position while the source is hidden. If cropping one whithin the other is not possible, then a temporary node with the full image is created for the duration of the transition
crop: Crop, scale and translation are applied to the larger image. Only available if one image fits into the other.
recreate: The larger image is temporarily resized in order to display the whole image uncropped. Then the target image is animated like
crop.
clone: Like
recreate, but animation is applied to a clone of the target image while the original is hidden.
TODOs & IdeasTODOs & Ideas
The script is already mighty as is right now. But there are still some edge cases or nice to have things I'd like to implement.
- Figure out a way to write unit tests
testing animations is hard. There's an extensive example/debug file but it has to be tested manually.
- Add option to specify which image to animate.
Now you specify source and target nodes. Until recently the script always animated target node from source node's position and size. Now it can also animate source into target's position.
It'd be nice if user could specify which image to use for transition (either which node to manipulate if
*ContainedWithinTargetallows it, or which url to use for clone node)
- Detect which image is higher quality and use that for animation
Will work perfectly with 1:1 images. (maybe it should be default case in this scenario)
May collide with variously shaped and clipped source/target images, thus ignoring
sourceContainedWithinTarget/
targetContainedWithinSourceand forcing use of clone instead of manipulating the image that covers more imge space. (maybe it should be just an option defaulting to false)
- Ahead of time animation
Right now, both source and target images have to be in the DOM, and the script waits for both of them to load in order to calculate their position, size, clip, etc...
If user could provide that information to script, we could launch the animation while the target image is still loading (e.g. navigating to new view)
- Option to insert clone globally into body instead of source's/target's container
- Option to force clone instead of manipulating source or target nodes
- work with pixel values in background-position
- Investigate use of transform-origin (maybe as an API for user)
LicenceLicence
MIT, Mike Kovařík, Mutiny.cz
|
https://www.skypack.dev/view/eivor
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Check which ports are open to the outside world. Helps make sure that your firewall rules are working as intended.
One of a set of tools we are providing to everyone as a way of saying thank you for being a part of the community.
if (_config.Encrypted && fileName.Substring(fileName.LastIndexOf(".") + 1).ToLower() == "gpg")
fileName.Substring(fileName.LastIndexOf(".") + 1).Equals("gpg", StringComparison.CurrentCultureIgnoreCase) // And if you import the System.IO namespace you could do Path.GetExtension(filename).Equals("gpg", StringComparison.CurrentCultureIgnoreCase)
If you are experiencing a similar issue, please ask a related question
Join the community of 500,000 technology professionals and ask your questions.
|
https://www.experts-exchange.com/questions/24306483/Ignore-Case-in-c.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
SoVertexProperty.3iv man page
SoVertexProperty — vertex property node
Inherits from
SoBase > SoFieldContainer > SoNode > SoVertexProperty
Synopsis
#include <Inventor/nodes/SoVertexProperty.h>
enum Binding {
SoVertexProperty::OVERALL Whole object has same material/normal
SoVertexProperty::PER_PART One material/normal for each part of object
SoVertexProperty::PER_PART_INDEXED One material/normal for each part, indexed
SoVertexProperty::PER_FACE One material/normal for each face of object
SoVertexProperty::PER_FACE_INDEXED One material/normal for each face, indexed
SoVertexProperty::PER_VERTEX One material/normal for each vertex of object
SoVertexProperty::PER_VERTEX_INDEXED One material/normal for each vertex, indexed
}
Fields from class SoVertexProperty:
SoMFVec3f vertex
SoMFVec3f normal
SoMFUInt32 orderedRGBA
SoMFVec2f texCoord
SoSFEnum normalBinding
SoSFEnum materialBinding
Methods from class SoVertexProperty:
SoVertexProperty())
static SoType getClassTypeId() property node may be used to efficiently specify coordinates, normals, texture coordinates, colors, transparency values, material binding and normal binding for vertex-based shapes, i.e., shapes of class SoVertexShape. An SoVertexProperty node can be used as a child of a group node in a scene graph, in which case the properties it specifies are inherited by subsequent shape nodes in the graph. It can also be directly referenced as the VertexProperty SoSFField of a vertex-based shape, bypassing scene graph inheritance.
When directly referenced by a VertexProperty SoSFField and normal bindings, and can be used to specify the current 3D coordinates, the current normals,, users are cautioned that, for optimal performance, the vertex property node should be referenced as the VertexProperty field of an SoVertexShape, and should specify in its fields all values required to render that shape.
The various fields in a vertex property node can be used in place of corresponding fields in other property nodes, as follows: The vertex field contains 3D coordinates, as in the point field of an SoCoordinate3 node. The normal field contains normal vectors, as in the vector field of the SoNormal node. The orderedRGBA field contains packed colors in the hexadecimal format 0xrrggbbaa, where rr is the red value (between 00 and 0xFF hex) gg is the green value (between 00 and 0xFF hex) bb is the blue value (between 00 and 0xFF hex) aa is the alpha value (between 00 = transparent and 0xFF = opaque). The packed colors are equivalent to an SoPackedColor node, and provide values for both diffuse color and transparency. The texCoord field replaces the point field of the SoTextureCoordinate2 node.
If the transparency type is SoGLRenderAction::SCREEN_DOOR, only the first transparency value will be used. With other transparency types, multiple transparencies will be used.
The materialBinding field replaces the value field of the SoMaterialBinding node. The materialBinding field in a directly referenced SoVertexProperty node has no effect unless there is a nonempty orderedRGBA field, in which case the material binding specifies the assignment of diffuse colors and alpha values to the shape. The materialBinding field can take as value any of the material bindings supported by Inventor.
The normalBinding field replaces the value field of the SoNormalBinding node. The normalBinding field of a directly referenced SoVertexProperty node has no effect unless there is a nonempty normal field, in which case the normal binding specifies the assignment of normal vectors to the shape. The value of the normalBinding field can be any of the normal bindings supported by Inventor.
Fields
SoMFVec3f vertex
vertex coordinate(s).
SoMFVec3f normal
normal vector(s).
SoMFUInt32 orderedRGBA
packed color(s), including transparencies.
SoMFVec2f texCoord
texture coordinate(s).
SoSFEnum normalBinding
normal binding.
SoSFEnum materialBinding
material binding.
Methods
SoVertexProperty()
Creates an SoVertexProperty node with default settings.
Action Behavior
SoGLRenderAction, SoCallbackAction, SoPickAction
When traversed in a scene graph, sets coordinates, normals, texture coordinates, diffuse colors, transparency, normal binding and material binding in current traversal state. If not traversed, has no effect on current traversal state associated with action. The normalBinding field has no effect if there are no normals. The materialBinding has no effect if there are no packed colors.
SoGetBoundingBoxAction
When traversed in a scene graph, sets coordinates in current traversal state. If not traversed, has no effect on current traversal state associated with action.
File Format/Defaults
VertexProperty { vertex [ ] normal [ ] texCoord [ ] orderedRGBA [ ] materialBinding OVERALL normalBinding PER_VERTEX_INDEXED }
See Also
|
https://www.mankier.com/3/SoVertexProperty.3iv
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
On Sun, Oct 03, 2010 at 09:14:25PM +0200, Anton Khirnov wrote: [...] > @@ -457,7 +459,9 @@ typedef struct AVInputFormat { > */ > int (*read_seek2)(struct AVFormatContext *s, int stream_index, int64_t min_ts, int64_t ts, int64_t max_ts, int flags); > > - const AVMetadataConv *metadata_conv; > +#if FF_API_OLD_METADATA > + attribute_deprecated const AVMetadataConv *metadata_conv; > +#endif > > /* private fields */ > struct AVInputFormat *next; if you remove it from AVInputFormat then you should remove it from AVOutputFormat too otherwise the API will be really messy [...] --: <>
|
http://ffmpeg.org/pipermail/ffmpeg-devel/2010-October/099119.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Shapes could be defined by an inequality, for example simple shapes could be defined as follows:
Sphereboolean isInside(float x, float y, float z) {return (r*r < x*x+y*y+z*z);}
CubeBoolean isInside(float x, float y, float z) {return ((x<left) & (x>right) & (y<top) & (y>bottom) & (z<front) & (z>back));}
Other shapesOther shapes could be made by combining these simple shapes using Boolean operations, for example a Boolean 'or' will create a new shape with an outline covering both shapes. A Boolean 'and' will create a shape with an outline which is the intersection of both shapes. An 'not and' would allow a 'bite' to be taken out of an object in the shape of the other object.
Movement
You could offset or move the object with the following type of method: (see Using this to model physics)Boolean isInside(float x, float y, float z) {return shapeToBeMoved.isInside(x+xOffset, y+yOffset, z+zOffset);}
Uses of this method
Parameterisation
|
http://euclideanspace.com/threed/solidmodel/solidgeometry/equations/index.htm
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Hi all,When an initialization of a network namespace in setup_net() fails, wetry to undo everything by executing each of the exit callbacks of everynamespace in the network.The problem is, it might be possible that the net_generic array wasn'tinitialized before we fail and try to undo everything. At that point,some of the networks assume that since we're already calling the exitcallback, the net_generic structure is initialized and we hit the BUG()in net/netns/generic.h:45 .I'm not quite sure whether the right fix from the following threeoptions is, and would be happy to figure it out before fixing it: 1. Don't assume net_generic was initialized in the exit callback, whichis a bit problematic since we can't query that nicely anyway (asub-option here would be adding an API to query whether the net_genericstructure is initialized. 2. Remove the BUG(), switch it to a WARN() and let each subsystemhandle the case of NULL on it's own. While it sounds a bit wrong, it'sworth mentioning that that BUG() was initially added in an attempt tofix an issue in CAIF, which was fixed in a completely different wayafterwards, so it's not strictly necessary here. 3. Only call the exit callback for subsystems we have called the initcallback for.Thanks!-- Sasha.
|
http://lkml.org/lkml/2012/4/5/290
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Tamas TEVESZ wrote: > > On Sat, 3 Jul 1999, Robert Thomson wrote: > > /usr/bin/X11 -> /usr/X11R6/bin > /usr/lib/X11 -> /usr/X11R6/lib/X11 > /usr/include/X11 -> /usr/X11R6/include/X11 While the old Unix semantics for filesystem hierarchy are very useful for small console applications which do not typically contain much more than system utilities, I feel them strongly lacking for larger, more complex applications. Managing namespace is always difficult. The partitioning of namespace is already accepted throughout Debian. /usr/doc contains directories for specific package documents instead of throwing all documents in one place, /usr/lib contains several directories containing libraries for specific packages not meant to be linked by other applications, /usr/share is nothing but directories devoted to various applications. So the question is, if you are going to partition namespace, where do you want to do it and why do you want to do it? I can see 3 types of partitioned namespace in Unix; software package, environment, and file/data type. Now is creating the partition in /usr any less evil than creating it in /usr/bin, /usr/lib, /var, /etc, /usr/share, /usr/doc, etc? Is it better to scatter a software application distribution throughout a hierarchy instead of putting it into a single directory in /opt? And finally, has anyone given any serious thought to just dropping the Unix filesystem hierarchy standard and starting over? The reason I ask is because Berlin will have a whole lot of files and I mean, a whole lot. I'm not sure people really want us polluting the global namespace with them. Jordan -- Jordan Mendelson : Web Services, Inc. :
|
https://lists.debian.org/debian-devel/1999/07/msg00187.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
On 22 April 2010 01:12, Michael Niedermayer <michaelni at gmx.at> wrote: > the idea is good, your implementation is not > > it should be looking something like > > ff_set_console_color(int fd, int color){ > #if HAVE_ISATTY > ? ?if (isatty(fd)) > ? ? ? ?fprintf( fd, "\033[... > #endif > } That doesn't sound very easy to use on Windows because you need a console handle and then you need to store the the original colours so you can restore them. Also, you can't fprintf into int fd But that can be corrected by passing FILE *fd and then using isatty(fileno(fd))
|
http://ffmpeg.org/pipermail/ffmpeg-devel/2010-April/081295.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
{- | Tutorialinal' exampleG [winter,sprinkler,wet,road] [rain] @ Now, if you have observed that the grass is wet and want to take into account thios observation to compute the posterior probability of rain (after observation): @ 'posteriorMarginal' exampleG [winter,sprinkler,wet,road] [rain] [wet '=:' True] @ If you want to combine several observations: @ 'posteriorMarginal' example' example' j' j' print' cancer -- mapM_ (\x -> putStrLn (show x) >> (print . 'posterior' jtcancer $ x)) [varA,varB,varC,varD,varE] -- print \"UPDATED EVIDENCE\" let jtcancer' = 'updateEvidence' [varD '=:' Present] jtcancer mapM_ (\x -> putStrLn (show x) >> (print . 'posterior' jtcancer' $ x)) [varA,varB,varC,varD,varE] @ -} module Bayes.Examples.Tutorial( -- * Tests with the standard network inferencesOnStandardNetwork #ifndef LOCAL -- * Tests with the cancer network , inferencesOnCancerNetwork #endif #ifdef LOCAL , miscDiabete #endif , Coma(..) , miscTest ) where import Bayes.Factor import Bayes import Bayes.VariableElimination #ifndef LOCAL import Bayes.Examples(example, exampleJunction,exampleImport,exampleDiabete, exampleAsia, examplePoker, exampleFarm,examplePerso,anyExample) #else import Bayes.Examples(example, exampleJunction,exampleDiabete, exampleAsia, examplePoker, exampleFarm,examplePerso,anyExample) #endif import Bayes.FactorElimination import Data.Function(on) import qualified Data.Map as Map import Data.Maybe(fromJust,mapMaybe) import System.Exit(exitSuccess) import qualified Data.List as L((\\)) #ifdef LOCAL miscDiabete = do (varmap,perso) <- exampleDiabete let jtperso = createJunctionTree nodeComparisonForTriangulation perso cho0 = fromJust . Map.lookup "cho_0" $ varmap print $ posterior jtperso cho0 #endif miscTest s = do (varmap,perso) <- anyExample s let names = Map.keys varmap l = mapMaybe (flip Map.lookup varmap) names jtperso = createJunctionTree nodeComparisonForTriangulation perso print perso print jtperso print "FACTOR ELIMINATION" let post (v,name) = do putStrLn name print $ posterior jtperso v mapM_ post (zip l names) print "VARIABLE ELIMINATION" let prior (v,name) = do putStrLn name print $ priorMarginal perso (l L.\\ [v]) [v] mapM_ prior (zip l names) -- | Type defined to set the evidence on the Coma variable -- from the cancer network. data Coma = Present | Absent deriving(Eq,Enum,Bounded) #ifndef LOCAL -- | Inferences with the cancer network inferencesOnCancerNetwork = do print "CANCER NETWORK" (varmap,cancer) <- exampleImport print cancer let [varA,varB,varC,varD,varE] = fromJust $ mapM (flip Map.lookup varmap) ["A","B","C","D","E"] let jtcancer = createJunctionTree nodeComparisonForTriangulation cancer mapM_ (\x -> putStrLn (show x) >> (print . posterior jtcancer $ x)) [varA,varB,varC,varD,varE] print "UPDATED EVIDENCE : Coma present" let jtcancer' = changeEvidence [varD =: Present] jtcancer mapM_ (\x -> putStrLn (show x) >> (print . posterior jtcancer' $ x)) [varA,varB,varC,varD,varE] print "UPDATED EVIDENCE : Coma absent" let jtcancer' = changeEvidence [varD =: Absent] jtcancer mapM_ (\x -> putStrLn (show x) >> (print . posterior jtcancer' $ x)) [varA,varB,varC,varD,varE] #endif -- | Inferences with the standard network inferencesOnStandardNetwork = do let ([winter,sprinkler,rain,wet,road],exampleG) = example print exampleG putStrLn "" print "VARIABLE ELIMINATION" putStrLn "" print "Prior Marginal : probability of rain" let m = priorMarginal exampleG [winter,sprinkler,wet,road] [rain] print m putStrLn "" print "Posterior Marginal : probability of rain if grass wet" let m = posteriorMarginal exampleG [winter,sprinkler,wet,road] [rain] [wet =: True] print m putStrLn "" print "Posterior Marginal : probability of rain if grass wet and sprinkler used" let m = posteriorMarginal exampleG [winter,sprinkler,wet,road] [rain] [wet =: True, sprinkler =: True] print m putStrLn "" let jt = createJunctionTree nodeComparisonForTriangulation exampleG print jt displayTreeValues jt putStrLn "" print "FACTOR ELIMINATION" putStrLn "" print "Prior Marginal : probability of rain" let m = posterior jt rain print m putStrLn "" let jt' = changeEvidence [wet =: True] jt print "Posterior Marginal : probability of rain if grass wet" let m = posterior jt' rain print m putStrLn "" let jt'' = changeEvidence [] jt' print "Prior Marginal : probability of rain" let m = posterior jt rain print m putStrLn "" let jt3 = changeEvidence [wet =: True, sprinkler =: True] jt' print "Posterior Marginal : probability of rain if grass wet and sprinkler used" let m = posterior jt3 rain print m putStrLn "" return ()
|
http://hackage.haskell.org/package/hbayes-0.2.1/docs/src/Bayes-Examples-Tutorial.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
I'm at the point - after several years of programming with Perl as a freelancer - where I would like to move to the "next level".
Sure I know Perl well, but I feel problems come from other parts.
For instance, I would like to start using source control. Subversion seems nice...
But I'm not sure how to store modules (from CPAN) that I use. Some of them I will change for my needs, most will not be changed. Do I store them as downloaded from CPAN (just uncompressed) and then (re)compile them on each deployment?
What if target machine doesn't have tools to compile it? I admit, this would happen rearely as I ussualy use just pure perl modules ... Maybe also store compiled versions?
Then comes deployment.
How do I organize that? When using source control I guess it's easier - you just do an export, copy files to the target and maybe compile modules - right?
At present I don't use automated tests - just a refresh (web development) and see if it works. Would like to start using tests - there are many nice tutorials about that ... so I'm lookit at the other two.
Is there anything else that Pro's use?
EDIT: What I haven't mentioned - is that I curently only develop for web. And there is ussualy Perl already installed on the server - so it's just modules that might be missing. ...
You have obviously not used SVN's merging tools. While very nice when they work, they often don't work, requiring a lot of hand-merging. Frankly, it's much better just to use the CPAN shell to install the version you want to install. Most of the time, installing the latest and greatest will be good enough.
I was thinking along the lines of having an older machine with OS that client has (FreeBSD or Linux in 99.99% cases) to compile the code myself ...
Assuming you're running the same Perl version. 5.6 and 5.8 are NOT XS-compatible. This means that something compiled against 5.6 will NOT run with 5.8 and vice-versa. 5.10 is rumored to be XS-incompatible with 5.8. The solution you're looking for is a list of prerequisites that you (or the client) install on the client machine. Period. Anything else is asking for more trouble than you want to deal with.
If I really want to modify the behaviour of a CPAN module and it's not reasonable to subclass (or I couldn't be bothered ;) I usually inject my own subs/methods into the namespace from my own code. That way you don't need to carry around modified CPAN modules. And with any luck, minor upgrades to the cpan module won't break your addins (which are hopefully based on documented API anyway).
I simply put modules that are not "standard" in the "local" libs/ folder that I add to @INC (just in application - not for whole server) - so it's only used by my application.
But the best benefits (IMO) are the longer-term benefits. You'll be working on dozens of projects, and when you come back to one after not touching it for a year, you'll be amazed at what you don't remember about it. (In fact, I'm occasionally surprised when I'm asked to make changes to something--and I find that *I* wrote it years ago. I totally forgot everything about it.)
In cases like this, you'll find that it's *much* easier to change the system. You go in, read the code, make the changes, and the testing system will remind/alert you to assumptions you made this time that are different than last time. So the better your test coverage, the easier it is to make a patch "that just works".
Another thing I like: You know how sometimes a line of code or a module looks like it could be simplified or cleaned up? So you do it, and you find that it fails in some way. So you put it back the way it was. Then another person comes in and makes the same observation. (S)he comes in, does the same cleanup/simplification, and has to fix it.
Putting in a test makes it easy for you to prevent this and other types of bug regressions. Before you make a patch, make a test that forces the bug to occur, then fix it. Then if someone changes the code to re-introduce the bug, they'll be instantly alerted to it.
My final reason for enjoying test suites: Sometimes you want to make a change to a subsystem. You make it and the test suite shows you dozens of breakages. This allows you to find dependencies that you can correct, so you can further simplify your code. Some of those breakages will be unavoidable, so it will help document some requirements for your subsystem. Also, for fun, you can try to predict the breakage. This can help you keep an eye out for unnecessary interdependencies between subsystems.
There are many other reasons to make testing a primary concern, but I can't think of 'em at the moment. But you'll find them soon enough once you integrate testing into your development regime!
--roboticus"
Finding the right deployment strategy takes some practice. I'll tell you some basics of how we do it at, for a fairly large Perl application.
First, we keep the code in Subversion. In addition to some obvious benefits of avoiding overwriting each other's work and keeping long-term undo, it's the only way to deal with needing to keep a released version working with occasional patches while you build a new version. This is done with branches.
Next, you need to make a release somehow. We have a script that build release packages. They aren't much more than a .tar.gz file, but some things that are only relevant for development are left out, and care is taken to avoid over-writing config files.
We move that to our server, where we unpack it and run an automated build script that builds all the included CPAN modules. Then we run upgrade scripts which make any required database changes between releases.
We include the CPAN modules in their normal compressed state and automate the whole unpack, Makefile.PL/Build.PL, install business. We install them into a local directory, not the site_lib on the machine, and point our apps at that directory. This means that we always know exactly which version of a module we're using and never get burned by a change in a module's API on an upgrade.
There have been cases where we needed to modify a CPAN module. In that situation, we move it to the source tree, at least until a later version that no longer needs our changes is released. (And yes, we share our changes with the authors in these cases.)
Since we're dealing with dedicated hardware, not a cheap shared host, we control the environment as much as possible. We build our own perl, so we can count on knowing the version and the compile options and not be surprised by a special Red Hat patch or a missing Scalar::Util feature. We also install a specific version of the database, so we know what to expect from it too.
If you're building something to run on shared hosts, or your project is small, then this may be overkill for you. It's been really helpful for us though. You can see most of this in action in the Krang CMS project, which shares many of the same build and deploy approaches.
Yes cheap shared hosting is what makes problems to me most of the time. As I'm still a small fish and don't want to cash ~ $100+/month for dedicated server I took one of those VPS for my needs. I also sugest that most of the customers should take at least VPS (they start at $20/month) if for nothing else than mod_perl/FastCGI posibility.
The thing is that applications being developed are also to be sold to 3rd parties of which most will have shared hosting. And under those conditions only pure Perl modules will work ...
Anyway - in case you want to start using a new module, you first import it in the repository, then do a new export/chekout (script that builds releases handles it). And then you can use it, right?
Are there any articles/books/whatever on this subject? But something that can be applied to Perl?
When adding a new module to the repository, we just check the downloaded CPAN package into a directory and then run the build script. It finds all CPAN packages in that directory and builds them.
Most of the articles about software deployment and configuration management tend to be about Java, since many LAMP developers do this kind of thing in sloppy ways without much thought. It's only when you get into larger apps that you realize the necessity of thinking through infrastructure like this. However, you may find some useful things in the archives on the site.
currently, we have all the commonly used modules (such as CGI::Application and other in-house modules) in a central perl lib directory and all apps use it from there. problem may arise if a module gets upgraded with API changes and such. but i think testing may help us stay away from trouble.
to your reply, I am wondering why not unpack all needed modules locally once for your app in development and save to cvs. when deploy to the production, just cvs out. be done with it? (the only reason i can think of not doing it is unless you need to deploy to different OS). rebuilding same modules on dev,QA,production sound tedious.
or, you can keep all unpacked modules in a central location, copy to new app when needed without rebuilding it for every release.
thoughts?..
|
http://www.perlmonks.org/index.pl?node_id=552226
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
but if you're not running in docker then I have no idea 😛
we initially did it because one of our sites is quite high traffic and running a production snapshot during the day would start causing errors for users so we decided to start taking regular snapshots at night so we always had access to a fairly recent snapshot if we needed to debug a production issue
yeah I think they do but you'd have to go via the service desk to get them which can take time
import + sql + export
on a daily basis we delete any existing snapshots in dash, create a fresh snapshot, download it, extract the database and remove any personal information (user data), repackage it up as a new sspak file, and make it available as an artefact in our TeamCity instance
oh yeah I use my one for that too
definitely glad I saw this conversation
looks like
Report::$excluded_reports might let you hide them?
|
https://slackarchive.silverstripe.org/slack-archive/user/guttmann
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
BloomReach Cloud Release Notes
v7.17.0 (04 December 2019)
- A brCloud user can mark environment as failed to enable redeploy
- Upgraded niginx-ingress to latest version
- Huge internal improvements w.r.t lucene export, CI/CD, migration to support niginx upgrade and more
v7.16.0 (06 November 2019)
- Bugs and Improvements
- Improvement Blue/green production switch work for high traffic sites
- Efficient handling of memory during large database backup upload
- Set New Relic security to high to enable SSL and update the agent to the latest version
- Elastic search optional
v7.15.0 (11 October 2019)
- Improvements
- Dockerfile cleanups (incl. running as non-root)
- RDS alerts to detect peak storage thresholds
- Add "request url" and "source-ip" info to access logs in Humio
- Make JVM options generally configurable
- Reduce the number of steps to configure an environment-specific hst.properties file
- Refactor platform-worker to leverage lib/brck8s services
- Improve init platform worker to remove false alerts
- Bug fixes
- Fix data race in nightly builds
- Authentication token set grows too large
- Session draining cannot deal with newly added environments
- Session draining is not deleting the old pod
- Fix TestProtectSiteAfterDeploy
- APP_CONFIG_PATH resolving incorrectly
- Watch deployment should be aware of session-drained deployments
- Check the DB connection before borrowing from pool
- Minor code cleanups / comments
- lucene index problems
- Rewrite integration tests to unit tests
- Fix kube-state-metrics that is in a restart loop
- Task
- Automation for support team accounts in AWS
- Automate cover static IP with Ansible + Migrate stacks using this feature to managed state
- Tag and release v7.15.1 to engineering
v7.14.3 (17 October 2019)
- Updated the New Relic agent and enabled high security mode
v7.14.2 (09 October 2019)
- Bug fix: Status not updated for a session drained environment (Watch deployment)
v7.14.1 (26 September 2019)
- Lucene index bug fix
- Increase the backup retries for lucene index
- Memory queue configuration for filebeat
v7.14.0 (5 September 2019)
- Bug fixes and Improvements
- Session draining should not use login multiple times
- Platform-worker aws sync command gives unknown options
- Move patch changes to integration and make them configurable
- Parameterize separate efs for redis
- Revisit Liveness Check implemented in BRC-3939
- Adjust cpu/memory limits for platform-worker
- Operations
- Create v7.14.0 tag and roll-out internally
- US-EAST-1E Nodes unavailable while provisioning for US customer stack
- Improve ansible for the latest upgrades (kubernetes)
- Create backups of distributions for prod stacks manually
- Logs & monitoring
- We need an alert and humio log entry if redis backup fails
v7.13.0 (14 August 2019)
- Automation / CI / CD (Internal)
- Upgrade CI CMS projects because of bug in Hippo Repository in 12.3
- System tests for jsessionid cookie from tomcat
- Deleting stack is failing with ansible - idempotency issue
- Fix npm audit findings for mission control
- Fix performance test job
- Automate stack deletion via ansible
- Investigate deployment times in CI cycle
- Fix nightly system tests
- Extract kubernetesservice and cmscontainerdeployment logic
- As BRC dev, I want pods cms-foo-bar to be removed
- Improvements and Bug fixes
- Censure tomcat version information
- Do not print morphed password of RDS to logs or env vars
- Avoid CMS pods pending in Terminating state
- Reduce ELB timeout to be less then the backend server idle timeout
- Fix the error log generated while activating New Relic, although functional
- Set limit for Redis to rewrite AOF
- Make the brc and support team members admins in humio dashboard after humio fixed the api bug
- API docs mention 'deployed' state instead of active
- GetState returns internal error if there was no session draining for an environment before
- Preserve customized appconfigfile names during restore operation
- Session draining cannot get status of draining from platform worker (Check for all stacks)
- Upgrade Tomcat to 9.x latest - customer request
- Implement liveness check for cms pod
- Cover logs containers with init
- Switch redis to a separate EFS share
- Improve toast notification duration for operations that take long time
- Bloomreach with little 'r' in Mission Control & reset mail
- Improve the liveliness check for the CMS pods
- Operations
- Enable Cloudflare on bloomreach.com stack
- Add ssl wildcard certificate to Customer stack
- verify migration steps for security group
- Recreate Documentation/Hippo stack
- Scope & Design for Cloudflare
- Create 7.13.0 tag and deploy to Engineering
- Apply nginx tweak to all stacks
- Configure bundle cache size, max memory and tomcat max thread count
- Give a newly created stack provisioned EFS credits
- Environments stuck in backup / deployment stage (customer stack)
- Logs & Monitoring
- Define prometheus/grafana alert for nginx pod restarting
- create alarm and improve grafana dashboard for CI stack
- Automate adding/removing pingdom checks for mission control
v7.12.1 (25 June 2019)
- Operations
- Script to associate static IP's with k8s cluster
v7.12.0 (20 June 2019)
- Mission Control (UI)
- MC - Long names are not shown correctly in firefox browser
- Environment name disappears when a toast message appears
- Login form feedback labels when browser pre-fill saved username/password
- Improve the UI of domain cards
- Core
- Disable some audit logs
- Increase max age HSTS setting
- Fix Elasticsearch health test pod
- Enable deletion of heapdumps created automatically
- ES fix after upgrading to ES 6
- Bump proxy_*_size values in nginx
- Fix for nginx-ingress-controller bug #4039 for contenty-type header
- NGINX backend should copy custom error pages when it is restarted
- Fix kubectl swagger openapi issue
- Move environment protection encryption key to secrets
- IDs are not always unique
- Session draining does not delete the older pods
- Design CloudFlare tenant API
- Enable backups bucket encryption
- Upgrade go code to use v1.11.0 kubernetes API
- Implement efs encryption at rest
v7.11.1 (24 May 2019)
- Bug
- ES pod affinity after upgrade to ES 6
v7.11.0 (17 May 2019)
- New feature
- CDN support - investigate
- Support ElasticSearch 6.x
- Improvements
- Security groups segregation for AWS resources
- Show improved link to humio from environment
- Bug
- Return an error for requests to domains with no matching certificate
v7.10.2 (03 May 2019)
- Maintenance and Bug
- Cannot create a manual backup for an environment in MC
- Consolidate brc.environmentname and hippo.environment
- Environment field is not shown in Humio anymore
- Some pagerduty alerts should just be slack messages in #brc-notify-prod
- Measure and publish test coverage results (ut/it/st)
- Replace user/admin/root literals with constants
- As a user, I want the domain / user card to be wider so I can read the fields
- 3rd CMS pod is stuck in pending after marking environment as production
- Assess and improve inline help content
- Show correct error in UI when environment exceeds 30 chars
- Improvement
- POC integration with CloudFlare API while creating domains & sub-domains
v7.10.1 (16 Apr 2019)
- Maintenance and Bug
- Change Hippo in Login screen to Bloomreach
- fix Kubelet error logs
- Remove helm base64 encoding from helm templates
- protect environment api gives 500 internal error
- delete secret when protection is disabled
- Improvements
- Adjust K8s Resource handling per best practices
- Fix bug in UpdateProtectionForDomain
- change log level on the fly
- Improvement
- make st interface improvements
- Show md5 hash for distributions
V7.10.0 (03 Apr 2019)
- Maintenance and Bug
- Upgrade kubectl version
- Fix that some of the production stack alerts go to internal channel
- Only admin should be able to create and download dumps for prod
- More restriction on File worker privileges
- Enable the secure flag on the jsessionid cookie from tomcat to prevent transmission over plain http
- Prevent local disk space running out during restore operation
- Upload limit for images/assets not 10mb everywhere
- Fixed custom domain site template for cookie path
- Switch to go 1.12
V7.9.0 (21 Mar 2019)
- New Feature
- PageView reporting feature (ready to use)
- Improvements
- Use OpenJDK instead of Oracle JDK and switch to Tomcat 9.x
- Added redis dashboard to grafana (monitoring)
- Maintenance
- Maintenance w.r.t Image pull policy
- Go resty to latest version
V7.8.1 (27 Feb 2019)
- Improvements and bug fixes
- Remove redirects from https to http in the CMS
- Alerts for unexpected certification expiration
- Forbid http access to the CMS
- Mock disaster recovery across region using platform backups
- Deleting a domain from BRC can sometimes fail
- New Feature (PageViews - part 1)
- Aggregate access logs in single store as part of Pageviews feature for a single stack
- Store access logs in s3 as part of Pageviews feature
- Automate creating s3 bucket and account needed for access logs
- UI Bugs
- Restrict long names for environment
- When entering an invalid domain, the edit panel is closed
V7.8.0 (7 Feb 2019)
- Improvements
- Fix AZ handling in the main cloudformation template
- Multi Site Support on Bloomreach Cloud
- Support v13 log files
- Remove the waiting to start second cms container
- Fix the security recommendation from AWS Security Advisor
- UI Improvements and bug fixes
- UI for multi-site support (part of domain configuration) MSDTS
- Environment tool tip shows incorrect information
- As a user, I expect the checkbox 'switch domains' to be checked by default
- Distribution upload broken after creating fresh stack
- Maintenance & bug fixes
- Fixed Prometheus and kubestate metrics serviceaccount permission issue
- Make it easy to filter for an environment in Humio
- Improved the current logstash parser by removing freemarker error detection
- Long lasting request against CMS logs out users
- Set queue limit for logstash and upgrade to latest version 6.5.2
- Journal cleanup after platform backup fails due to security restrictions
- Restore does not restore the lucene index file before deploy
- Eliminate $rootScope usage
- Work around the aws PoliciesPerGroup limit of 10
- Replacing distribution files for CI CD scenario
- Performance tests vol. 2
- Increase the bundle cache size for Versioning Persistence Manager
- Implement AWS instance logging
- Improve unit tests heapdump
- repackage - release - deploy redis for each commit
- Deploy latest redis configuration to all stacks
- Send kubernetes logs to humio - if data wise feasible
- Make k8s logs enable/disable parameter for filebeat
- Reload prometheus config without killing the pod
- Run performance tests against one "frozen" release of brc
- Update MC & ansible for humio migration
- Improvements to CI CD
- Run tests with supported sanitizers
- Make sure tests on integration cannot run with FIt(..)
- Limit the number of distributions
- switch_production_test does not check result of the switch operation
- Clean up distributions after running system tests
- Modify gometalinter reports for new report parser
v7.7.1 (14 Dec 2018)
- Core
- Allocation of enough CPU for all pods
- Improve system tests for heap/thread dumps feature
- Return {} for 2xx
- UI Bugs
- Align the tab emphasise line
- Improve usability while deleting the environment
v7.7.0 (6 Dec 2018)
- Core
- API support to create heap-dumps and thread-dumps
- Include config files and properties files during redeploy
- Improve timeout for long operation requests from CMS
- Use same SES for all stacks
- Alerts & monitoring
- Dashboards for pods, deployments, nodes, namespaces
- Alerts for unhealthy pods & expiring certificates
- Security
- Stacks accessible is restricted to office network and VPN
- Restrict unauthorised kubernetes access between pods or nodes
- Automation
- Include gsuite user creation as part of stack creation
- Mission Control
- Correct the toast message when file upload is abandoned.
- MC session expiration
- Improved Layout for environment details & "production environment set-up"
- Update environment label correctly after deployment
- Remove colour from buttons when clicked
- Better confirmation message for restoring backup
- Disable deploy option during backup restore
- Show hypen when config file or system property is not there for an environment
v7.6.0 (16 Nov 2018)
- Graceful upgrades
- App config UI support
- Custom error page support
- Stack level robots.txt support
- Converted Nginx ingress controller as daemonset
- Disable delete button for prod environment
- File Upload limited to 10mb
- Google business account non gmail customers to login to humio
- Enterprise forms email support
- CI improvements
- Chart museum adaption
- Login credentials for engineers on CI stack
- Bugs
- allow subdomain with single letter "m.mywebsite.com"
- Switch production when not active
- Nigthly backup should use sync instead of copy
v7.5.1-a (23 Oct 2018)
- Blocker fix for Session draining failure due to app config feature.
v7.5.1 (28 Sep 2018)
- API to manage application configuration files
- RBAC mechanism for services (workers) to get kubernetes access
- Support for EBS sticky zone
- API improvements and bug fixes
- UI improvements and bug-fixes
v7.5.0 (09 Aug 2018)
- Encryption of RDS & S3 encrypted
- UI to mark an environment as "production"
- UI bug fixes and maintenance tasks with kubernetes 1.9.9 upgrade
v7.4.1 (11 July 2018)
- Improved UI for (re)deploy distribution
- Minor improvements to look & behaviour in UI (mission control)
- Mandatory check for the presence of hippo enterprise repository and lucene-export jars in distribution file
- Minor improvements to the backend services w.r.t health check, error messages & redis index entries
v7.4.0 (21 June 2018)
- EBS now part of nightly platform backup
- Upgraded kubernetes to v1.8.13, tomcat to 8.5, java to 1.8.172 and nginx to 0.15.0
- Improvements to Elasticsearch & cms container that result in start-up problems
- Improvements and bug fixes to UI and Backend
v7.3.0 (25 May 2018)
- Mark an environment as production that results in more CMS Containers for that production environment and switching all the domains to that production environment in one go.
- Lot of UI improvements:
- Relevant error messages for the services that are unavailable
- Availability of help panel by default
- Labels & tab alignments and consistent text formats in environment section
- Disable an inactive user to request password reset
v7.2.0 (30 April 2018)
- Implement rolling update and stop/start redeploy
- Improve UI look for backups in MC
- Navigate easily to the environment from domain configuration
- Move 'CMS setup' info from DETAILS tab to Help text for the page (in Help panel)
- Add clear button for site and cms protection
- Improve text "you..together" for CMS section in the header
- Show message when renaming distribution fails because of not unique name
- Make environment worker API well formed
- Deprecated the old API for "deploy end-point", API documentation is update accordingly
- Make file worker API well formed
- Domain entry is not closed when opened via environment >> details tab
- Fix inconsistent & overlapping of text and UI buttons
- Enable downloading of logs during deployment
- The "redeploy" option affects all the environments
v7.1.0 (11 April 2018)
- Display domains for an environment in the environment section
- Show readable message when user click on the 'here' link in the (re)set password email after it has been expired
- Fix bug where uploading new distribution file with existing name removes old one and changes the distributionId
v7.0.0 (28 March 2018)
- Enable to protect a environment from external access
- Align backup-worker with new error handling design
- Show the name of the domain in confirmation delete dialog
- Enable clicking on the header panel of a domain to expand it
- Replace label password with passcode in MC for the protect env settings
- Show error message when deleting an environment with a domain attached
- Replace usage "/distribution/{distributionId}" with "/{distributionId}" in MC
- When updating a domain name that is not unique, the panel should not close
- Remove slash '/' at the end for list like api endpoinds
- Show error message for a domain name that has no certificate installed
- Support domains like
v6.0.0, v6.1.0 (19 February 2018, 14 March 2018)
- Improve error message when the domain name is not unique
- When (re)deploying, the domain configuration should be applied (again)
- Show no-certificate found error when entering an invalid domain name
- Privilege only admin to change domains
- Create new distribution file that has site hst configuration
- Apply domain configuration tag and configure domain on existing stacks
- Do not show confirmation when updating a domain without environment
- Enable downloading of log files from CMS instances
- Prepare Docker image for ansible
- Fixed bug, if the Deployment, Backups or Settings tab of an environment is open, the selection of that environment in the left side panel is lost
- Logstash filter doesn't handle environment with dash in name properly
- Fixed bug where default TOMCATLOG pattern in Logstash cannot parse logs produced by CMS pods
- Improved internal error when invalid distribution file is uploaded
- Avoid scheduling same workers and cms-containers on the same node
- Show the year of dates in MC
- Resolve Channel Manager does not work for a CMS domain
v5.0.0 (23 January 2018)
- Help opens when logz.io link is clicked
- Retrieve the version information of the stack
- Show version information in MC
- Create new monitoring stack using prometheus 2.0 using BRC sub-account
- Change and install monitoring charts to support prometheus 2.0 on non-prod stacks
- Disable access-log in ingress controller
- Do not show the error in the logs if a project does not bundle the lucene export service
v4.0.0 (15 December 2017)
- Upgrade BRC to use Kubernetes v1.8.5
- As ODT, we need to upgrade client-go dependency to v5.0.0 when migrating to latest K8s
- Update k8s manifest files with changes APIs in K8S v1.8
- Replace init-container annotation with .spec.initContainers
- Tag 4.0 based on K8s 1.8.4
- Fix crashing of logstash that causes docker engine restarted
v3.9.0, v3.9.1 (04, 08 December 2017)
- Ansible script to include redis
- Disable vet duplication warning for tests
- Use ISO8601 date format in file and backup worker
- Clear all deployments, services, ingress for new deployment
- Improve error message 'cannot find user xyz' when logging in with invalid username xyz
- Access-Control-Allow-Origin header does not exist in the APIs return 4xx
- Resolve Merge conflict in ONDEMAND-1913 that causes unit-test failure
- ODUI - Login controller unit tests fails
- Made active to be true when creating a new user
- Resolved failing file worker tests on integration branch
- Monitor Elasticsearch service in stacks
- Disable Lucene export endpoint authentication
v3.8.0, v3.8.1 (22 Nov 2017, 04 January 2018)
- Implemented lazy loading of Deployment tab & backup page when backup lists is 100+
- Aligned the font in help text for Deployment
- Shows clear error when deleting a distribution file that is in use
- Backup restore toast and loading bar are synced
- Logo and old title (onDemand) in reset password login dialog are shown
- Always show Mission Control's environment tab to be always visible
- MC UI: Use environmentId as identifier instead of environment name
- As a MC user, I expect active to be true when creating a new user
- Fixed bug when Backups menu dropdown was not working
- Disable collapse button for environments that do not have a backup
- Do not show the system creator if it is a system backup
- Fixed bug when the date time stamp is different (30 mins difference) in same screen
- Fixed bug where Help opens when logz.io link is clicked
- Improve backups page ordered by collapsable environments
- Show Logz.io url in Mission Control
- Show see more details about the deployment
- Support 'dynamic' icon in the environment overview page for environments that are deploying
- Improve feedback when calls take a long time
- Replace Change password page by a dialog
- Dates not visible in Firefox and Safari
- Support secure feedback in MC for reset password in case user does not exist
- Implement distinctive inline help panel in Mission Control
- Disable controls in Mission Control when deploy is in progress
- MC UI: Always show the plus button for backups
- MC UI: Leverage screen width for the user panel
- Prevent exceptions in the logs when deleting an environment
- Fix backup-worker which failed to backup ES indices of environments with their IDs in capital cases
- Fix Password-reset function does not work on v3.6.1
- Fix failed unit tests in the integration branch
- Include bloomreach-cloud-secrets in ansible script
- Perform session draining on stack level
- Align file-worker with new error handling design
- Turn off RMI in Tomcat's config
- Support execution two concurrent snapshot backups at the same time
v3.7.0, v3.7.1, v3.7.2 (09, 13 & 24 November 2017)
- Support ES index with upper cases
- Reset password URL fixes
- For backups use local storage instead of EFS
- Rate limit the requests to the workers
- Update the JDK version used in the CMS container
- Use latest Tomcat 8.0.x version
- Refactor code - rename apiResponse.Error to api.ResponseError
- Prevent sql injection is not possible when creating or deleting database
- Set explicit idle timeout to AWS ELB
v3.6.2 (16 November 2017)
- Create tag v3.6.1 & v3.7.1 with ONDEMAND-2572 fix
- Create tag v3.6.2 with cherry-pick ONDEMAND-2587 and ONDEMAND-2589
v3.6.0 (06 November 2017)
- As an API user, I expect the User API to return better error messages
- As ODT, file worker throws exception when get distribution with invalid id
- As a user, I expect Mission Control to give me feedback about environment name validity
- As an admin, I expect adding an inactive user makes the user inactive
- As user, I expect the backup operation should fail if the the ES backup fails
- Fix system test TestDeleteNonExistingEnv and TestDeployInValidDistribution
- As ODT, upload distributions from tests take too long or fail
- ODUI - In the login screen change label Username to Email address
- Incorrect initialize glog
- As ODT, make backup restore, backwards compatible with older backups that does not have elasticsearch
- As BRC team, clean all backup items as part of backup retention schema
- Elasticsearch concurrent snapshot exception - blocks v3.6
- Missing NODE_NAME variable in ES setup
- Creating new active user fails in Mission Control
- As ODT, I want the automated database backups kept according to a retention schema
- As a customer, I want my database journal table to be kept clean
- As ODT OPS, add one worker node to BR stack
- As ODT we want to receive an alert if Elasticsearch service fails in stacks
- As ODT, improve file worker integrations tests
- As ODT, turn on file worker system tests using AWS secret
- As ODT, improve user worker error handling
- Run code analysis nightly on integration only (and email the results)
- Fix TestDeleteValidDistribution and TestDeleteDistributionInUse
- Fix system tets TestCreateBackupWithInvalidType and TestGetBackupState
- Upgrade build tool to use go from v1.8.4 or v1.9.1
- As a user, I expect always a 200 response when sending reset password request
- Improve error codes response in change password API
- As ODT, we need a nice way of handling internal errors in system tests
- Improve Makefile to detect errors when running system tests without parameters
- As ODT, remove panics() from test code
- As a user, I expect the reset password to contain a warning about the lifetime of the reset link
- Minor improvements in Jenkinsfiles and Dockerfiles
v3.5.3 (16 October 2017)
- As ODT, make backup restore, backwards compatible with older backups that does not have elasticsearch
- As ODT, run backups sequential, tag 3.5.3 and deploy to production
v3.5.1 (09 October 2017)
- Incorrect permission on distribution files
- Uploading same file does not replaces old distribution file in distributions api\Environment delete does not clean up ingresses
- Environment-worker must not update env state when it had been deleted
- Max packet size is set to 1GB
- cms-container system test does not run
- Fix broken build in integration branch due to backup-worker-test failure
- As a user, I cannot rename a distribution file from Mission Control
- Fix system tests
- Delete ES index when environment is deleted
- Fix system tests failure in the Integration branch (v3.4.1-128-g4653851f)
- Enable glog in file-worker
- As ODT, we should backup/restore Elasticsearch
- Verify that the Elasticsearch restore is incremental in blue/green scenario
- As ODT, remove copyright lines from go code and single copyright file
- As ODT and a user, i want database restore to be faster
- Fix Elasticsearch deployment
- As ODT, extend cms container to support SSO Dice
- Tune alert rule on system load
- Upgrade helm to v2.6.1
- Replace blue/green to one/two
v3.4.2 (25 September 2017)
- Replace blue/green to one/two
- As a user I should not be able to delete a distribution file that is in use
- Use uid-generator rather than 'math/rand'
- As a user, I experience progress bar on the wrong distribution file when deploying
- As a user, when creating a new environment not all pods are running
- Backup-worker exposed credentials to logz.io
- _visitor cookie path incorrect
- As a user, I expect the Environment API to return 400 for duplicate environment
- As a user, I expect the Environment API to return correct error for deleting non-exist env.
- As ODT, admin should not be able to create and update root user
- As ODT, max.environment should work correctly
- A ODT, create env. response should return env. state correct
- As ODT, platform worker configurations are not correct
- Backup worker cannot connect to redis
- As BRCT, we want monitoring & alerting for redirection server
- Upgrade script / path for existing stacks as consequence of ONDEMAND-2185
- Remove workaround for login issue onehippo website
- Add sticky-cookie/target-rewriter fix to hippo/ingress github project and create bugfix request to the mainstream
- Integration branch produced too many static analysis warning
- As BRCT, update document on installing Helm charts
- Create a DNS AWS api account
- As OPT, we want to use AWS subaccounts for stacks
- Merge QA repo into bloomreach-cloud repo
- GO static analysis for BRC build jobs
- Change memory threshold alert to 90%
- As ODT, we need to bump to alpine 3.6
v3.4.0 (21 August 2017)
- AWS credentials in platform worker is not stored in correct format
- Incorrect document on how to install bloomreach-cloud-monitoring
- EPIC Rolling update CMS containers with zero downtime using session draining
- As Hippo I want a plan to implement inline help in Hippo UI's
- As ODT we need a solution for upgrading cms containers
- As ODT, we want an integration test for session draining
- Migrate "hippo pagerduty" to "bloomreach pageduty"
v3.3.2 (05 September 2017)
- As ODT OPS, we need a maintenance version 3.3.2 and deploy it to prod stacks
- As ODT, I want to see stack-name in web-hook alerts
- Ingress controller does not update when pods change
- Unable to login with Users added via UI
- As BRCT, we need godep for reliable builds
- Tag and deploy 3.2.5 & 3.3.0 to engineering stack
- Deploy User-worker and MC v3.3.0 to PoC stack
- Remove hardcode of host in mission control ingress
- As ODT, Using Helm chart for OD2 deployment
- As ODT, finalize Helm chart for automatic deployment for monitoring & logs
- As a user I want to be able to add a user account in the onDemand UI
v3.2.3 (01 August 2017)
- Password reset mail - Image and text alignment issues (single repo)
- As a user, I expect to be logged out of MC after 30 minutes of inactivity
- Platform backup cron-jobs are delayed after platform-backup-worker runs very long time
- As a user I expect an easier to recognize browser tab for MC
- As a user, I expect the Backup API to return 404 for not-existing job
- As a user, I should not be able to create an environment with the name api or missioncontrol (moved to single repo)
- As ODT, we need to delete the lucene index that is a left over of after stack snapshot (single repo)
- Cannot build environment-worker docker image
- Build Failure: Environments working build failing
- Build Failure: Mission Control build failing
- Enable build failure notification in Jenkins build script
- Remove 'yarn' from build steps
- ODUI, No email is sent when creating the user though UI shows it is sending the mail
- ODUI - Adding new user is not UI friendly
- As an admin user, I should not be able to create or update a root user
- As an admin I can delete my self as user in Mission Control
- As an admin user, when retrieving users I should only get regular users or admins (not root users)
- As a user, I want to administer user accounts in onDemand UI
- As ODT, we need an endpoint to update existing user (single repo)
- As ODT, I should get a pagerduty alert when the platform sends an alert
- As ODT, I want to use a single Redis instance with AOF
- As ODT, we need to backup route 53 as part of the stack snapshot
- Rename platform-backup-worker to platform-worker (single repo)
- Deploy latest 3.2.2 tags to production and 3.2.3 to engineering stack
- As a user, I should not be able to specify root as a role in Mission Control
- As ODT, we want to move to a single gitlab repository
- As OD UI User, I want to see a helpful message when environment is in "unknown state"
- As a user, I want to update users in Mission Control
- ODUI - The help section overlaps buttons and hides features
v3.2.3 (18 July 2017)
- Improve reliability of Elastic Search Deployment
- Alerts for restoration or manaul backup failures
- Alerts when an item in platform backup fails
- Improve alerts and logging
- Improve / Debug storage shortage in monitoring stacks
- Improvements on UI:
- Handle max environments error Message
- Confirmation dialog when creating a backup
- Disable environment links when environment is unavailable
- Switch to environment overview after changing password
- Show time using local timezone
- Browser Refresh should work in UI
v3.2.2 (20 June 2017)
- Add New Relic config key to CMS Deployment template
- Fix the backup end-points that return 404
- Align OD UI to use v3 API
- Upgrade router to use latest nginx ingress controller
- Implement API for refresh token mechanism
- Fix the date field (safari shows null)
- Possible to use enter key for reset button
v3.2.0 (06 June 2017)
- Make kubernetes services to use cluster-port than node-port
- Improve feedback when deploy completes
- Improve error handling during backups
- Simplify external authentication URL in ingress rules
- Include user in the platform logs
- Configure and backup the settings for grafana in monitoring stack
- Improve logging (rename container to service, filebeat issue with k8s 1.6 )
- Update the mysql connector jar to latest
v3.1.0 (23 May 2017)
- Stack snapshot should always success with results (number of success & failures)
- Resolve when BRC stack failed to recover from EC2 outage
- Deploy new redis sentinel in engineering, hippo and POC stacks
- Alert when stack snapshot fails
- Framework for logging solution
- Alerts for monitoring & logging events
- Improve CMS Container logging
- Implement Prometheus-grafana monitoring solution for all BRC stacks
- Upgrade production databases for BPM
- Alerting feature when EC2 terminates
- Upgrade stacks for the nginx log warnings
v3.0.1 (09 May 2017)
- User environment-id in cms container config
- Improve nginx logging
- Make cms container version configurable
- Support hosting of bloomreah.com on BRC stack
- Enforce password rules
- Support new database as part of BPM feature in CMS
- Support log4j2 for CMS v12
- Update angular version
v3.0.0 (25 Apr 2017)
- Recover redis sentinel during outage
- Support both v2 & v3 API
- Resolve the compatibility issues w.r.t supporting v2 & v3 API
- Include redis & database backups as part of Stack snapshot
- Support lucene index export
- Clean up datadog to monitor BRC BRC stacks
- Replace log statements with glog wrapper
v2.2.0 (05 Mar 2017)
- Documentation "Building a website" trail on BRC
- Improvements to CMS logs feature
- Improvements to backup-restore operations
- Improvements to password management, sharing secrets, parameterised etc
- Improvements to UI
- Introduced Authentication service
- Enable restoration in empty environment
|
https://documentation.bloomreach.com/13/bloomreach-cloud/brc-release-notes.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Users Guide for SnapDevelop
Last Updated
Most of the content, except for a few places (except for a few settings and instuctions) in this document has been updated against SnapDevelop 2019 R2.
Introduction
The SnapDevelop Integrated Development Environment (IDE) contains a wide variety of features that make it easy for you to edit, test, debug, compile, build, and publish your applications.
Personalizing the SnapDevelop IDE
You can personalize your SnapDevelop IDE by selecting the Options tab from the Tools menu so that you can develop in ways that best suit your style and needs. The Options tab allows you to make settings in relation to the development environment, projects and solutions, source control, text editor and NuGet package manager. This article presents a brief introduction to the diverse personalizations that can be made for the SnapDevelop IDE.
Environment Settings
Several personalization options are offered via the Environment options dialog box. For example, this dialog box allows you to set the color theme and specify how the documents are displayed.
Color Themes
You can choose any of the two color themes: light and dark. The default color theme for SnapDevelop's user interface is Light. If you want to change the theme, you can take the following steps.
On the menu bar, select Tools > Options.
Select Environment > General, change the Theme to Dark, and then select OK.
The color theme for the whole SnapDevelop IDE changes to Dark.
Document Display
The Options dialog box allows you to control the display of files in the SnapDevelop IDE and manage external changes to files. You can specify the ways for the display of files and perform file management operations by selecting Tools > Options and then Environment > Documents.
Detect when file is changed outside the environment
If this option is enabled, a message immediately pops up, indicating that changes have been made to an open file by an editor, other than the SnapDevelop editor, outside the IDE. This message allows you to reload the file from storage.
Reload modified files unless there are unsaved changes
If you have selected Detect when file is changed outside the environment and an open file in the IDE is changed outside the IDE, a warning message automatically pops up. If the current option is enabled, no warning message pops up and the file is reloaded in the IDE to adopt the changes made externally.
Open file using directory of currently active document
If this option is enabled, the Open File dialog box displays the directory of the active document. Otherwise, it displays the directory most recently used to open a file.
Show Miscellaneous files in Solution Explorer
If this option is enabled, the Miscellaneous Files node appears in Solution Explorer. Miscellaneous files are files you want to work with independently from the containers. They are external to a solution or project but can appear in Solution Explorer for the purpose of convenience.
Items saved in the Miscellaneous files project
This option specifies the number of files that can appear in the Miscellaneous Files folder of Solution Explorer. These files are listed even if they are not open anymore in a code editor. You can specify any whole number between 0, which is the default number, and 256.
For example, you set the number at 2 and open 4 miscellaneous files. When you close all 4 files, the first 2 will still be displayed in the Miscellaneous Files folder.
Projects and Solutions Settings
You can avoid accidentally moving your files by setting a prompt prior to the movement. To access this option, select Tools > Options, expand Projects and Solutions, and then select General.
Prompt before moving files to a new location
If this option is enabled, a confirmation message box pops up before you attempt to relocate the files in Solution Explorer.
Source Control Settings
Source code control system serves to track changes to the source code and other text files during the development of a program. Developers can use this system to retrieve any of the previous versions of the original source code and the changes which are stored.
Plug-in Selection.
Text Editor Settings
The Text Editor option allows you to change global settings for the SnapDevelop code editor and offers ways to customize the behavior and appearance of the text editor.
General Settings and Display
Under the General option, you can specify the various settings for the text editor and determine how the editor is displayed. To access this option, select Options on the Tools menu, expand the Text Editor folder, and then select General.
Settings
Drag and drop text editing
If this option is enabled, you are able to move code to anywhere in the current file by selecting it and dragging it with the mouse.
Automatic delimiter highlighting
If this option is enabled, the delimiters between parameters, methods, or matching braces are automatically highlighted in grey.
Track changes
When the code editor is selected, a vertical yellow line appears in the left margin of the changed text to indicate that the text has changed since the file was last saved.
Enable single-click URL navigation
If this option is enabled, the mouse cursor changes to a pointing hand when it passes over a URL in the editor and Ctrl is pressed. You can then click the URL to go to the indicated page in your web browser.
Hide cursor while typing
Your cursor by default remains on screen where you leave it. Annoyance often occurs while you are typing. You don’t want the cursor visible above the text you are typing, but you want the cursor to remain in the text area, in case you need to use it. Therefore, this option offers you the ability to make the cursor invisible on screen while you are typing, until you stop typing and move your mouse.
Display
Show space character
Tab, space, linefeed, vertical-tab, carriage-return, formfeed, and newline characters are called "white-space characters" since they play the same role as the spaces between words and lines on a printed page. If this option is enabled, the text editor shows the space characters, making a program more readily readable.
Highlight current line
If this option is enabled, a grey box appears around the line of code where the cursor is located.
Line numbers
If this option is enabled, a line number appears next to each line of code.
Column rulers
The column ruler serves to determine line lengths. It is particularly useful when your editor has line length restrictions. The column ruler defaults to the 120th character. If you find the file width undesirable, you can modify the width setting.
C# General Settings
Under the C# option, you can manage the ways the code editor behaves when you are writing C# code.
Statement Completion
Statement Completion is a source code editor feature that promotes rapid development by offering an alphabetical list of possible names, keywords, and code snippets that might be entered as you write code into the editor.
Synchronization delay time of syntax service
The synchronization delay time of syntax service refers to the time interval from when you are writing code until when you stop writing. The synchronization delay time defaults to 150 ms. If necessary, you can adjust it within the range from 50 ms to 1,000 ms.
Auto filter the list of members on typing
If this option is enabled, pop-up lists of available values, members, properties, or methods are displayed by IntelliSense as you write code in the text editor. Select any item from the pop-up list to insert the item into your code.
Settings
Enable virtual space
If this option is enabled and the Word wrap option is automatically disabled, you can click anywhere in the code editor and type anything you want.
Word warp
If this option is enabled, the part of a long line of code that extends horizontally beyond the view of the code editor window is automatically displayed on the next line.
Inherit word warp indentation
If you select the Word warp option only, the code to be wrapped appears at the very beginning on the new line. If you select the Word wrap option and then enable this current option and specify the indentation value, which represents the characters and can be positive or negative, the code to be wrapped is indented by the characters you specify and the indentation is increased or decreased on the basis of the indentation level of the previous line.
NuGet Package Manager Settings
NuGet Package Manager controls a set of tools that can help automate the process of downloading, installing, upgrading, configuring, and removing packages from a SnapDevelop project. To access this option, select Options on the Tools menu, and then select NuGet Package Manager.
General Package Settings
The General tab of the NuGet Package Manager option enables you to determine whether to allow NuGet to download missing packages. It also allows you to determine whether to automatically check for missing packages during build in SnapDevelop.
Package Restore
Package Restore installs all of a project's dependencies in order to reduce the size of repositories and promote a cleaner development environment. SnapDevelop can restore packages automatically when it builds a project, and you can restore packages at any time through SnapDevelop. Package Restore guarantees that all the dependencies of a project are available even if they are not stored in source control.
Allow NuGet to download missing packages
If this option is enabled, NuGet first tries to retrieve, from the cache, packages that haven’t already been installed. If the packages are not present in the cache, NuGet then attempts to download the packages from all enabled sources.
Automatically check for missing packages during build in SnapDevelop
This option controls automatic restoration of missing packages. If this option is enabled, missing packages are automatically restored by running a build from SnapDevelop.
Package Management
Default package management format
SnapDevelop provides two package management formats - Packages.config and PackageReference. .NET Core and .NET Standard projects that SnapDevelop currently offers are by default managed via the PackageReference format. The .NET Framework projects that SnapDevelop can open but does not offer right now are by default managed via the Package.config format. The Package.config file is displayed in Solution Explorer while the PackageReference file is included in the .csproj file in File Explorer.
Clear All NuGet Cache(s)
You can choose to clear all NuGet caches on your computer so as to prevent yourself from using the old or obsolete packages and help your apps run better on your machine.
Package Sources
This option allows you to teach NuGet where to find packages to download. You can have multiple package sources. The flexibility that SnapDevelop and NuGet offer makes it easy to create a personal repository.
There are four buttons that represent the possible actions you can take. You can add a repository path (+), delete the selected repository (x), or move it up or down. There is also a list of available package sources. You can uncheck any package source that you want to disable temporarily without having to remove the package from the list.
Available package sources
The available package sources are custom package sources. In SnapDevelop, you can add multiple package sources to this list in order to get data for NuGet Package Manager.
Machine-wide package sources
The machine-wide package sources list all the package sources, except for the available package sources, on the current machine that might be useful for collecting data for NuGet Package Manager.
Developing with SnapDevelop
This section describes how to use the code editor and other tools in SnapDevelop to write, manage, and improve your code.
Solutions and Projects in SnapDevelop
This section primarily describes the two most important elements in SnapDevelop: project and solution, and additionally demonstrates how to create a new project and the Solution Explorer tool window.
Projects
A project contains all the source code files, data files, etc. that are compiled into a library or executable, and also includes compiler settings and other configuration files that might be used by a variety of services or components that your program works with. A project is defined in an XML file with an extension .csproj.
SnapDevelop allows you to create and develop a variety of projects, which are:
- ASP.NET Core Web API
- Console App (.NET Core)
- Class Library (.NET Core)
- Class Library (.NET Standard)
- Shared Project
- xUnit Test (.NET Core)
Solutions
A solution is a container for one project, or two or more projects that are related in some way. It contains build information, SnapDevelop window settings, and any miscellaneous files that are not directly related to a particular project. A solution is described by a text file with the .sln extension that organizes projects, projects items and solution items in the solution.
Creating a New Project
To create a new project, select New > Project from the File menu.
The New Project dialog box pops up.
From this dialog box, you can select from among the various available project templates that are grouped under the C# category (A project template contains a basic set of pre-generated code files, .config files, and settings). Name the project and choose where you want the project to be created. Click OK to generate the project. The project will then be opened by SnapDevelop so that you can start developing.
Notice that if you select the ASP.NET Core Web API template, four Sample Code options are available, which are Basic, .NET DataStore, ModelStore, and SqlModelMapper. The Basic option contains no data access code while the other three options contain their corresponding data access code. For more information about the sample code files, please refer to the Readme text that appears by default when you create a project with .NET DataStore, ModelStore, or SqlModelMapper used for data access.
Solution Explorer
After you create a new project, you can use Solution Explorer to view and manage the project and solution and their related items.
A variety of menu commands, such as building a project, managing/restoring NuGet packages, adding a reference, and renaming a file, are offered by right-clicking on various items in Solution Explorer. The toolbar at the top of Solution Explorer enables you to collapse all nodes, show all files, and set the various properties of the project.
In Solution Explorer, you can right-click on the project node and then choose Add to add any of the following project items:
Source Editor
The SnapDevelop editor offers a considerable number of features that make it easy for you to write and manage your code. You can find and replace text in single or multiple files, in current project, or in the entire solution. You can collapse and expand the various blocks of code by using the Outlining feature. You can find code by using such features as Go To Definition, and Find All References.
Basic Editing Features
This section describes the basic features of the SnapDevelop code editor, which can help you edit your code quite efficiently and easily.
Error and Warning Marking
When you write code, you may see red wavy underlines (known as squiggles) or light bulbs in your code. Red wavy underlines indicate syntax errors and green underlines indicate warnings. If your mouse hovers over an error, potential fixes for the error are suggested right blow.
Code Auto Completion
Code Auto Completion is a feature of the source code editor that promotes rapid development by offering an alphabetical list of possible names, keywords, and code snippets that might be entered as you write code into the editor.
Line Numbering
Line numbers are not displayed by default. If you enable the Line numbers option in the Tools > Options > Text Editor > General settings, line numbers are displayed in the left margin of the code editor.
Change Tracking
If you enable the Track changes option in the Tools > Options > Text Editor > General settings, changes you have made since the file was opened but not saved are indicated by yellow vertical lines in the left margin of the code editor.
Code and Text Selecting
You can select code either in box mode or in the standard continuous stream mode. To select code in box mode, press Alt as you drag the mouse over the selection (or press Alt+Shift+<arrow key>). The selection includes all characters within the rectangle defined by the first character and the last character in the selection. Anything typed or pasted into the selected area is inserted at the same point on each line.
Virtual Space
When you click somewhere beyond the end of a line, SnapDevelop puts the cursor at the end of the corresponding line. If you enable virtual space in the editor in the Tools > Options > Text Editor > C# settings, you can click anywhere and type anything you want in the code editor. Note that you can enable either Word Wrap or Virtual Space, but not both at the same time.
Finding and Replacing
If you want to search or replace anything in a document, project or solution, you can select the finding and replacing feature that is available on the Edit > Find and Replace menu on the toolbar.
Undo and Redo
You can run the undo or redo command to actions in the current SnapDevelop session by selecting Edit > Undo or Edit > Redo.
Outlining
You can use the Outlining feature to display the outline of your code the way you want. To use this feature, select Edit > Outlining. We use the following text code as an example to demonstrate how you can perform the various outlining operations.
The Outlining feature allows you to:
Toggle Outlining Expansion, which reverses the current collapsed or expanded state of the innermost outlining section when you put the cursor in a nested collapsed section;
Toggle All Outlining, which sets all blocks of code to the same state, expanded or collapsed;
Stop Outlining, which removes all outlining information throughout the file;
Stop Hiding Current, which removes the outlining information for the currently selected blocks of code; If the cursor lies in a nested collapsed section, you can use this feature to expand the nested collapsed section.
Collapse to Definitions, which collapses the members of all types;
Start Automatic Outlining, which automatically sets all blocks of code to the expanded state. Note that this feature can be used only after you have selected the Stop Outlining feature.
Refactoring
This feature allows you to reconstruct your existing code without changing its external behavior. It offers the following three options:
Rename, which allows you to rename identifiers for code symbols, such as namespaces, local variables, types, methods and properties. You can use this feature when you want to safely rename something without having to find all instances, and copy/paste the new name. To use this feature, place your insertion point at an identifier and then select Edit > Refactor > Rename (or right-click and then select Rename).
Extract Method, which allows you to turn a block of code into its own method. You can use this feature when you have a fragment of existing code in some method that needs to be called from another method. To use this feature, you:
Highlight the code to be extracted.
Select Edit > Refactor > Extract Method.
- Extract Interface, which allows you to create an interface using existing members from an interface, struct, or class. You can use this feature when you have members in a class, struct, or interface that could be inherited by other classes, structs, or interfaces. To use this feature, you:
Put your cursor in the class name.
Select Edit > Refactor > Extract Interface.
Enter the necessary information in the popup Extract Interface dialog box.
Select OK.
Reference Locating
You can use the Find All References command or press Shift+F12 to locate where a particular code element is referenced throughout your code base. To use this feature, you:
Place the insertion point at a particular code element.
Right-click the element and then select Find All References, or simply press Shift+F12.
Go to Definition
This feature navigates to the source of a type or member, and opens the result in a new tab. To perform this function, you:
Put the insertion point at a particular element.
Right-click the element and select Go To Definition, or simply press F12.
Advanced Editing Features
A variety of advanced editing features are available on the Edit > Advanced menu on the toolbar.
Format Document
Sets the proper indentation of lines of code and moves curly braces to separate lines in the document.
Format Selection
Sets the proper indentation of lines of code and moves curly braces to separate lines in the selection.
Make Uppercase
Changes all selected characters to uppercase.
Make Lowercase
Changes all selected characters to lowercase.
Adds comment characters to the selected line(s), or to the current line if no characters are selected.
Uncomment Selection
Removes comment characters from the selected line(s), or from the current line.
Increase Line Indent
Adds spaces to the selected lines or the current line.
Decrease Line Indent
Removes spaces from the selected lines or the current line.
Managing NuGet Packages
The NuGet Package Manager in SnapDevelop allows you to install, uninstall, and update NuGet packages for your projects and solutions. It has three tabs, each tab displaying a list of related packages on the left side of the manager and details (e.g., version information, general description, author, license, installation location, and links to other relevant information) about the selected package on the right side.
Browse, which displays packages to install. If a package is already installed, the Install button on the right side changes to Uninstall.
Installed, which displays all installed and loaded packages. A green dot next to a package indicates that the package is loaded into the R session. The red X icon to the right of a package or the Uninstall button on the right side of the manager can be used to uninstall the package. A blue up arrow to the right of a package can be used to update the package if a newer version of the installed package is available.
Update, which displays packages that have available updates from the currently selected package source.
There is a search box on the upper left side of the manager that you can use to filter the list of packages.
Managing Packages for Single Projects
Finding and Installing a Package
In Solution Explorer, select either a project or References and then select Manage NuGet Packages from the right-click context menu.
Select the Browse tab to display packages from the currently selected package source. Search for the desired package using the search box on the upper left side of the package manager and select the package to view its detailed information.
Note
To include prerelease versions in the search, and to make prerelease versions available in the Version drop-down, enable the Include prerelease option.
Select the desired version from the Version drop-down and select Install. SnapDevelop installs the package and its dependencies into the project. You may be asked to accept license terms. When installation is complete, the installed packages appear under the Installed tab, and in the References node of Solution Explorer, indicating that you can use the using statements to refer to them in the project.
Uninstalling a Package
In Solution Explorer, select either the desired project or References and then select Manage NuGet Packages from the right-click context menu.
Select the Installed tab to display the installed packages.
Select the package to uninstall and select Uninstall.
Updating a Package
In Solution Explorer, select either the desired project or References, and then select Manage NuGet Packages from the right-click context menu.
Select the Update tab to display packages that have available updates from the selected package source. Select Include prerelease to include prerelease packages in the update list.
Select the package to update, select the desired version from the Version drop-down list, and select Update.
To update multiple packages to their newest versions, select them individually in the list or enable the Select all packages option and select the Update button above the list.
Managing Packages for Solution
If you want to manage NuGet packages for multiple projects simultaneously, it is recommended that you manage packages for the entire solution. Compared with the package manager for single projects, the package manager for solution has a Consolidate tab, in addition to the three tabs.
To manage packages for solution, you:
Select the solution and Manage NuGet Packages for Solution from the right-click context menu. Alternatively, select Tools > NuGet Package Manager > Manage NuGet Packages for Solution.
When you are managing packages for the entire solution, you are required to select the affected project(s).
Consolidating Package Versions
It is uncommon to use different versions of the same NuGet package across different projects in the same solution. When you manage NuGet packages for solution, you will see a Consolidate tab, which allows you to easily see where packages with distinct version numbers are used by different projects in the solution.
To consolidate package versions, you:
Select the project(s) to update in the project list.
Select the version to use in all those projects from the Version dropdown.
Select Install.
The Package Manager installs the selected package version into all selected projects. Then, the package disappears from the Consolidate tab.
Managing Package Sources
To change the source from which SnapDevelop obtains packages, select the desired one from the Package source dropdown list:
To manage package sources, you:
Select Tools > Options from the menu bar and scroll to NuGet Package Manager. Alternatively, click the Settings icon in the Package Manager.
Select the Package Sources node.
To add a source, select the plus icon at the upper right side of the page, specify the name and the URL of the package source, and select Update. The added source will appear in the list of Available package sources.
To change a package source, select the desired package source, reedit its name and URL, and select Update.
To disable a package source, clear the box to the left of the package name in the list.
To remove a package source, select it and then select the trash icon.
The up and down arrows do not change the order of the package sources. SnapDevelop uses the package that is first to respond to requests, regardless of the order of the package sources.
Working with the Package Manager Options
When you select a package, you will see an expandable Options control below the version selector. For some project types (for example, .NET Core projects), only the Show preview window option is provided.
Showing Preview Window
If you enable the Show preview window option, a window shows the dependencies of a selected package before the package is installed.
Working with the Install and Update Options
Dependency behavior specifies how NuGet selects the version of dependent packages to install.
File conflict action specifies how NuGet deals with packages existing in the project or local machine.
Working with the Uninstall Options
Source Control
SnapDevelop currently supports two types of source control systems:
- SnapDevelop Git;
- SnapDevelop SVN (To use SnapDevelop SVN, make sure that you have installed TortoiseSVN 1.9 or a later version).
This tutorial describes how you can perform source control using SnapDevelop Git or SnapDevelop SVN in SnapDevelop. In this tutorial, you will learn how to:
- Configure source control plug-in;
- Work with Git source control using SnapDevelop Git, including:
- Create a Git repository;
- Connect to a Git repository;
- Commit changes to a Git repository;
- Sync code changes to a Git repository;
- Share code among the team;
- View Git commit history;
- Work with Subversion source control using SnapDevelop SVN, including:
- Import code into a Subversion repository;
- Commit changes to a Subversion repository;
- Update a local working copy;
- Merge different revisions;
- Resolve conflicts;
- Lock/release a file.
Configuring Source Control Plug-in.
Working with Git Source Control Using SnapDevelop Git
Git source control allows you to fully copy a remote repository housing the source code to your own computer. You can then commit the changes on your own computer and perform source control operations without network connection. When you need to switch contexts, you can create a private local branch. You can quickly switch from one branch to another to pivot among different variations of your codebase. Then, you can merge, or publish the branch.
Creating a Git Repository
Manage a SnapDevelop solution in Git by creating a repo for them. Later you can connect this Git repo to a remote Git repo to share your work with others.
Creating a Local Git Repository
From an Existing Solution
To create a repository from an existing solution not in source control, select Add to Source Control in the bottom right corner of the SnapDevelop IDE. This creates a new Git repository in the same directory as your solution and opens up the Push view in Team Explorer, which allows you to push your code to a remote Git repository.
In an Empty Folder
Open the Connections view by selecting the Manage Connections icon in Team Explorer. Under Local Git Repositories, select New and enter a folder where the repository will be created. This directory must be empty. Then, select Create to create the repository.
Creating a Remote Git Repository
You may need to create a remote Git repository to which your local repository can connect so that you can share your code with other developers. For the purpose of demonstration, we signed up for the Bonobo Git Server and created a new Git repository.
Cloning an Existing Git Repository
In the Connections page in Team Explorer, you can copy an existing Git repository to your local computer. Before you can copy the Git repository, you are required to provide its URL, which represents the source of the repository you want to clone. If you want to clone a particular Git repository on the Bonobo Git Server, for example, you simply select the repository and then copy the Git URL.
Connecting to a Remote Git Repository
To connect a local repository to a remote Git repository to share your code, go to the Settings page in Team Explorer. Under Remotes, select Add. Enter ‘origin’ in the Name field and enter the clone URL for your repository in the Fetch field. Make sure that Push matches fetch is checked and select Save.
Specifying Global Settings
Providing User Name and Email
You need to specify the user name and Email address so that your team members can know exactly who committed changes to the branches and, if necessary, contact the user who committed the changes.
Adding Remotes
You need to add a remote Git repository so that you can specify the name of the remote repository (usually ‘origin’) and the URL of the repository that you can use for fetch and push purposes. The push and fetch URLs are the same by default. If you want them to be different, you can uncheck the Push matches fetch box.
Working with Git Tags
Creating a Tag
In SnapDevelop, you can create lightweight tags, which are a pointer to specific commit. To create a tag, you need to:
In Team Explorer, select Tags and then Create Tag.
To select a branch to create a tag from, designate a tag name in the **Enter a tag name **box (no space allowed), optionally supply a tag message, uncheck the Create tag against tip of current branch box, select a branch from the Select a branch dropdown list and select Create Tag.
To create a tag against the tip of the current branch, designate a tag name in the **Enter a tag name **box (no space allowed), optionally supply a tag message, check the Create tag against tip of current branch box, and select Create Tag.
Right-click the new tag and select Push to push it to the remote repository. Select Push All to push all new local tags to the remote repository.
Deleting a Tag
To delete a local tag, you right-click the tag to delete and select Delete Locally.
Creating a Branch from a Tag
Take the following steps to create a branch from a tag:
Select Create Branch From Tag; Alternatively, right-click a tag and select New Local Branch From.
Specify a branch name, select a tag from the local tag list, optionally check the Checkout branch box (if you want to check out the newly created branch) and then select Create Branch.
Select Branches from the Home view to view your newly created branch.
Right-click the new tag and select Push Branch to push the branch to the remote repository.
Viewing Tags
Viewing Tags in the Tags View
You can select Tags from the Home view to view all tags in a local repository. All tags are listed under the currently connected repository. If you want to learn more about the tagged commit, you can right-click the tag and then select View Commit Details.
Viewing Tags in the History View
You can also view tags by following the steps below:
Navigate to the Branches view.
Right-click the desired branch.
Select View History from the right-click context menu.
Select the Show Tags icon in the upper left corner of the History view that appears.
Saving Changes with Commits
When you have finished editing the files in Solution Explorer, you can go to the Changes page in Team Explorer to commit your changes to your repository. Git does not automatically snapshot your code as you edit the files in your local repository. You need to tell Git exactly what changes you want to add to the next snapshot before you can create a commit to save the snapshot to your repository.
You are offered three commit options:
Commit All, which means that you can commit all changes to the local branch.
Commit and Push, which means that you can commit all changes to the local branch and then push the changes to the remote repository.
Commit and Sync, which means that you can commit all changes to the local branch and then synchronize synchronizes the commits on the local and remote branch.
A commit contains the following information:
A brief description of what changes you have made in the commit.
A snapshot of the changed files saved in the commit.
A reference to the parent commit(s).
Working with Git Branches
Git branches are simply a reference that records the history of commits. A Git branch allows you to isolate changes for a feature or a bug fix from the master branch, which makes it very easy to change what you are working on by simply changing your current branch. You can create multiple branches in a repository and work on a branch without affecting the other branches, and you can share branches with your team members without merging the changes into the master branch.
It is easy to switch between branches in the same repository because the branches are lightweight and independent. When working with branches, Git uses the history information stored in commits to recreate the files on a branch, rather than creating multiple copies of your source code.
Creating a Git Branch
Take the following steps to create a Git branch:
In Team Explorer and then open the Branches view.
Right-click the parent branch (usually master) to base your changes and then select New Local Branch From.
Give a branch name in the required field, (optionally) check the Checkout branch box (SnapDevelop automatically checks out to the newly created branch), and then click Create Branch.
Deleting a Git Branch
Take the following steps to delete a Git branch:
In Team Explorer and then open the Branches view.
Locate the branch you want to delete. Make sure the branch is not checked out since you can't delete the branch you are currently working in.
Right-click the branch name and select Delete.
Sharing Code with Push
If you have committed changes to the branches in the local repository, you can then share your code with team members by pushing your local branches to the remote repository. Your commits are added to an existing remote branch or to a new remote branch that contains the same commits as your local branch. Team members can then fetch or pull your commits from the remote repository and review the commits before merging them into the master branch of their local repository.
Take the following steps to share your code in the local repository:
In Team Explorer and select Home and then Sync.
Select Push to upload your commits to the remote branch.
Updating Code with Sync, Fetch, Pull, Merge and Rebase
Synchronizing Local/Remote Commits with Sync
The Sync command pulls remote changes and then pushes local ones. It synchronizes the commits on the local and remote branches.
Take the following steps to synchronize the local/remote commits:
In Team Explorer, select Home and then Sync.
Select Sync.
Downloading Changes with Fetch
The Fetch command allows you to download all commits and new branches pushed to the remote repository but absent in your local repository into your own local repository. It downloads the new commits for your review only, without merging any changes into your local branches.
Take the following steps to fetch changes from the remote repository:
In Team Explorer, select Home and then Sync.
Select Fetch.
Fetching and Merging with Pull
The Pull command performs a fetch and then a merge to download the commits and integrate them into your local branch.
Take the following steps to perform a pull operation.
In Team Explorer, select Home and then Sync.
Select Pull.
Updating Branches with Merge
The Merge command takes the commits retrieved from fetch and integrates the latest changes from one branch into another.
Take the following steps to merge the latest changes from one branch into another:
In Team Explorer, select Home and then Branches.
Select a source branch from the Merge from branch dropdown list.
Check the box for Commit changes after merging (optional) and then select Merge.
Note
If any merge conflict occurs, you will see a message reminding you to resolve the conflict and commit the change. Refer to Resolving Merge Conflicts for instructions on how to resolve merge conflicts.
Updating Branches with Rebase
Rebase serves to address the problem of updating a branch with the latest changes from the main branch. It takes the commits in your current branch and replays them on the commit history of another branch. The commit history of your current branch will be rewritten so that it starts from the most recent commit in the target branch, thus keeping a clean commit history.
Take the following steps to perform a rebase operation:
In Team Explorer, select Home and then Branches.
Check out your source branch for rebasing and then select Rebase.
Select a target branch from the Onto branch dropdown list.
Select Rebase.
Resolving Merge Conflicts
If any merge conflicts occur, you need to resolve the conflicts manually.
Reviewing Commit History
Git manages a full history of your development by using the parent reference information stored in each commit. The commit history allows you to figure out when file changes are made, who made the changes, and what differences exist between the various versions of commits.
To review the commit history, you:
Navigate to the Branches view.
Right-click the desired branch.
Select View History from the right-click context menu.
Working with Subversion Source Control Using SnapDevelop SVN
Subversion source control system maintains all your files, including a complete history of all changes to the files, in a central database called repository. With SnapDevelop SVN in SnapDevelop, you can save changes to your repository with commits, show what changes are made to the repository and who makes the changes, merge various revisions, update your code to a particular revision, and resolve conflicts, if any.
Importing Code into a Subversion Repository
To use SnapDevelop SVN for source control, you need to select Add to Source Control in the bottom right corner of the SnapDevelop IDE and then SnapDevelop SVN.
Then, verify the working copy root.
Importing Code into a New Repository
If you want to import your solution to a new repository, you should:
Select the New Repository radio button.
Specify the location of the new repository. The new repository can be created on the local disk or on a remote server.
Select Import to add your solution to the new repository.
Select Finish to close the Add Solution to Subversion dialog box.
Importing Code into an Existing Repository
If you want to import your solution to an existing repository, you should:
Select the Existing Repository radio button.
Enter the URL of the existing repository.
Select Import to add your solution to the existing repository.
Select Finish to close the Add Solution to Subversion dialog box.
Checking out a Working Copy
You need to check out a working copy from the connected repository. In so doing, you need to specify a directory where you want to place your working copy. Right click in the directory so that the context menu pops up and then select the SVN Checkout command.
In the Checkout dialog box that appears, specify the URL of the repository and the checkout directory, and leave the default settings unchanged.
Committing Changes to the Connected Repository
When you have checked out a working copy from the repository, you can edit and modify the files in Solution Explorer. When you have finished modifying a file, you can commit your changes to the remote repository so that other team members can see your changes and update the changes to their own local working copy.
To save changes with commits, you simply select the modified file, folder, project, or solution and then select Commit from the right-click context menu.
Then, you will see the Commit dialog box, which displays the changed files, such as versioned, non-versioned, added, and deleted files. The changed files are selected by default. If you don’t want a changed file to be committed, you can simply uncheck that file. If you want to commit a non-versioned file, you can check that file. You can quickly check or uncheck files by clicking the links immediately above the list of displayed items. In addition, you can optionally enter a message that describes the changes you have made so that your team members can know what happened.
Updating Your Local Working Copy
When working on a project involving multiple developers, you should periodically ensure that the changes made by your team members are updated into your local working copy. Two update options are offered in SnapDevelop: update to the latest revision and update to a non-latest revision.
Updating to the Latest Revision
To update your local working copy with the latest changes from the remote repository, you simply select a desired file, folder, project, solution, or directory, and select Update from the right-click context menu.
Updating to a Non-Latest Revision
To update your local working copy with the changes from a specific earlier revision, you need to:
Select a desired file, folder, project, solution, or directory.
Select SnapDevelop SVN and then Update to Revision from the right-click context menu.
Specify the specific revision.
Refreshing Status
After a project is loaded into the SnapDevelop IDE, it might be changed outside the IDE. For example, if a developer merges certain changes from the other revisions, such changes will not be detected by the SnapDevelop IDE. Therefore, you need to reload the project into the IDE by selecting an affected file, folder, project or solution and selecting Refresh Status from the right-click context menu.
Showing Change Lists
When you have modified the files, you can view the changes by selecting a modified file, project, or solution and then SnapDevelop SVN > Show Changes from the right-click context menu before committing the changes to the Subversion repository.
Showing Revision Logs
For every change you make and commit, you should provide a log message so that you can later figure out what changes you made and why you made such changes.
To view the file revision logs, you need to select a particular file, folder, project, or solution and then SnapDevelop SVN > Show Log from the right-click context menu.
Reverting Changes
If you change a file in your solution and you find that the changes are not appropriate, you can undo the changes by selecting the modified file and SnapDevelop SVN > Revert Changes from the right-click context menu. Note that you can revert changes only when they are not committed to the Subversion repository.
Specifying Subversion Properties
You can read and set the Subversion properties in the Subversion property page. To go to this page, select a particular file, folder, project or solution and SnapDevelop SVN > Properties from the right-click context menu.
Merging
Branches are used to maintain separate lines of development. At some stage of development, you need to merge the changes made on one branch back into the trunk, or vice versa. SnapDevelop allows you to perform two types of merge: merge a range of revisions and merge two different trees.
Merging a Range of Revisions
To merge a range of revisions, you need to:
Specify the URL of the branch that contains the changes you want to incorporate into your local working copy.
Specify the list of revisions you want to merge.
Merging Two Different Trees
To merge two different trees, you need to:
Specify the URL of the trunk in the From field.
Specify the URL of the feature branch in the To field.
Enter the revision number at which the two trees are synchronized in both the From and To fields.
Resolving Conflicts
Conflicts may occur when more than one developer is changing the same lines of code in the same file and committing the changes to the shared repository.
If any conflict occurs, you will see the name of the conflicted file(s) and a warning message.
In Solution Explorer, the conflicted file(s) will be marked with a little red dot. You have to resolve the conflict(s) manually if you want to synchronize the code successfully.
Editing Text Conflicts
To resolve conflicts, you need to:
Select the conflicted file (marked with little red dot).
Select SnapDevelop SVN and Edit Text Conflicts from the right-click context menu.
Edit the text conflicts.
Marking as Resolved
When you have resolved the text conflicts, you can mark them as resolved by selecting SnapDevelop SVN > Mark as Resolved from the right-click context menu.
Locking
No file is locked by default and any team member who has commit access can commit changes to any file in the repository. If you lock a file, only you can commit changes to that file, and commits by other team members will be rejected until you release the lock. A locked file cannot be modified in any way in the repository.
Getting a Lock
To lock a file, you simply select the file in your working copy and then select SnapDevelop SVN > Get Lock.
Then, you will see the Lock Files dialog box, where you can optionally enter a message so that your team members can see why you have locked the file. If you want to steal the locks from someone else, you can check the Steal the locks box.
Releasing a Lock
To release a lock, you should first know which file is locked. Locked files are displayed in the commit dialog box and selected automatically. If you proceed with the commit, the locks on the selected files are removed even if the files haven’t been modified. If you don’t want to unlock particular files, you can uncheck them. If you want to keep a lock on a file you have modified, you have to check the Keep locks box before committing your changes.
You can release a lock manually by selecting the locked file in your working copy and then selecting SnapDevelop SVN > Release Lock.
Tracking Changes with Blame
Blame displays the author and revision information for the specified files or URLs. Each line of text is annotated at the beginning with the user name and the revision number for the last change to that line.
Showing Disk-Browser and Repo-Browser
The repository browser allows you to view the structure and status of the repository and to work directly on the repository without checking out a working copy.
To display the repository in your local disk, select a file, folder, project or solution and SnapDevelop SVN > Disk-Browser.
To display the repository in the repository browser, select a file, folder, project or solution and SnapDevelop SVN > Repo-Browser.
Excluding from Subversion/Adding to Subversion
Sometimes you don’t want to commit all edited files into the central Subversion repository. In the event that you want to selectively commit the edited files, you can do this by selecting the files you don’t want to include in the repository and selecting SnapDevelop SVN > Exclude from Subversion.
When you have excluded a file from Subversion, the little (yellow or green) dot preceding the file in Solution Explorer disappears. You can add the file to Subversion by selecting the file and then Add to Subversion from the right-click context menu.
Scaffolding Services and Controllers
An independent tutorial on how to scaffold services and controllers in SnapDevelop is available on the following link:
Injecting Services/DataContexts
This tutorial describes how you can use SnapDevelop to inject services or DataContexts in the ConfigureServices method of the project.
Facts you need to know about the ConfigureServices method before you inject a service or DataContext:
ConfigureServices is used to add services to the application.
By default, ConfigureServices has one parameter, of type IServiceCollection, which is a container. Services added to the container will be available for dependency injection, which means that you can inject those services anywhere in your application.
ConfigureServices will be called by the Web host if the Web host instantiates Startup first. Therefore, the Startup constructor, which usually contains the configuration and logging setup, will execute before ConfigureServices.
Injecting Service
You can only inject services in the ConfigureServices method in the Startup.cs file of your project.
To inject service into the service container, right-click at an empty line in the ConfigureServices method and then select Inject Service.
In the Inject Service(s) configuration page that appears, specify the project and folder that accommodates the services and then select the services to inject.
Specifying the Project and Folder Where the Services Reside
Project
Specifies the project that accommodates the services you may want to inject.
Folder
Specifies the folder that accommodates the services you may want to inject. The backslash indicates that all folders are selected.
Selecting the Services to Inject
Uses *Service to filter out all services in the selected folder in the target project. You can select the service to inject from all available services by selecting the corresponding checkbox.
Injection Mode
Scoped
Scoped lifetime services (AddScoped) are created once each time they are requested.
Singleton
Singleton lifetime services (AddSingleton) are created the first time they are requested, or when Startup.ConfigureServices is run and an instance is specified with the service registration. All following requests use the same instance. If the application requires singleton behavior, it is recommended that you allow the service container to manage the service’s lifetime and that you should not implement the singleton design pattern and provide user code to manage the object's lifetime in the class.
Transient
Transient lifetime services (AddTransient) are created each time they are requested from the service container. This lifetime works best for lightweight and stateless services.
Injecting DataContext
Configuring DataContextOptions
DataContext sets up the connection that your project uses to connect to the database. To create a new DataContext for a project, right-click on your project, select Add > New Item > DataContext and then configure the database connection.
After you have created the DataContext, you can inject the DataContext into the service container of the project. Right-click at an empty line in the ConfigureServices method and then select Inject DataContext.
In the Inject DataContext(s) configuration page that appears, specify the project and folder that accommodates the DataContexts and then select the DataContexts to inject.
Specifying the Project and Folder Where the DataContext Resides
Project
Specifies the project that accommodates the DataContexts you may want to inject.
Folder
Specifies the folder that accommodates the DataContexts you may want to inject. The backslash indicates that all folders are selected.
Selecting the DataContext to Inject
Uses *DataContext* to filter out all DataContexts in the selected folder in the target project.
Connection Key
Refers to the connection string name in the appsettings.json file that sets up the site connection automatically.
Performing SQL Queries
An independent tutorial on how to perform SQL queries in SnapDevelop is available on the following link:
Creating a Web API
An independent tutorial on how to create a Web API in SnapDevelop is available on the following link:
Testing a Web API
An independent tutorial on how to test a Web API in SnapDevelop is available on the following link:
Unit Testing
Use SnapDevelop to run unit tests to keep code health, and find errors before the release of an application. Run your unit tests frequently to make sure your code is working properly.
Creating Unit Tests in SnapDevelop
This section describes how to create a unit test project.
Open the project you want to test in SnapDevelop.
For the purpose of demonstration, this tutorial tests a simple "Hello World" project.
In Solution Explorer, select the solution node. Then, right-click the selected node and choose Add > New Project.
In the Add New Project dialog box, select the xUnit Test (.NET Core) as the test framework. Give a name for the test project, and then click OK.
The test project is added to your solution.
In the unit test project, add a reference to the project you want to test by right-clicking on Dependencies and then choosing Add Reference.
Select the project that contains the code you'll test and click OK.
Add code to the unit test method.
You can add as many unit test methods as possible so that you can examine your project more comprehensively. In the following image, a test method is offered as an example.
Running Unit Tests in Test Explorer
Open Test Explorer by choosing Test > Test Explorer from the top menu bar.
Run all unit tests by clicking Run All, or run the selected tests by choosing Test > Run Selected Tests from the top menu bar (you can also select the tests you want to run, right-click the tests and then choose Run Selected Tests).
You can also choose to run Failed/Not Run/Passed tests via Test from the top menu bar.
The pass/fail bar at the top of the Test Explorer window is animated as the tests run. After the tests have completed, the pass/fail bar turns green if all tests passed or turns red if any test failed.
Checking Test Results
As the tests run, Test Explorer displays the results. The details pane at the bottom of Test Explorer displays a summary of the test run.
You can select a specific test to view the details of such a test. The test details pane displays the following information:
- The source file name and the line number of the test method.
- The status of the test.
- The elapsed time that the test method took to run.
If the test fails, the details pane displays the following information:
- The message returned by the unit test framework for the test.
- The stack trace at the time the test failed.
Debugging C# Code
Debugging is used to track the running process of code. Exceptions may occasionally occur during the running of a program. A debugger can be used to effectively and accurately locate an exception.
The SnapDevelop debugger is a Mono debugger, a cooperative debugger that is built into the Mono runtime. It offers a number of ways to see what your code is doing while it runs. For example, you can set breakpoints; you can step through your code and examine the values stored in variables; you can set watches on variables to see when values change; you can inspect the call stack to understand the execution flow of an app.
This tutorial presents a brief introduction to the basic features of the SnapDevelop debugger in a step-by-step manner. In this tutorial, you will learn to:
- Set a breakpoint
- Navigate code in the debugger
- Check variables via data tips and Locals window
- Add a watch on a variable
- Check the call stack
Creating a Project and Add Code
This section shows you how to create a C# project using SnapDevelop and to add code that can help to demonstrate the key features of the SnapDevelop debugger. The code added here is simple enough for demonstration purposes.
Open SnapDevelop 2019.
From the top menu bar, choose File > New > Project. In the left pane of the New Project dialog box, under C#, choose .NET Core, and then in the middle pane choose Console App (.NET Core). Then, give a name for the project and click OK.
SnapDevelop creates the project.
In Program.cs, replace the current code
using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; namespace Debugging { class Program { static void Main(string[] args) { Console.WriteLine("Hello World!"); } } }
with the following code:
using System; using System.Collections.Generic; public class Animal { public int X { get; private set; } public int Y { get; private set; } public int Appetite { get; set; } public int Weight { get; set; } public virtual void Feed() { Console.WriteLine("Performing base class feeding tasks"); } } class Dog : Animal { public override void Feed() { Console.WriteLine("Feeding a dog"); base.Feed(); } } class Cat : Animal { public override void Feed() { Console.WriteLine("Feeding a cat"); base.Feed(); } } class Pig : Animal { public override void Feed() { Console.WriteLine("Feeding a pig"); base.Feed(); } } class Program { static void Main(string[] args) { var animals = new List<Animal> { new Dog(), new Cat(), new Pig() }; foreach (var animal in animals) { animal.Feed(); } Console.WriteLine("Press any key to exit."); Console.ReadKey(); } }
In this tutorial, we'll look closely at this app using the debugger and demonstrate the key features of the SnapDevelop debugger.
Launching the Debugger
Press F5 (or select Debug > Start Debugging) or the Debugging button on the toolbar.
F5 starts the app with the debugger attached to the app process. Since nothing was done to debug the code, the app just loads and generates the following console output.
Feeding a dog Performing base class feeding tasks Feeding a cat Performing base class feeding tasks Feeding a pig Performing base class feeding tasks Press any key to exit.
Stop the debugger by pressing the red stop button on the toolbar.
Setting a Breakpoint and Launching the Debugger
Breakpoints are the most basic feature of the SnapDevelop debugger. A breakpoint indicates where the debugger should suspend your running code so that you can look at the values of variables, or whether or not a block of code is executed.
In the foreach loop of the Main function, set a breakpoint by clicking animal.Feed() and then pressing F9:
A red dot appears in the left margin of the line of code you clicked.
If you want to delete or disable the breakpoint, you just hover over the breakpoint, right-click your mouse and make the choice accordingly.
Press the Debugging button or F5, the app launches, and the debugger runs to the line of code where you set the breakpoint.
The yellow arrow in the left margin points to the line of code where the debugger paused, which suspends app execution simultaneously (this line of code has not yet been executed).
If the app is not yet running, F5 launches the debugger and stops at the first breakpoint. Otherwise, F5 continues running the app to the next breakpoints (if any).
Breakpoints are useful when you know a block of code or a line of code that you decide to examine in detail.
Navigating Code in the Debugger
Usually we use the step commands (step into, step over and step out) and Run to Cursor command to navigate code in the debugger.
Step Into
When the debugger pauses at the animal.Feed method call in the Main method, press F11 (or select Debug > Step Into) to advance into code for the Dog class.
F11 is the Step Into command. It advances the app execution one statement each time and serves to examine the execution flow in the most detail. By default, the debugger skips over non-user code.
Step Over
When the debugger advances to the Feed method in the Dog class, press F10 (or select Debug > Step Over) several times until the debugger stops on the base.Feed method call, and then press F10 one more time.
The debugger does not step into the Feed method of the base class. F10 steps over the methods or functions in your app code, but the block of code is still executed. By pressing F10, rather than F11, on the base.Feed method call, we skip over the implementation code for base.Feed.
Step Out
When you have examined the Feed method in the Dog class, press Shift + F11 (or select Debug > Step Out) to get out of the method but stay in the debugger.
The Step Out command resumes app execution and advances the debugger until the current function returns.
You will be back in the foreach loop in the Main method.
Run to Cursor
While the program is in debug mode, right-click a line of code in your app and select Run to Cursor. This command sets a temporary breakpoint at the current line of code. If breakpoints have already been set, the debugger pauses at the first breakpoint that it hits. You can use this command when you need to quickly set a temporary breakpoint.
Checking Variables with Data Tips and Locals Window
Variables can be checked via data tips and Locals window.
Checking Variables with Data Tips
Usually, when you are debugging an issue, you try to figure out whether variables are storing the desired values. The data tips are a good way to do it.
When you pause on the animal.Feed() method, hover over the animal object and you see its default property value, which is Dog.
Expand the animal object to see its properties, such as the Weight property, which has a value of 0.
Press F10 (or select Debug > Step Over) several times to iterate once through the foreach loop until the debugger pauses again on the animal.Feed() method.
Hover over the animal object again, and this time you have a new object type Cat.
Checking Variables with the Locals Window
The Locals window displays the variables that are in the current execution context.
When the debugger pauses in the foreach loop, click the Locals window, which is by default open in the lower left pane of the code editor.
If it is closed, open it by selecting Debug > Windows > Locals from the top menu bar.
Press F11 to advance the debugger so that you can check the variables in the execution contexts.
Adding a Watch
In the code editor window, right-click the animal object and select Add Watch.
The Watch window opens at the bottom left corner of the code editor. You can use a Watch window to specify an expression or a variable you want to monitor.
After you have set a watch on the animal object, you can see its value change as you run through the debugger. Unlike the other variable windows, the Watch window always displays the variables you are monitoring. They're grayed out when out of scope and an error is reported.
Checking the Call Stack
The Call Stack window displays the order in which functions or methods are called. It can be used to examine and understand the execution flow of a program.
When the debugger pauses in the foreach loop, click the Call Stack window, which is by default open in the lower right pane.
If it is closed, open it while paused in the debugger by selecting Debug > Windows > Call Stack.
Click F11 several times until you see the debugger pause in the base.Feed method for the Dog class in the code editor. Look at the Call Stack window.
The first line displays the current function (the Dog.Feed method in this app). The second line shows that Dog.Feed is called from the Main method.
Compiling and Building
Project files must be compiled and built before they can be used to generate an application to be deployed to the users. Before you compile and build a project or solution, you need to configure the various project properties and solution properties so that you can build your project or solution in desired ways.
Configuring Project Properties
This section describes how you can configure the various project properties, including properties related to application, build, build events, package, debug and signing, so that you can build your projects in ways that best fit your needs. To configure the Project Properties settings, right-click a project node in Solution Explorer, and then select Properties so that the project designer pops up.
Application
The Application settings allow you to specify the various application configurations, such as assembly name, target framework, output type, and ways of resources management. To access these settings, click the Application tab in the project designer.
General Settings
The following settings allow you to specify some basic configurations for the application, including the assembly name, default namespace, target framework, output type, as well as the entry point when you start an application.
Assembly Name
Designates the name of the output files that contain the assembly metadata. If it is changed here, the output assembly name will be changed too.
Default Namespace
Specifies the base namespace for files newly added to the project.
Target Framework
Defines the .NET version that your application targets. The dropdown list can have different values depending on the .NET versions that are installed on your current machine. If your project targets .NET Core, you can choose any of the following .NET Core versions.
Output Type
Specifies the type of application to build. The output type varies depending on the type of project you create. For a Web application project, for example, you must select Console Application as the output type. For a Console App project, you must select Console Application.
Startup
Designates the entry point to be called when you start an application. The entry point is usually set either to the main form in your program or to the Main procedure that runs when you start the application. If your compilation has multiple types that contain a Main method, you can specify which type contains the Main method that you want to use as the entry point into the application. This property for class libraries defaults to (Not set) because they don’t have an entry point.
Resources
The Resources settings allow you to specify how resources of your application will be managed.
Icon and Manifest
The Icon and Manifest option is enabled by default. These settings allow you to select your own icon, or to select different manifest generation options. In most cases, you can rely on this radio button to manage your application resources. If you want to provide a resource file for the project, select the Resource file radio button instead.
Icon
Designates the .ico file that you want to use as the icon for your application. Enter the name of the .ico file, or click Browse to select an existing icon. Note that this feature is available only for .NET Framework apps created outside SnapDevelop.
Manifest
Selects a manifest generation option when the application runs on Windows Vista under User Account Control (UAC). This option can have the following values:
- Embed manifest with default settings. This is the default option. It embeds security information into the executable file of the application, specifying that
requestedExecutionLevelbe
AsInvoker.
- want to provide a resource file for the project. Enter a path name of a resource file or click Browse to add a Win32 resource file to the project.
Build
The Build settings allow you to configure the various build properties, such as conditional compilation symbols, target platform, warning and error treatment, as well as output management. To access these settings, click the Build tab in the project designer.
Configuration and Platform
SnapDevelop currently supports two ways of build: Debug and Release, and the platform on which an application runs is set to Any CPU by default. Therefore, you can choose to perform the Debug configuration on any CPU or the Release configuration on any CPU. However, the Debug and Release options cannot be selected at the same time. The Configuration and Platform options make it possible for you to select the specific configuration and platform you want to display or modify.
Configuration
Specifies the configuration settings (Debug or Release) you want to display or modify.
Platform
By default, the platform is set to Any CPU, which means that you can build and deploy your application on any development platform.
General Settings
The general settings allow you to configure various C# compiler properties, including conditional compilation symbols and target development platform.
Conditional Compilation Symbols
Designates one or more symbols to perform conditional compilation. Use a semi-colon or comma to separate symbols if you specify multiple symbols here. If you designate a conditional compilation symbol here, you can use the symbol to compile your project-wide source files conditionally without having to define such a symbol in individual files.
The -define option defines names as symbols in all source code files in your program. The -define option is equivalent to a #define preprocess directive except that this option is applicable for all files in your project. A symbol specified here remains defined unless otherwise undefined using an #undef directive. If you use the -define option, an #undef directive in one file has no project-wide effect.
You can use symbols created by this option with #if, #else, #elif, and #endif to perform conditional compilation for the source files in your project.
For example, if you have created a Console App using SnapDevelop and you enter ABC; HelloWorld in the Conditional compilation symbols box,
you can set the project compilation conditions in the .cs file in Solution Explorer.
If you run the application, you will get the following output.
If you un-define ABC using an #undef directive and then run your application again,
you will get the following output:
Define DEBUG Constant
Select this option to treat DEBUG as a constant symbol in all source code files in your project. Enabling this option has the same effect as entering DEBUG in the Conditional compilation symbols box.
Define TRACE Constant
Select this option to treat TRACE as a constant symbol in all source code files in your project. Enabling this option has the same effect as entering TRACE in the Conditional compilation symbols box.
Platform Target
Designates the processor on which the output file is to run. Select Any CPU if you want your output file to run on any processor, select x86 if you want it to run on any 32-bit Intel-compatible processor, or select x64 if you want it to run on any 64-bit Intel-compatible processor.
Prefer 32-bit
If you enable this option, your application runs as a 32-bit application on both 32-bit and 64-bit Windows operating systems. If you disable this option, your application runs as a 32-bit application on 32-bit Windows operating systems and as a 64-bit application on 64-bit Windows operating systems. Note that the Prefer32-bit option is available only if the Platform target list is set to Any CPU.
If you run an application as a 64-bit application, the pointer size doubles, and incompatibility may occur with other libraries that are exclusively 32-bit. Run a 64-bit application only if the application requires over 4 GB of memory or 64-bit instructions significantly improve performance.
Allow Unsafe Code
Allows code that uses the unsafe keyword to compile. By default, C# does not support pointer arithmetic in order to ensure type security. However, you can use the unsafe keyword to define an unsafe context in which pointers can be used. Unsafe code has the following properties:
- Unsafe code can be types, methods, or code blocks.
- Unsafe code may improve an application’s performance by removing array bounds checks.
- Unsafe code must be compiled with the unsafe compiler option.
- Unsafe code causes security problems.
- Unsafe code is necessary when you call native functions that require pointers.
Optimize Code
If you enable this option, the C# compiler optimizes your code by making your output files smaller, faster, and more efficient.
Errors and Warnings
The following settings configure the error and warning options when you are building your project.
Warning Level
Specifies the warning level for the compiler to display. The warning levels range from 0 to 4. Higher warning levels show more warnings while lower warning levels show more serious warnings. The following table illustrates the meanings of individual warning levels.
Suppress Warnings
Blocks the generation of one or more warnings. Use semi-colon or comma to separate warning numbers if there is more than one warning.
Treat Warnings as Errors
The following settings allow you to determine which warnings are treated as errors.
None
Treats no warnings as errors.
All
Treats all warnings as errors.
Specific Warnings
Treats specific warnings as errors. Use a semi-colon or comma to separate warning numbers if you want to specify multiple warnings as errors.
Output
The following settings allow you to specify the output configuration for the build process.
Output Path
Designates the path of the output files. Select Browse to specify a path, or directly enter a path in this box. If you don’t specify a path, the compiled files will be output into the default path, which is bin\Debug or bin\Release\.
XML Documentation File
Designates the name of a file which contains documentation comments. If you enable this option and then build your project, you will see a .xml file in Solution Explorer.
Register for COM Interop
Specifies that your managed application will generate a COM callable wrapper that allows a COM object to interact with your managed application. COM is a language-neutral way of implementing objects that can be used in environments other than the one in which they are created, even across machine boundaries. It offers a stable application binary interface that does not change between compiler releases.
Note that your application should be a .NET Framework application and the Output type in the Application page of the project designer for this application must be Class Library so that the Register for COM interop option is available.
Advanced Build Settings
The Advanced Build Settings dialog box allows you to specify advanced build configurations, including the version of a programming language, the way compiler errors are reported, debugging information, file alignment, as well as library base address.
General Settings
The following settings allow you to specify general build configurations, including the version of a programming language and the way compiler errors are reported.
Language Version
Designates the version of the programming language to use. Each version has its own specific features. This option makes it possible for you to force the compiler to enable some of the implemented features, or to allow only the features that are compatible with an existing standard. The following table displays the various language versions that are currently available in the SnapDevelop compiler.
Internal Compiler Error Reporting
Specifies whether to report internal compiler errors. The prompt is selected by default, which means that you will receive a prompt if any compiler error occurs. If you select none, the error will be reported only in the output of the text compiler. If you select send, an error report will be sent automatically. If you select queue, error reports will be queued.
Check for Arithmetic Overflow/Underflow
Specifies whether runtime exceptions occur if an integer arithmetic statement exceeds the scope of the checked or unchecked keywords and leads to a value that exceeds the range of the data type.
Output Settings
The following settings allow you to specify advanced output configurations, including the type of generated debugging information, the size of the output file, as well as the library base address.
Debugging Information
Determines the type of debugging information that is generated by the compiler. The following table displays the various types of debugging information and their respective meanings.
File Alignment
Determines the size of the output file. You can select any value, measured in bytes, from the dropdown list, which includes 512, 1024, 2048, 4096, and 8192. The size of the output file is determined by aligning a section on a boundary that is a multiple of this value.
Library Base Address
Designates the preferred base address where a DLL can be loaded. The default base address for a DLL is specified by the .NET Framework common language runtime.
Build Events
The Build Events settings allow you to specify build configuration instructions and specify the conditions for the running of post-build events. To access these settings, click the Build Events tab in the project designer.
Configuration and Platform
Configuration
This setting is read-only on this page.
Platform
This setting is read-only on this page.
Pre-build Event Command Line
Allows you to write commands to execute before the build starts. If you want to write long commands, you can enter your commands in the Pre-build Event Command Line input box that pops up when you click Edit Pre-build.
To demonstrate the Pre-build Event Command Line feature, we use an ECHO command as an example.
If you build the project, you can see the following output.
Macros
We have actually used a macro (OutDir) in the previous example. The command line input boxes allow you to insert a variety of macros, which are case-insensitive, and which can be used to designate locations for files or to get the actual name of the input file. The following table lists the macros and illustrates the meanings of individual macros.
Insert
Allows you to insert the macro(s) you select from the macro table into the command line input box.
Post-build Event Command Line
Allows you to write commands to execute after the build finishes. If you want to write long commands, you can enter your commands in the Post-build Event Command Line input box that pops up when you click Edit Post-build.
Take the following steps to demonstrate the Post-build Event Command Line feature:
Create a .bat file in a designated path (e.g., disk D).
Edit the .bat file (e.g., add an ECHO command).
Enter commands in the command line input box (e.g., add a Call statement).
If you build the project, you can see the following output.
Run the Post-build Event
Specifies the conditions for the running of the post-build events. The following table lists the various conditions and the results of applying these conditions.
Package
The Package settings allow you to configure the various package properties, such as package generation, package ID, package version, package author, output path, as well as all details about the package. To access these settings, click the Package tab in the project designer.
Configuration and Platform
Configuration
This setting is read-only on this page.
Platform
This setting is read-only on this page.
Require License Acceptance
If you enable this option, you will be asked whether to accept the package license before you install a particular package.
Generate NuGet Package on Build
If you enable this option, you can use SnapDevelop to automatically generate the NuGet package when you build the project. Note that your project must be a Class Library project and that you can meaningfully configure the various package properties only if the Generate NuGet Package on Build option is enabled. The following table lists the variety of package properties and the meanings of individual package properties.
Debug
The Debug settings allow you to specify how the SnapDevelop debugger behaves in a C# project. To access these settings, click the Debug tab in the project designer.
Configuration and Platform
Configuration
This setting is read-only on this page.
Platform
This setting is read-only on this page.
Start Settings
You can configure the start settings, including the start actions and start options, for an application. The following table lists the variety of start actions and options you can configure for your application and explains the meaning of each start action/option.
The Signing settings allow you to sign the application and deployment manifests and also to sign the strong named assembly. To access these settings, click the Signing tab in the project designer.
Configuration and Platform
Configuration
This setting is read-only on this page.
Platform
This setting is read-only on this page.
Assembly Signing
The following settings allow you to sign the strong named assembly.
Sign the Assembly
You can enable this check box to sign the assembly and create a strong name key file. Enabling this option allows you to sign the assembly using the Al.exe tool supported by the Windows Software Development Kit (SDK).
Choose a Strong Name Key File
You can specify a new or existing strong name key file that can be used to sign the assembly. Select New to create a new key file or Browse to choose an existing one. If you select New, the Create Strong Name Key dialog box pops up, which allows you to designate a key file name and protect the key file with a password. The password must be at least six characters in length. If you specify a password, a Personal Information Exchange (.pfx) file is created. If you don’t specify a password, a Strong Name Key (.snk) file is created. The two types of files are briefly introduced in the following table.
If you build a project with one of the key files created, the created file will be used to sign the assembly.
Delay Sign Only
You can enable this check box to delay assembly signing. If this option is enabled, your project cannot be debugged and will not run. However, you can use the strong name tool (Sn.exe) with the
-Vr option to skip verification during development.
You can take the following steps to sign an assembly:
Enable the Sign the assembly option.
Create a strong name key file and specify a password (a .pfx file is created in Solution Explorer).
Build the project.
You can view the components (for example, .dll files) in the compiled project files to check if an assembly is signed.
This image indicates that an assembly is signed.
This image indicates that an assembly is not signed (probably because the Sign the assembly option is not enabled).
If you enable the Delay sign only option, you will obtain the public key but the assembly is currently not signed.
Configuring Solution Properties
A solution configuration specifies how projects in the solution are to be built and deployed. To configure the Solution Properties, right-click your solution node in Solution Explorer, and then select Properties so that the Solution Properties dialog box pops up.
Common Properties
The common properties specify which project to run in your solution that contains two or more projects, and determine the dependency relationship between the various projects in your solution.
Startup Project
The Startup Project options allow you to specify which project to run when you launch the SnapDevelop debugger. To configure the Startup Project settings, expand the Common Properties node, and then select Startup Project.
Current Selection
Enable this option if you want the current project to run when you launch the SnapDevelop debugger.
Single Startup Project
Enable this option if you want any project to run when you launch the SnapDevelop debugger.
Project Dependencies
When you build a solution, you need to build some projects first in order to generate executable code that can then be used by the other projects. The Project Dependencies settings allow you to determine the desired build order for projects in your solution. To configure the Project Dependencies settings, expand the Common Properties node, and then select Project Dependencies.
Projects
The dropdown list has all projects in your solution. You can select any project that uses executable code generated by another project or other projects.
Depends on
You can select any project that generates executable code used by the project you selected in the Projects dropdown list.
Note that circular dependency is not allowed. For example, if project A depends on project B, which in turn depends on project C, then project C cannot depend on project A or project B and project B cannot depend on project A. If you create a circular dependency, you will receive the following warning message.
Please also note that the projects selected on the Depends on pane may not be actually built. Whether the projects are built or not depends on the selection of the check boxes for the projects in the active solution build configuration.
Configuration Properties
The Configuration Properties settings allow you to manage the various properties for the entire solution. You can decide whether to build or deploy a particular project, and what configuration of a particular project to build or deploy on what development platform. To configure the Configuration Properties settings, expand the Configuration Properties node, and then select Configuration.
Configuration Manager
You can select Configuration Manager to open the Configuration Manager dialog box, which allows you to create and specify configurations and platforms at both the solution level and the project level.
Active Solution Configuration
Specifies what configuration is built at the solution level when you select Build Solution on the Build menu. You can select one of the two default configurations - Debug and Release, or you can add new ones. If you realize that an existing configuration name is not appropriate, you can rename it. If you do not need a particular configuration anymore, you can remove it from the Active Solution Configuration box. This will remove all solution and project configurations you specified for that combination of configuration and platform.
To add a new configuration, you need to:
Select <New> from the Active Solution Configuration box.
Enter a name for the new configuration on the New Solution Configuration window.
Select a configuration from the Copy settings from box if you want to use the settings from an existing solution configuration or select <Empty> otherwise.
Select the Create new project configurations check box if you want to create project configurations simultaneously.
To rename a solution configuration, you need to:
Select <Edit> from the Active Solution Configuration box.
Select the configuration you want to modify on the Edit Solution Configurations window.
Select Rename and then enter a new name.
To remove a solution configuration, you need to:
Select <Edit> from the Active Solution Configuration box.
Select the configuration you want to remove on the Edit Solution Configurations window.
Select Remove.
Active Solution Platform
Specifies the solution-level development platform you want your solution to target. The development platform defaults to Any CPU, but you can also create new platforms. If you realize that an existing platform name is not appropriate, you can rename it. If you do not need a particular platform anymore, you can remove it from the Active Solution Platform box. This will remove all solution and project configurations you specified for that combination of configuration and platform.
To create a new solution platform, you need to:
Select <New> from the Active Solution Platform box.
Select a new platform (x64 or x86) from the Type or select the new platform box.
Select a platform from the Copy settings from box if you want to use the settings from an existing solution platform or select <Empty> otherwise.
Select the Create new project configurations check box if you want to create project platforms simultaneously.
To rename a solution platform, you need to:
Select <Edit> from the Active Solution Platform box.
Select the platform you want to modify on the Edit Solution Platforms window.
Select Rename and then enter a new name.
To remove a solution platform, you need to:
Select <Edit> from the Active Solution Platform box.
Select the platform you want to remove on the Edit Solution Configurations window.
Select Remove.
Project Contexts
Although the combination of configuration and development platform has been specified at the solution level when you specify active solution configuration and active solution platform, Project Contexts allows you to finally decide whether to build a particular project, and what configuration of the project to build or deploy on what development platform. The following table lists the various project contexts and their respective meanings.
Configuration
Displays the results of your setting of the Active Solution Configuration in Configuration Manager.
Platform
Displays the results of your configuration of the Active Solution Platform in Configuration Manager.
Project Contexts
Displays the results of your configuration of the project contexts in Configuration Manager. Note that the configurations specified in Configuration Manager can be modified here. The modifications made here are equivalent to those made in Configuration Manager. If you have modified the project contexts properly, you need to click Apply to validate such modifications.
Building, Rebuilding, or Cleaning Projects and Solutions
After you have configured the project properties and solution properties, you can then build, rebuild, or clean the projects and solutions.
Building, Rebuilding, or Cleaning a Single Project
You can take the following steps to build, rebuild, or clean a single project.
Select the node of a project you want to build/rebuild/clean in Solution Explorer.
Select Build on the menu bar, and then select one of the following options:
Building, Rebuilding, or Cleaning an Entire Solution
You can take the following steps to build, rebuild, or clean an entire solution.
Select the solution node in Solution Explorer.
Select Build on the menu bar, and then select one of the following options:
Publishing and Hosting Apps Developed in SnapDevelop
An independent tutorial on how to publish a project in SnapDevelop is available on the following link:
|
https://docs.appeon.com/appeon_online_help/snapdevelop2019r2/SnapDevelop_Users_Guide/index.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
dnsres_init, dnsres_gethostbyname, dnsres_gethostbyname2, dnsres_gethostbyaddr, dnsres_getaddrinfo - non blocking DNS resolving library
#include <dnsres.h> int dnsres_init(struct dnsres *_resp); void dnsres_gethostbyname(struct dnsres* res, const char *name, void (*cb)(struct hostent *hp, int error, void *arg, void *arg); void dnsres_gethostbyname2(const char *name, int af, void (*cb)(struct hostent *hp, int error, void *arg, void *arg); void dnsres_gethostbyaddr(const char *addr, int len, int af, void (*cb)(struct hostent *hp, int error, void *arg, void *arg); void dnsres_getaddrinfo(struct dnsres *, const char *, const char *, const struct addrinfo *, void (*)(struct addrinfo *, int, void *), void *);;
The dnsres_init() is used to initialize the dnsres library. If you are developing a multi-threaded application, you need one struct dnsres per thread. The dnsres_gethostbyname(), dnsres_gethostbyname2() and dnsres_gethostbyaddr() functions each call their provided callback function with dnsres_hostent { char *h_name; /* official name of host */ char **h_aliases; /* alias list */ int h_addrtype; /* host address type */ int h_length; /* length of address */ char **h_addr_list; /* list of addresses from name server */ }; The members of this structure are: h_name Official name of the host. h_aliases A NULL compatibility. The function dnsres_gethostbyname() will search for the named host in the current domain and its parents using the search lookup semantics detailed in resolv.conf(5) and hostname(7). dnsres_gethostbyname2() is an advanced form of gethostbyname() which allows lookups in address families other than AF_INET. Currently, the only supported address family besides AF_INET is AF_INET6. The dnsres_gethostbyaddr() function will search for the specified address of length len in the address family af. The only address family currently supported is AF_INET.
HOSTALIASES A file containing local host aliases. See hostname(7) for more information. RES_OPTIONS A list of options to override the resolver’s internal defaults. See resolver(3) for more information.
/etc/hosts /etc/resolv.conf
Error return status from dnsres_gethostbyname(), dnsres_gethostbyname2(), and dnsres_gethostbyaddr() is indicated by a null pointer passed to the callback function. The integer dr_errno may then be checked in struct dnsres to see whether this is a temporary failure or an invalid or unknown host. The variable h_errno can have the following values: DNSRES_HOST_NOT_FOUND No such host is known. DNSRES_TRY_AGAIN This is usually a temporary error and means that the local server did not receive a response from an authoritative server. A retry at some later time may succeed. DNSRES_NO_RECOVERY Some unexpected server failure was encountered. This is a non-recoverable error. DNSRES. DNSRES_NETDB_INTERNAL An internal error occurred. This may occurs when an address family other than AF_INET or AF_INET6 is specified or when a resource is unable to be allocated. DNSRES_NETDB_SUCCESS The function completed successfully.
getnameinfo(3), resolver(3), hosts(5), resolv.conf(5), hostname(7), named(8)
The herror() function appeared in 4.3BSD. The endhostent(), gethostbyaddr(), gethostbyname(), gethostent(), and sethostent() functions appeared.
The dnsres library was hacked together by Niels Provos. Heavy use was made of the exisiting BSD resolver library.
|
http://huge-man-linux.net/man3/dnsres.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Scraping with JavaScript.
Initially, I wrote a comment response to the reviewer and left it at that, but then another person on Twitter mentioned the same thing -- a small chapter on JavaScript just isn't enough, when most websites today use JavaScript.
Because this seems to be a common reaction, and because I think it's a very interesting topic -- "what is JavaScript and how do you scrape it?" I'd like to address it in greater detail here, and explain why I wrote the book the way I did, and what I will change in upcoming editions.
To understand how to approach the problem of scraping JavaScript, you have to look at what it does. Ultimately, all it does is modify the HTML and CSS on the page, as well as send requests back to the server. "But, but..." you might be thinking "What about drag and drop? Or JavaScript animations? It makes pretty things move!" Just HTML and CSS changes. All of them. Ajax loading -- HTML and CSS changes. Logging users in in an Ajax form -- a request back to the server followed by HTML and CSS changes.
Yes, sure, you can scrape the JavaScript itself, and in some cases this can be useful -- such as scraping latitudes and longitudes directly from code that powers a Google Map, rather than scraping the generated HTML itself. And, in fact, this is one technique I mention in the book. However, 99% of the time, what you're going to be doing (and what you can fall back on in any situation), is executing the JavaScript (or interacting with the site in a way that triggers the JavaScript), and scraping the HTML and CSS changes that result.
Contrary to, what seems to be, popular belief, scraping, parsing, cleaning, and analyzing HTML isn't useless in the world of JavaScript -- it's necessary! HTML is HTML is HTML, whether it's generated by JavaScript on the front end, or a PHP script on the back end. In the case of PHP, the server takes care of the hard work for you, and in the case of JavaScript, you have to do that yourself.
But how? If you've read the book, you already know the answer: Selenium and PhantomJS.
from selenium import webdriver
import time
driver = webdriver.PhantomJS(executable_path='')
driver.get("") time.sleep(3)
print(driver.find_element_by_id("content").text)
driver.close()
These seven lines (including the print statement) can solve your Ajax loading problems. Note: There are also ways of waiting to return content by checking to see if a particular element on the page has loaded or not before returning, but waiting a few seconds usually works fine as well.
But, of course, there's another class of HTML and CSS changes JavaScript can create -- those are user-triggered. And in order to get user-triggered changes, well, the user has to trigger the page. In Chapter 13, "Testing with Selenium," I discuss these in detail.
Key to this sort of testing is the concept of Selenium elements. This object was briefly encountered in Chapter 10, and is returned by calls like:
usernameField = driver.find_element_by_name('username')
Just as there are a number of actions you can take on various elements of a website in your browser, there are many actions Selenium can perform on any given element. Among these are:
myElement.click()
myElement.click_and_hold()
myElement.release()
myElement.double_click()
myElement.send_keys_to_element("content to enter")
All of these actions can be strung together in chains, put into functions to act on variable elements, and can even be used to drag and drop elements (see Github:...)
After your JavaScript has been executed, whether it's something you had to wait around for to finish, or take action to make happen -- You scrape the resulting HTML! That's all covered in the first half of the book. Let me say that again: Knowing how to scrape HTML is not just good for (as one reviewer put it) scraping Angelfire and Geocities sites -- you need it to scrape every site, whether it's loaded with JavaScript, a server side script, or monkey farts*. If there's content you can see in your browser, there's HTML there. You don't need special tools to scrape JavaScript pages (other than the tools necessary to execute the JavaScript, or trigger it to execute) just like you don't need special tools to scrape .aspx pages and PHP pages.
So there you have it, in just a few paragraphs, I've covered all you need to know to scrape every JavaScript-powered website. In the book, I devote a full 10 pages to the topic, followed by sections in later chapters that revisit Selenium and JavaScript execution. In future editions, I will likely take some time to explain why you don't need an entire book devoted to "scraping JavaScript sites" but that information about scraping websites in general is relevant -- and necessary -- to scraping JavaScript. Hindsight is 20/20!
*I know someone's going to take this as an opportunity to mention Flash, Silverlight, or other third-party browser plugins. I know, I know. You don't have to mention it. I'm hoping they go away! Sans extra software you have to add to your browser to make it work however, this principle holds true.
Matt Ritter (not verified)
Thu, 07/30/2015 - 07:18
Permalink
Two points to add:
Two points to add:
- I tried to be a smart alec and asked what you'd do if the website is actually one big jpg. Turns out there's a chapter on that
- People are entirely too dismissive of the amazing things you can find with a dedicated search of Angelfire sites . For example:-...
Data Extraction (not verified)
Tue, 08/11/2015 - 08:23
Permalink
Web Scraping Is Awesome
First of all nice tutorial.I am web scraper and created many web scraper using php and dotnet along with java script.
Add new comment
|
http://pythonscraping.com/blog/javascript
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Hello,
I'm trying to follow this tutorial in order to manage my WebSphere Traditional 8.0 (deploy and remove applications) through programming. But I cannot find the necessary dependencies
import com.ibm.websphere.management.application.; import com.ibm.websphere.management.application.client.; import com.ibm.websphere.management.*;
I've looked in the Maven Repo but I cannot seem to find it. Can someone point me in the right direction? Which jar should I be using?
Thank you
Answer by Phil_Hirsch (272) | Jun 13, 2018 at 11:06 AM
I think those are in AppServer/runtimes/com.ibm.ws.admin.client_8.0.0.jar .
Found. But is this .jar in some online repository? I would like to import it using Gradle
145 people are following this question.
Why is WebSphere Application server logging the following ffdc: com.ibm.ws.management.event.PushRemoteSender.pushNotifications 1 Answer
WAS ND Error HMGR0024W 1 Answer
Why am I receiving a 404 Error on the browser when attempting to access the WebSphere Application Server Administrative Console which resides on Unix? 1 Answer
Why does the attempt to copy the IBM WebSphere Application Server 8.5.5.4 for Jazz for Service Management for Linux, 64-bit package (part number CN553ML) fail when using the IBM Packaging Utility ? 1 Answer
Is it possible to configure WebSphere datasource against impala DB on hadoop to authenticate using Kerberos? 1 Answer
|
https://developer.ibm.com/answers/questions/453204/managing-websphere-through-programming.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
If your algorithm needs to read or write data from a MySql database, you can do so by either making the database connection directly from within your own code, or by using our helper algorithms.
Option 1: Connect directly from within your own algorithm code
There are a variety of MySql packages publicly available. For Python, we recommend PyMySql. For other languages, see w3resource.", "passwd":"somepass", "db":"somedb" }
Then, inside your own algorithm, add a MySql library to your dependencies file (in this example,
PyMySql), then load the credentials from the JSON file and use them to make your DB connection:
import Algorithmia import pymysql client = Algorithmia.client() def apply(input): query = "SELECT name, address FROM employees" # load the credentials file and make sure it has the required fields try: # replace the data:// path below with your credentials file creds = client.file('data://.my/SomePrivateFolder/MySqlCredentials.json').getJson() assert {'host','user','passwd','db'}.issubset(creds) except: raise Exception('Unable to load database credentials') # connect to the database and run a query conn = pym MySql Database connection via MySqlConfig ( docs). Note that this creates credentials which are available only to you, so if another user wants to utilize this connection, they’ll need to run MySqlConfig as well.
Then, access the data in your DB via the MySql (docs).
Here’s an example of using a preconfigured connection inside one of your own algorithms:
import Algorithmia client = Algorithmia.client() def apply(input): query = "SELECT name, address FROM employees" results = client.algo('util/MySql').pipe(query).result # now use results (a list of lists) in any way you like
|
https://cdn.algorithmia.com/developers/data/mysql
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Check your RF power with an easy to read and build digital display
Note: All the details, instructions, software, and extra pictures can be downloaded from this link.
I love to home brew and repair equipment but my bench lacked a dedicated dummy load. After dismantling my station more than several times to borrow a dummy load I decided it was time to dedicate one to the bench. Sure you can just buy a nice 200-300W dummy load for about $50 but thought it would be more fun to actually build one. After scouring the internet I can across a few nice articles from K4EAA and AI4JI which used common 1K 3W resistors and some mineral oil in a paint can.
I noticed many of these schematics contained a simple BAV21 diode detector to read a relative voltage and calculate the power being sent into the load. We know from Ohms law that we can calculate power by converting the peak voltage across our load to RMS then squaring RMS voltage and dividing by the resistance.
Power = (Vpeak * .707)^2 / Load
Not wanting to look up a table of values or do a math calculation every time made me curious if a micro-controller could give me an easy to read approximation to the power output. During testing I ran across several problems with the diode detector. One was the diode was easily destroyed by accidental shorting to ground (which will cause a high SWR!). Second I discovered the BAV21 diode only has a switching speed of 50nS, which is about 20MHz. With that I noticed large amount of errors in voltage above 15M (21MHz). Lastly I did not want to play with 150 plus volts coming out of the load.
Between these problems I decided to use an RF probe method to detect the peak RF voltage and incorporate the voltage divider as shown in the schematic below. I chose 3 1N4148 diodes in series to handle the peak reverse voltage while allowing higher frequencies. The values of R1 and R2 were chosen to allow up to 250W (or 158Vpeak) of RF input while keeping the output voltage below the maximum 5V our micro-controller can handle. Note: for QRP you can only use 1 diode, change the resistance divider and gain a little more sensitivity!
The parts can be laid out on a piece of proto board. I also later on created a PCB board to keep things clean and have a few spares available upon request.
Attach the “TO LOAD” ends across the Dummy Load while observing ground and keeping the leads as short as possible. The “METER” ends will eventually attach to the micro-controller via an RCA connector and cable. Before connecting to the controller board, test the adapter by transmitting into the load with a known power/SWR meter. A high SWR indicates something is incorrect (like a bad diode or miss-wiring). Optionally you can also use an oscilloscope to measure the RF voltage at the Dummy Load (not shown, also dangerous voltages are present!). Using a DMM you should see positive DC voltages at the output of the adapter. For example using 100W at 7.15Mhz(40M) should be about 3Vdc. Using your highest power (250W max), make sure the voltage does not exceed 5Vdc or is negative. A voltage above +5V or below 0V can destroy the controller IC!! Notice there are some variations between the expected and measured readings which grow at higher power and bands. Much if this is caused by the diode voltage drops and tolerances of our divider resistors. Fortunately we can correct most of this easily in software.
Now that we have a 0-5Vdc signal we can interface it to our micro-controller board. I settled on using an Arduino micro-controller board and LCD keypad shield to keep things simple and allow new people a chance to experiment with micro-controllers. Since there is only a Zener protection diode and RCA connector needed we can just wire those onto the shield module to analog port A1 and ground. Optionally, an audible tuning tool can be added by connecting a piezo buzzer and switch to digital pin #2 which produces a tone proportional to the voltage being read. This can also be substituted with a regular speaker and 220 ohm resistor in series. Power to the board is supplied by a USB type phone charger or via the PC USB cable.
I have also kept the software a minimal design to help beginners get started in programming. Note: the download package contains software with greater functionality.
// Digital Dummy Load Power Meter – Minimal Design
#include <LiquidCrystal.h>
LiquidCrystal lcd(8,9,4,5,6,7); // Pins used on display
float Rratio = 31.3; // Resistor divider ratio (R1+R2)/R2
float Calibration = -10.9; // Calibration adjustment
void setup() {
lcd.begin(16,2); // Set our LCD size to 16×2
}
void loop() {
// Read ADC1 16 times and get an average
int ADCvalue = 0;
for(int x = 0; x < 16; x++) {
ADCvalue += analogRead(1);
delay(1); // wait 1 mS
}
ADCvalue = ADCvalue / 16;
//Calculate Volts read to Watts
float Vpeak = ADCvalue * 0.004888 * Rratio;
float Vrms = (Vpeak – (Vpeak * Calibration / 100)) * .707;
float watts = pow(Vrms,2.0) / 50;
// Print to the LCD
lcd.setCursor(0,0);
lcd.print(watts);
lcd.print(” Watts “);
tone(2,ADCvalue);
delay(500); // Wait 1/2 second
}
The first part of the program sets up the LCD display telling the controller what I/O pins are being used. We also set up our serial port as well as tell our controller any calibration needed and the resistor divider ratio we used from the adapter board. This is calculated as (R1+R2)/R2 or in our case (100K+3.3K)/3.3K = 31.1.
The main loop of the program collects a few average samples of voltage from our load and calculates the voltage read into Watts. After calculating we write the value to the LCD display and serial port, produce a tone for the speaker, and finally wait half a second and start over.
To program the Arduino controller is a simple matter of installing the PC software and plugging the controller board into your PC with a USB cable. A great step by step tutorial can be found on the Arduino web site. Once the software is loaded and USB cable plugged in to our Arduino we can enter our program into the editor. Pressing the check box button will make sure everything is correct. Pressing the upload button will send the program to the controller board and if successful will begin displaying our power readings.
While this simple adapter is not the most accurate device, it compares pretty close with my Bird 43 watt meter +/-5% tolerance, as well as some of my other meters (if not better!). The cost of the whole project was about the same as purchasing a dummy load and there is a digital display as a bonus. Don’t be afraid to experiment, improve, and add on to this project.
Post project notes:
Since building the dummy load and watt meter I have slightly modified it for 2M operation. The mod consists of adding a 100uH inductor between the adapter board and RCA connector. It may not hurt to include the inductor even for HF to reduce stray RF coming down the meter cable. The SWR on 146MHz is about 1.4:1 and the power readings are slightly lower than actual for low powers but provides a basic reference for transmitter testing.
|
https://kc9on.com/archives/digital-dummy-load/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
[C++] – A plain simple sample to write to and read from shared memory
If you have two programs ( or two threads ) running on the same computer, you might need a mechanism to share information amongst both programs or transfer values from one program to the other.
One of the possible solutions is “shared memory”. Most of us know shared memory only from server crashes and the like.
Here is a simple sample written in C to show, how you can use a shared memory object. The sample uses the BOOST libraries. BOOST libraries provide a very easy way of managing shared memory objects independent from the underlying operating system.
#include <boost/interprocess/managed_shared_memory.hpp> #include using namespace boost::interprocess; int main() { // delete SHM if exists shared_memory_object::remove("my_shm"); // create a new SHM object and allocate space managed_shared_memory managed_shm(open_or_create, "my_shm", 1024); // write into SHM // Type: int, Name: my_int, Value: 99 int *i = managed_shm.construct("my_int")(99); std::cout << "Write into shared memory: "<< *i << '\n'; // write into SHM // Type: std::string, Name: my_string, Value: "Hello World" std::string *sz = managed_shm.construct("my_string")("Hello World"); std::cout << "Write into shared memory: "<< *sz << '\n' << '\n'; // read INT from SHM std::pair<int*, std::size_t> pInt = managed_shm.find("my_int"); if (pInt.first) { std::cout << "Read from shared memory: "<< *pInt.first << '\n'; } else { std::cout << "my_int not found" << '\n'; } // read STRING from SHM std::pair<std::string*, std::size_t> pString = managed_shm.find("my_string"); if (pString.first) { std::cout << "Read from shared memory: "<< *pString.first << '\n'; } else { std::cout << "my_string not found" << '\n'; } }
|
https://www.eknori.de/2015/09/15/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Two years ago this month, I started as a developer at The Outline. At the time, the site was just an idea that existed as a series of design mock ups and small prototypes. We had just three months to build a news website with some pretty ambitious design goals, as well as a CMS to create the visually expressive content that the designs demanded. We chose Elixir and Phoenix as the foundation of our website after being attracted to its concurrency model, reliability, and ergonomics.
Over this time, I have gained a major appreciation for Elixir, not only for the productivity it affords me, but of the business opportunities it has opened up for us. In these past two years, Elixir has gone from 1.3 to 1.7, and great improvements have been introduced by the core team:
- GenStage / Flow
- mix format
- Registry
- Syntax highlighting
- IEx debugging enhancements
- Exception.Blame and other stack trace improvements
- Dynamic Supervisor
As I reach this two year mark, I thought others might benefit from an explanation of why I love Elixir so much after two years, what I still struggle with, and some beginner mistakes that I made early on.
90ms is a 90th percentile response time on The Outline. Our post page route is even faster! We got this performance out of the box, without really any tuning or fine-grained optimizations. For other routes that do not hit the database, we see response times measures in microseconds. This speed allows us to build features that I wouldn’t have even considered possible in other languages.
Elixir is so fast that we haven’t had much need for CDN or service level caching. It’s been a luxury to not have to spend time debugging caching issues between Redis and memcached, which are issues that have kept me up into the wee hours of the morning in past roles. The lack of public cache opens up the path for dynamic content and user-based personalization on initial page load.
While we don’t cache routes at the CDN, we do cache some expensive database queries. For that we use light in-memory caching via ConCache, a wonderful library by Saša Jurić.
It seems like people get started with Phoenix writing JSON apis, and leave the HTML to Preact and other front end frameworks. A lot of the raw site performance we get from Elixir and Phoenix is from its ability to render HTML extremely quickly, on the order of microseconds. Phoenix allows us to have really fast server-rendered pages, and then we let Javascript kick in to add dynamic features. Before reaching for Vue.js or Svelte, consider going old school and rendering your HTML on the server; you might be delighted.
ExUnit gives you so much out of the box. In most of the other languages that I’ve used, testing frameworks are third-party, and setup is often a pain. ExUnit comes bundled with a code coverage tool, and its assertion diffs keep improving! Not only that, you can
mix test --slowestto find your slowest tests, or
mix test --failedto rerun only the tests that failed the last run.
Doctests are easily my favorite part of ExUnit. For the uninitiated, doctests are tests that you write inline in your documentation. They get compiled and run when you do mix test. The power here is two-fold; you get code examples right next to the definition of your code and you know that the examples work.
Having a consistent way to read docs across packages makes things really easy to find. I spent some time taking a data science and machine learning course in Python last month, and I realized exactly how spoiled I’ve been with Elixir documentation. It’s hard to measure the value of a consistent, familiar, and pervasive documentation system. The latest distillery release excepted, every Elixir library’s documentation has the same look and feel..
Think of Phoenix Channels as controllers for Websockets. The socket registers topics which are analogous to a router. At The Outline, we were able to remove thousands of lines of JavaScript by moving code into the Channel. Moving mutable JavaScript into Elixir was a great feeling. It’s always been our goal to ship as little code to the client as possible, and keeping user state in Channels facilitates that in a way that I would not have considered if I was using Node.js or Ruby. The memory overhead of channels has been relatively low, and we didn’t need to make any changes to our infrastructure to support them.
Elixir has been a friendly and helpful community these past two years. I’ve received a ton of advice on the Elixir Slack channel when I’ve asked for help. I’ve also enjoyed attending and speaking at the NYC Elixir Meetup, as well as the Empex and Empex West conferences. I’ve met some great people through these events, including several leaders in the community, and I hope to meet more passionate people in the future!
I’d like to also call out both the ElixirTalk (hi Chris and Desmond) and Elixir Outlaws podcasts, which are fantastic and do a really great job of breaking down interesting problems in the ecosystem.
Sometimes you change a line in a controller or a view, and you end up with a stack trace in your 1000 line module that starts at line 1. The problem? Meta-programming! Despite all the great things that meta-programming gives us in terms of ergonomics, its makes certain types of exceptions really hard to pinpoint. Luckily, not all stack traces are this way, but it can be extremely frustrating when the stack trace leaves you empty handed.
Asynchronous and concurrent code is notoriously hard to debug. What’s harder to debug is asynchronous and concurrent code that you haven’t written. We have some lingering error messages that get printed during random test runs. Attempts to debug them have been futile, so they appear to be heisenbugs. I have a suspicion that our particular issue is with Phoenix Channels and Ecto Sandbox mode, but I haven’t quite narrowed it down. Please let me know if you have!
While I’m really comfortable working with changesets and writing join queries in Ecto, breaking down my code for associations is still hard. Its pretty straightforward when dealing with simple associations, but when you have a data model that involves multiple entities, and you want to create new entities while associating them to existing entities, some things break down for me.
What still does not feel natural to me is where to place code that deals with the
put_assoc and
cast_assoc family of functions. My first tendency would be to put it in the
changeset/2 function in the schema, but you do not always want that logic. Of course, you can have multiple changeset functions, but I haven’t found the right balance for that either. What I’ve started doing is moving association code outside of the schema and changeset, and into the bounded context thats building the association.
What really drew me into Elixir at first was how wonderful it felt to pattern match in function heads. The utility of multiple function heads,
if as an expression rather than a statement, and immutable data structures had me hooked really fast (especially coming from Javascript).
What ended up happening is that I would pattern match at every single opportunity. Without a static type system, pattern matching felt like a friendlier replacement, and I wanted to make use of it at every corner. The problem is that it’s not a type system, and using it as such has drawbacks that are not immediately obvious until you write a certain amount of Elixir code. When you pattern-match gratuitously, you over-specify your code, and you miss opportunities to apply generic code to wider domains, and make that code more difficult to refactor in the future.
While my love of pattern-matching has not gone away, it has become clearer to me when to pattern-match, and more importantly, what level of specificity should I pattern match on. Do I need to pattern-match on this struct, or will a map suffice? Does this private function need to pattern match it arguments when the shape is already clear in its only caller? These nuances become clearer as you write more code, and deciding when and when not to pattern-match is a matter of preference and style.
This is a problem that’s closely related with the desire to pattern-match. Once you start rendering more than the Hello World example of Phoenix, you’re gonna have to start passing data through nested views and templates to fully render a page. When you start passing data down, tend towards being additive rather than regressive.
# Here we’re possibly over pattern matching and over specifying.
# If we want to pass more data down in the future, we have to
# change this function in addition to its caller
def render(“parent.html”, %{content: content}) do
render(“child.html”, %{content: content, extra: data})
end
# This way is less restrictive, and makes maintenance easier
# in the future if we decide to pass more data
def render(“parent.html”, params) do
render(“child.html”, Map.put(params, :extra, :data))
end
When starting to learn about Elixir / Erlang, it’s so tempting to start writing GenServers, Tasks, processes, etc for the problem at hand. Before you do, please read Saša’s To spawn, or not to spawn?, which breaks down when you should reach for processes and when modules / functions are good enough.
Knowing when to implement a protocol, such as Phoenix’s HTML.Safe protocol, can be extremely powerful. I wrote a bit about protocols in my last blog post, Beyond Functions in Elixir: Refactoring for Maintainability. In that post, I walk through implementing a custom Ecto.Type for Markdown, and then automatically converting it to HTML in your templates via protocols.
As soon as you get data from the external world, cast it into a well known shape. For this, Ecto.Changeset is your best friend. When I first started out, I resisted using changesets, as there is a bit of a learning curve, and it seemed easier to shove data right into the database. Don’t do this.
Ecto.Changeset is such a wonderful tool that will save you so much time, and there are many ways to learn it. I haven’t read the Ecto book, but I do recommend reading through the documentation as well as the free What’s new in Ecto 2.1?. José Valim also wrote an excellent blog post describing how to use Ecto Schemas and Changesets to map data between different domains, without those domains necessarily being backed by a database.
- nerves — Craft and deploy bulletproof embedded software in Elixir
- raft — An Elixir implementation of the raft consensus protocol
- Property testing — via PropEr and StreamData
- LiveView — Upcoming Phoenix compatible library from Chris McCord that blends Phoenix Channels and reactive html
Well, thank you for reading this far! These past two years have been a wonderful time. I’m excited to get more involved in the community, and to write more! Say hi on twitter and let me know what else you’d like to hear about!
|
https://www.tefter.io/bookmarks/52076/readable
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
If you are new to Angular, you’ll be happy to know that it has its own dependency injection framework. This framework makes it easier to apply the dependency inversion principle, which is sometimes boiled down to the axiom “depend upon abstractions, not concretions.” In many programming languages, including TypeScript and C#, this could be translated by “depend upon interfaces, not classes.”
What Not to Do
So what does life look like without the warm glow of the dependency inversion principle? Let's have a look. The following
AuthServiceImpl class has a
isLoggedIn method that other classes in the app can depend upon:
class AuthServiceImpl { constructor() {} isLoggedIn() { return true; } }
Now, an instance of
AuthServiceImpl can be created to make the
isLoggedIn method available to other classes in the app. Perhaps we have a
HomeComponent view that uses it to determine if it should display a login button:
class HomeComponent { authService = new AuthServiceImpl(); constructor() {} shouldDisplayLoginButton() { return !this.authService.isLoggedin(); } }
HomeComponent creates and directly depends on
AuthServiceImpl and this is problematic and should be avoided for the following reasons:
- If we ever wanted to use a different implementation instead of
AuthServiceImpl, we would have to modify
HomeComponent.
- If
AuthServiceImplitself had dependencies, they would have to be configured inside of
HomeComponent. In any non-trivial project, with other components and services depending on
AuthServiceImpl, such configuration code would quickly become scattered across the app.
- Good luck unit testing
HomeComponent. We would normally use a mock or stub
AuthServiceImplclass, which is impossible to do here.
What to Do
Use dependency injection, that's what. Here's how it works in three simple steps:
- Use an interface to abstract the dependency implementation.
- Register the dependency with Angular's dependency injection framework.
- Angular injects the dependency into the constructor of whatever class that uses it.
So let's modify our code to make this happen.
Use an interface
First, we write an interface:
interface AuthService { isLoggedIn(): boolean; }
This interface is then implemented by our concrete class:
class AuthServiceImpl implements AuthService { constructor() {} isLoggedIn() { return true; } }
Now, instead of depending directly on
AuthServiceImpl as we were before, our
HomeComponent will depend on the
AuthService interface.
class HomeComponent { constructor(private authService: AuthService) {} shouldDisplayLoginButton() { return !this.authService.isLoggedin(); } }
Notice how the
new keyword is gone from our class. That's what we generally want to see. Like Steve Smith points out in his blog post "new is glue" and we don't want to glue our
HomeComponent to a particular implementation of
AuthService.
Good, now we need to register our dependency with Angular's dependency injection framework.
Register the dependency with Angular
In this step, we need to tell Angular what to do when it is instantiating one of our classes and it notices that the constructor is expecting to be passed an
AuthService type.
Without a dependency injection framework, we would have to instantiate all of our app's classes manually. Something like this:
const authService = new AuthServiceImpl(); const homeComponent = new HomeComponent(authService);
This is an incredibly simple example, but imagine how much work this would be for a large app with tens, hundreds or thousands of classes.
Thankfully, Angular takes care of that for us. To make this happen, we first need to tell Angular that our
AuthServiceImpl is expected to be injected anywhere in our app and as such, we want it to be available for injection at the root level and below:
@Injectable() class AuthServiceImpl implements AuthService { constructor() {} isLoggedIn() { return true; } }
So we've now marked out
AuthServiceImpl as a service that can be injected, but Angular can't actually inject it anywhere until we configure an Angular dependency injector with a provider of that service. The most common place to do that is in the
NgModule that declares the component using our dependency. Let's do that:
@NgModule({ declarations: [HomeComponent], providers: [ { provide: AuthService, useClass: AuthServiceImpl } ], }) export class HomeModule { }
And... this won't work. It looks like it should, but it won't. What we are telling Angular here is that, when it is instantiating a class that has a dependency of type
AuthService in its constructor, it should pass in an instance of
AuthServiceImpl. And this seems right, especially if you are used to other backend dependency injection frameworks like ASP.NET's where you would do something like
services.AddSingleton<IAuthService, AuthService>();
The big difference is that interfaces do not exist at runtime in JavaScript. They are a compile time construct of TypeScript. Therefore Angular's dependency injection system can't use an interface as a token at runtime. So how do we deal with this? We need to make a custom injection token:
import { InjectionToken } from '@angular/core'; export const AUTH_SERVICE = new InjectionToken<AuthService>('AuthService');
We can then use that token in the provider:
@NgModule({ declarations: [HomeComponent], providers: [ { provide: AUTH_SERVICE, useClass: AuthServiceImpl } ], }) export class HomeModule { }
And in
HomeComponent using the
Inject property decorator :
class HomeComponent { constructor(@Inject(AUTH_SERVICE) private authService: AuthService) {} shouldDisplayLoginButton() { return this.authService.isLoggedin(); } }
Conclusion
Using Angular's powerful dependency injection system, it is possible to write decoupled, testable code, that follows the dependency inversion principle. Remember, depend on interfaces instead of concrete classes, and injection tokens are your friends.
|
https://blog.snowfrog.dev/dependency-inversion-in-angular/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Intro
We aim to predict the winner of the FIFA world cup solely based on data. The method applied is not fancy at all, but it should do the trick to get some neat results (spoiler alert: Germany wins!). We use three datasets obtained by Kaggle which contain the outcome of specific pairings between teams, rank, points and the weighted point difference with the opponent. Then, we create a model to predict the outcome of each match during the FIFA world cup 2018. To make the results more appealing, we translate the outcome probabilities to fair odds.
Data
The first dataset stems from Tadhg Fitzgerald and contains all available FIFA men’s international soccer rankings from August 1993 to April 2018. The rankings and points have been scraped from the official FIFA website. The second dataset used includes the results of additional 40k international football matches from the very first official match in 1972 up to 2018. Again, the games are strictly men’s full internationals and stem from Mart Jüriso. his will be used to quantify the effect of the difference in ranks, point and current rank of the international teams on a match’s outcome. As we aim to predict the result of the ongoing FIFA world cup, we use a third data set from Nuggs to get its matches.
import numpy as np import pandas as pd from matplotlib import pyplot as plt #import datasets my_ranking = pd.read_csv('./data_i_o/input_data/fifa-international-soccer-mens-ranking-1993now/fifa_ranking.csv') my_ranking = my_ranking.loc[:,['rank', 'country_full', 'country_abrv', 'cur_year_avg_weighted', 'rank_date', 'two_year_ago_weighted', 'three_year_ago_weighted']] #rename country names my_ranking = my_ranking.replace({"IR Iran": "Iran"}) my_ranking['weighted_points'] = my_ranking['cur_year_avg_weighted'] + my_ranking['two_year_ago_weighted'] + my_ranking['three_year_ago_weighted'] my_ranking['rank_date'] = pd.to_datetime(my_ranking['rank_date']) #again, rename country names my_matches = pd.read_csv('./data_i_o/input_dat/international-football-results-from-1872-to-2017/results.csv') my_matches = my_matches.replace({'Germany DR': 'Germany', 'China': 'China PR'}) my_matches['date'] = pd.to_datetime(my_matches['date']) #read data world_cup_2018 = pd.read_csv('./data_i_o/input_dat/fifa-worldcup-2018-dataset/World Cup 2018 Dataset.csv') world_cup_2018= world_cup_2018.loc[:, ['Team', 'Group', 'First match \nagainst', 'Second match\n against', 'Third match\n against']] world_cup_2018 = world_cup_2018.dropna(how='all') #rename country names world_cup_2018 = world_cup_2018.replace({"IRAN": "Iran", "Costarica": "Costa Rica", "Porugal": "Portugal", "Columbia": "Colombia", "Korea" : "Korea Republic"}) world_cup_2018 = world_cup_2018.set_index('Team')
Feature Engineering
There is no magic here, we keep things as simple as possible. First, we join the rankings of each team and extract the pairwise point and rank differences. From an agnostic point of view, friendly games should be harder to predict as players have no high incentive to reach their performance upper bound and try instead to avoid injuries. Hence, to not confuse our model in the next, we also mark friendly games in our dataset. Resting days between matches might even hide a useful pattern for players’ performance which should be positively correlated with the probability of winning a game (unless you play for Mexico!). As the last step, we additionally hot-encode the participant countries.
#get daily rankings my_rankings = my_rankings.set_index(['rank_date'])\ .groupby(['country_full'], group_keys=False)\ .resample('D').first()\ .fillna(method='ffill')\ .reset_index() #merge the my_matches = my_matches.merge(my_rankings, left_on=['date', 'home_team'], right_on=['rank_date', 'country_full']) my_matches = my_matches.merge(my_rankings, left_on=['date', 'away_team'], right_on=['rank_date', 'country_full'], suffixes=('_home', '_away')) #feature engineering my_matches ['rank_diff'] = my_matches ['rank_home'] - my_matches ['rank_away'] my_matches ['average_rank'] = (my_matches ['rank_home'] + my_matches ['rank_away'])/2 my_matches ['point_diff'] = my_matches ['weighted_points_home'] - my_matches ['weighted_points_away'] my_matches ['score_diff'] = my_matches ['home_score'] - my_matches ['away_score'] my_matches ['is_won'] = my_matches ['score_difference'] > 0 #draw=lost my_matches ['is_stake'] = my_matches ['tournament'] != 'Friendly' #rest days max_rest = 30 my_matches ['rest_days'] = my_matches .groupby('home_team').diff()['date'].dt.days.clip(0,max_rest).fillna(max_rest) #hot encode participants my_matches ['wc_participant'] = my_matches ['home_team'] * my_matches ['home_team'].isin(world_cup_2018.index.tolist()) my_matches ['wc_participant'] = my_matches ['wc_participant'].replace({'':'Other'}) my_matches = my_matches .join(pd.get_dummies(my_matches ['wc_participant']))
Methodology
We use a simple logistic model to keep everything. If the feature engineering part is well done, you can also beat some fancy deep learning networks with straightforward (linear) models.
from sklearn import linear_model from sklearn import ensemble from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score from sklearn.pipeline import Pipeline from sklearn.preprocessing import PolynomialFeatures #create the dataset X, y = my_matches.loc[:,['average_rank', 'rank_diff', 'point_diff', 'is_stake']], my_matches['is_won'] X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=1234) logreg = linear_model.LogisticRegression(C=1e-5) features = PolynomialFeatures(degree=2) model = Pipeline([ ('polynomial_features', features), ('logistic_regression', logreg) ]) model = model.fit(X_train, y_train) # figures fpr, tpr, _ = roc_curve(y_test, model.predict_proba(X_test)[:,1]) plt.figure(figsize=(15,5)) ax = plt.subplot(1,3,1) ax.plot([0, 1], [0, 1], 'k--') ax.plot(fpr, tpr) ax.set_title('AUC score is {0:0.2}'.format(roc_auc_score(y_test, model.predict_proba(X_test)[:,1]))) ax.set_aspect(1) ax = plt.subplot(1,3,2) cm = confusion_matrix(y_test, model.predict(X_test)) ax.imshow(cm, cmap='Blues', clim = (0, cm.max())) ax.set_xlabel('Prediction') ax.set_title('Out of sample performance') ax = plt.subplot(1,3,3) cm = confusion_matrix(y_train, model.predict(X_train)) ax.imshow(cm, cmap='Blues', clim = (0, cm.max())) ax.set_xlabel('Prediction') ax.set_title('Out of sample performance') pass
The out of sample results are quite satisfying with an AUC score of 0.735. The results also suggest that teams with lower ranks are not very well predictable. The same applies to matches with very similar ranks (which seems to be reasonable).
Forecasting the World Cup 2018
First we tackle the group rounds:
#if the winning probability margin is smaller than 0.05 then we classify the outcome as a draw margin = 0.05 #guess what - world cup rankings world_cup_2018_rankings = rankings.loc[(my_rankings['rank_date'] == my_rankings['rank_date'].max()) & my_rankings['country_full'].isin(world_cup_2018.index.unique())] world_cup_2018_rankings = world_cup_2018_rankings.set_index(['country_full']) from itertools import combinations opponents = ['First match \nagainst', 'Second match\n against', 'Third match\n against'] world_cup_2018['points'] = 0 world_cup_2018['total_prob'] = 0 for group in set(world_cup_2018['Group']): print('Group {}:'.format(group)) for home, away in combinations(world_cup_2018.query('Group == "{}"'.format(group)).index, 2): print("{} vs. {}: ".format(home, away), end='') row = pd.DataFrame(np.array([[np.nan, np.nan, np.nan, True]]), columns=X_test.columns) home_rank = world_cup_2018_rankings.loc[home, 'rank'] home_points = world_cup_2018_rankings.loc[home, 'weighted_points'] opp_rank = world_cup_2018_rankings.loc[away, 'rank'] opp_points = world_cup_2018_rankings.loc[away, 'weighted_points'] row['average_rank'] = (home_rank + opp_rank) / 2 row['rank_difference'] = home_rank - opp_rank row['point_difference'] = home_points - opp_points home_win_prob = model.predict_proba(row)[:,1][0] world_cup_2018.loc[home, 'total_prob'] += home_win_prob world_cup_2018.loc[away, 'total_prob'] += 1-home_win_prob points = 0 if home_win_prob <= 0.5 - margin: print("{} wins with {:.2f}".format(away, 1-home_win_prob)) world_cup_2018.loc[away, 'points'] += 3 if home_win_prob > 0.5 - margin: points = 1 if home_win_prob >= 0.5 + margin: points = 3 world_cup_2018.loc[home, 'points'] += 3 print("{} wins with {:.2f}".format(home, home_win_prob)) if points == 1: print("Draw") world_cup_2018.loc[home, 'points'] += 1 world_cup_2018.loc[away, 'points'] += 1
And these are the prediction results for the group games:
Group B: Portugal vs. Spain: Draw Portugal vs. Morocco: Portugal wins with 0.64 Portugal vs. Iran: Portugal wins with 0.64 Spain vs. Morocco: Spain wins with 0.61 Spain vs. Iran: Spain wins with 0.61 Morocco vs. Iran: Draw Group C: France vs. Australia: France wins with 0.63 France vs. Peru: Draw France vs. Denmark: Draw Australia vs. Peru: Peru wins with 0.65 Australia vs. Denmark: Denmark wins with 0.71 Peru vs. Denmark: Draw Group F: Germany vs. Mexico: Germany wins with 0.62 Germany vs. Sweden: Germany wins with 0.65 Germany vs. Korea Republic: Germany wins with 0.74 Mexico vs. Sweden: Draw Mexico vs. Korea Republic: Mexico wins with 0.65 Sweden vs. Korea Republic: Sweden wins with 0.63 Group H: Poland vs. Senegal: Poland wins with 0.63 Poland vs. Colombia: Draw Poland vs. Japan: Poland wins with 0.75 Senegal vs. Colombia: Colombia wins with 0.62 Senegal vs. Japan: Senegal wins with 0.59 Colombia vs. Japan: Colombia wins with 0.71 Group G: Belgium vs. Panama: Belgium wins with 0.72 Belgium vs. Tunisia: Belgium wins with 0.59 Belgium vs. England: Belgium wins with 0.59 Panama vs. Tunisia: Tunisia wins with 0.72 Panama vs. England: England wins with 0.73 Tunisia vs. England: England wins with 0.54 Group E: Brazil vs. Switzerland: Draw Brazil vs. Costa Rica: Brazil wins with 0.61 Brazil vs. Serbia: Brazil wins with 0.64 Switzerland vs. Costa Rica: Switzerland wins with 0.58 Switzerland vs. Serbia: Switzerland wins with 0.63 Costa Rica vs. Serbia: Draw Group D: Argentina vs. Iceland: Argentina wins with 0.61 Argentina vs. Croatia: Argentina wins with 0.58 Argentina vs. Nigeria: Argentina wins with 0.71 Iceland vs. Croatia: Draw Iceland vs. Nigeria: Iceland wins with 0.62 Croatia vs. Nigeria: Croatia wins with 0.62 Group A: Russia vs. Saudi Arabia: Saudi Arabia wins with 0.56 Russia vs. Egypt: Egypt wins with 0.67 Russia vs. Uruguay: Uruguay wins with 0.82 Saudi Arabia vs. Egypt: Egypt wins with 0.65 Saudi Arabia vs. Uruguay: Uruguay wins with 0.81 Egypt vs. Uruguay: Uruguay wins with 0.7
Second, we tackle the knock-out games:
pairing = [0,3,4,7,8,11,12,15,1,2,5,6,9,10,13,14] world_cup_2018 = world_cup_2018.sort_values(by=['Group', 'points', 'total_prob'], ascending=False).reset_index() next_round_wc = world_cup_2018.groupby('Group').nth([0, 1]) # select the top 2 next_round_wc = next_round_wc.reset_index() next_round_wc = next_round_wc.loc[pairing] next_round_wc = next_round_wc.set_index('Team') finals = ['Round_of_16', 'Quarter-Finals', 'Semi-Finals', 'Final'] labels = list() odds = list() for f in finals: print("{}:".format(f)) iterations = int(len(next_round_wc) / 2) winners = [] for i in range(iterations): home = next_round_wc.index[i*2] away = next_round_wc.index[i*2+1] print("{} vs. {}: ".format(home, away), end='') row = pd.DataFrame(np.array([[np.nan, np.nan, np.nan, True]]), columns=X_test.columns) home_rank = world_cup_rankings_2018.loc[home, 'rank'] home_points = world_cup_rankings_2018.loc[home, 'weighted_points'] opp_rank = world_cup_rankings_2018.loc[away, 'rank'] opp_points = world_cup_rankings_2018.loc[away, 'weighted_points'] row['average_rank'] = (home_rank + opp_rank) / 2 row['rank_difference'] = home_rank - opp_rank row['point_difference'] = home_points - opp_points home_win_prob = model.predict_proba(row)[:,1][0] if model.predict_proba(row)[:,1] <= 0.5: print("{0} wins with probability {1:.2f}".format(away, 1-home_win_prob)) winners.append(away) else: print("{0} wins with probability {1:.2f}".format(home, home_win_prob)) winners.append(home) labels.append("{}({:.2f}) vs. {}({:.2f})".format(world_cup_rankings_2018.loc[home, 'country_abrv'], 1/home_win_prob, world_cup_rankings_2018.loc[away, 'country_abrv'], 1/(1-home_win_prob))) odds.append([home_win_prob, 1-home_win_prob]) next_round_wc = next_round_wc.loc[winners] print("\n")
And these are the predictions for the knock-out games:
Round_of_16: Uruguay vs. Spain: Spain wins with probability 0.56 Denmark vs. Croatia: Denmark wins with probability 0.60 Brazil vs. Mexico: Brazil wins with probability 0.63 Belgium vs. Colombia: Belgium wins with probability 0.57 Egypt vs. Portugal: Portugal wins with probability 0.83 France vs. Argentina: Argentina wins with probability 0.54 Switzerland vs. Germany: Germany wins with probability 0.61 England vs. Poland: Poland wins with probability 0.55 Quarter-Finals: Spain vs. Denmark: Denmark wins with probability 0.51 Brazil vs. Belgium: Belgium wins with probability 0.53 Portugal vs. Argentina: Portugal wins with probability 0.52 Germany vs. Poland: Germany wins with probability 0.56 Semi-Finals: Denmark vs. Belgium: Belgium wins with probability 0.58 Portugal vs. Germany: Germany wins with probability 0.57 Final: Belgium vs. Germany: Germany wins with probability 0.59
Network visualisation of the Knock-Out games:
import networkx as nx import pydot from networkx.drawing.nx_pydot import graphviz_layout node_sizes = pd.DataFrame(list(reversed(odds))) scale_factor = 0.3 # for visualization G = nx.balanced_tree(2, 3) pos = graphviz_layout(G, prog='twopi', args='') centre = pd.DataFrame(pos).mean(axis=1).mean() plt.figure(figsize=(10, 10)) ax = plt.subplot(1,1,1) # add circles circle_positions = [(230, 'black'), (180, 'blue'), (120, 'red'), (60, 'yellow')] [ax.add_artist(plt.Circle((centre, centre), cp, color='grey', alpha=0.2)) for cp, c in circle_positions] nx.draw(G, pos, node_color=node_sizes.diff(axis=1)[1].abs().pow(scale_factor), node_size=node_sizes.diff(axis=1)[1].abs().pow(scale_factor)*2000, alpha=1, cmap='Reds', edge_color='black', width=10, with_labels=False) shifted_pos = {k:[(v[0]-centre)*0.9+centre,(v[1]-centre)*0.9+centre] for k,v in pos.items()} nx.draw_networkx_labels(G, pos=shifted_pos, bbox=dict(boxstyle="round,pad=0.3", fc="white", ec="black", lw=.5, alpha=1), labels=dict(zip(reversed(range(len(labels))), labels))) texts = ((10, 'Best 16', 'black'), (70, 'Quarter-\nFinal', 'blue'), (130, 'Semi-Final', 'red'), (190, 'Final', 'yellow')) [plt.text(p, centre+20, t, fontsize=12, color='grey', va='center', ha='center') for p,t,c in texts] plt.axis('equal') plt.title('Knock-Out Games \n Predictions', fontsize=20) plt.show()
|
http://wp.firrm.de/index.php/2018/06/16/who-is-going-to-win-the-football-worldcup-2018/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
See the LogMF or LogSF classes. LogMF used MessageFormat type format specifiers ("{1}") while
LogSF uses SLF4J format specifiers ("{}") .
varargs were introduced in Java 5 and have an unavoidable array construction/destruction expense
even if the level is not reached. The LogMF and LogSF classes approximate the usability of
varargs by having a lot of different signatures.
On Apr 4, 2011, at 10:27 AM, <[email protected]> <[email protected]>
wrote:
> Hi,
> why don't we add a method in the Logger class or at least a Util with a method
like the following one in order to avoid the cost of the message build.
> In this way we'll not have to build the message unless the priority is enabled
for the logger.
>
> I'll wait for your thoughts.
>
> Regards,
>
> import org.apache.log4j.Logger;
> import org.apache.log4j.Priority;
>
> public class LoggerUtil {
>
> static public void log(Logger logger, Priority priority, Object... objs){
>
> if (logger.isEnabledFor(priority) ){
> StringBuffer sb = new StringBuffer();
> for(Object obj : objs){
> sb.append( obj.toString() );
> }
> logger.log(priority, sb.toString() );
> }
> }
> }
>
>
> Cusano Pineda, Gerardo H.
> Information System Engineer
> Accenture – System Integration & Technology
> Buenos Aires, Argentina
> * [email protected]
>
> This message is for the designated recipient only and may contain privileged, proprietary,
or otherwise private information. If you have received it in error, please notify the sender
immediately and delete the original. Any other use of the email by you is prohibited.
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
|
http://mail-archives.us.apache.org/mod_mbox/logging-log4j-dev/201104.mbox/%[email protected]%3E
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
V:
npm install ts-loader vue vue-class-component vue-loader vue-style-loader vue-template-compiler css-loader --save.!"); } }:
import Vue from "vue"; import SimpleTsComponent from "./simple-ts-component"; var app = new Vue({ el: "#app", components: { SimpleTsComponent } });
And use it in the index.html file:
<div id="app"> <simple-ts-component></simple-ts-component> <global-component></global-component> </div>:
} });.
|
https://javascripttuts.com/vue-as-an-angular-alternative-for-ionic-the-components/
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
NAME
Data::UUID - Globally/Universally Unique Identifiers (GUIDs/UUIDs)
SEE INSTEAD?
The module Data::GUID provides another interface for generating GUIDs. Right now, it relies on Data::UUID, but it may not in the future. Its interface may be just a little more straightforward for the average Perl programer.
SYNOPSIS
use Data::UUID; $ug = Data::UUID->new; $uuid1 = $ug->create(); $uuid2 = $ug->create_from_name(<namespace>, <name>); $res = $ug->compare($uuid1, $uuid2); $str = $ug->to_string( $uuid ); $uuid = $ug->from_string( $str );
DESCRIPTION
This module provides a framework for generating v3. (See RFC 4122.). In all methods,
<namespace> is a UUID and
<name> is a free form string.
# # Note that digits A-F are capitalized, which is contrary to rfc4122 $ug->create_str(); $ug->create_from_name_str(<namespace>, <name>); # creates UUID string as a hex string, # such as: 0x4162F7121DD211B2B17EC09EFE1DC403 # Note that digits A-F are capitalized, which is contrary to rfc4122 (using upper, rather than lower, case letters) ; # this creates a new UUID in string form, based on the standard namespace # UUID NameSpace_URL and name "" $ug = Data::UUID->new; (RFC 4122)
|
https://metacpan.org/pod/distribution/Data-UUID/UUID.pm
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
5.6.0
released on 2019-06-13
694 Added possibility in JaVers MongoDB starter to configure a dedicated Mongo database, which is used by Javers. See JaversRepository configuration.
775 Fixed issue: Spring Boot stops when SQL Schema Manager can’t establish the connection.
851 Fixed exception: java.lang.ClassCastException: class org.javers.core.metamodel.property.MissingProperty cannot be cast to class java.util.List.
5.5.2
released on 2019-05-23
5.5.1
released on 2019-05-18
5.5.0
released on 2019-05-18
Breaking changes in CustomPropertyComparator and constructors of all
PropertyChangesubclasses.
CustomPropertyComparatorinterface is changed from:
public interface CustomPropertyComparator<T, C extends PropertyChange> { Optional<C> compare(T left, T right, GlobalId affectedId, Property property); ... }
to:
public interface CustomPropertyComparator<T, C extends PropertyChange> { Optional<C> compare(T left, T right, PropertyChangeMetadata metadata, Property property); ... }
PropertyChangeobjects that are produced by comparators now accept
PropertyChangeMetadatain constructors, for example:
public class CustomBigDecimalComparator implements CustomPropertyComparator<BigDecimal, ValueChange> { ... @Override public Optional<ValueChange> compare(BigDecimal left, BigDecimal right, PropertyChangeMetadata metadata, Property property) { if (equals(left, right)){ return Optional.empty(); } return Optional.of(new ValueChange(metadata, left, right)); } ... }
830 & 834 Important new feature in PropertyChange. It gained the new enum, which allows to distinguish between ordinary
nullvalues and the case when a property is added/removed after refactoring:
/** * When two objects being compared have different classes, * they can have different sets of properties. * <br/> * When both objects have the same class, all changes have PROPERTY_VALUE_CHANGED type. */ public enum PropertyChangeType { /** * When a property of the right object is absent in the left object. */ PROPERTY_ADDED, /** * When a property of the left object is absent in the right object. */ PROPERTY_REMOVED, /** * Regular value change — when a property is present in both objects. */ PROPERTY_VALUE_CHANGED }
The new enum can be checked using these four new methods in
PropertyChange:
public abstract class PropertyChange extends Change { ... public PropertyChangeType getChangeType() { return changeType; } public boolean isPropertyAdded() { return changeType == PropertyChangeType.PROPERTY_ADDED; } public boolean isPropertyRemoved() { return changeType == PropertyChangeType.PROPERTY_REMOVED; } public boolean isPropertyValueChanged() { return changeType == PropertyChangeType.PROPERTY_VALUE_CHANGED; } }
837 Fixed bug in SQL
JaversRepositoryfor Oracle and MS SQL databases.
5.4.0
released on 2019-05-11
- 625 Composite-Id is now available in JaVers. Multiple properties can be mapped with
@Id, and the
localIdis constructed as a Map.
class Person { @Id String name @Id String surname @Id LocalDate dob int data } def "should support Composite Id assembled from Values"(){ given: def first = new Person(name: "mad", surname: "kaz", dob: LocalDate.of(2019,01,01), data: 1) def second = new Person(name: "mad", surname: "kaz", dob: LocalDate.of(2019,01,01), data: 2) when: javers.commit("author", first) javers.commit("author", second) def snapshot = javers.getLatestSnapshot( [ name: "mad", surname: "kaz", dob: LocalDate.of(2019,01,01) ], Person).get() then: snapshot.globalId.value().endsWith("Person/2019,1,1,mad,kaz") snapshot.globalId.cdoId == "2019,1,1,mad,kaz" snapshot.getPropertyValue("name") == "mad" snapshot.getPropertyValue("surname") == "kaz" snapshot.getPropertyValue("dob") == LocalDate.of(2019,01,01) snapshot.getPropertyValue("data") == 2 snapshot.changed == ["data"] }
5.3.6
released on 2019-04-10
5.3.5
released on 2019-04-08
5.3.4
released on 2019-03-29
5.3.3
released on 2019-03-26
5.3.2
released on 2019-03-20
810 Fixed issue when comparing Sets with nested Value Objects with
@DiffIgnore.
806 Fixed bug in schema management on MS SQL Server.
5.3.1
released on 2019-03-16
5.3.0
released on 2019-03-16
5.2.6
released on 2019-03-12
5.2.5
released on 2019-03-10
5.2.4
released on 2019-02-26
- 788 Added experimental support for Amazon DocumentDB, a document database compatible with MongoDB.
If you are using our MongoDB Starter, enable DocumentDB flavour in your
application.yml:
javers: documentDbCompatibilityEnabled: true
5.2.2
released on 2019-02-23
- 789 Fixed “error calling Constructor for CustomType” when CustomPropertyComparator is registered for Value’s parent class.
5.2.0
released on 2019-02-16
751 New aspect annotation
@JaversAuditableDeletefor triggering
commitShallowDelete()with each method argument.
784 Fixed bug in handling
SortedSet.
753 Fixed
MANAGED_CLASS_MAPPING_ERRORafter refactoring Entity type to Value type.
769 Fixed NPE in
CustomBigDecimalComparator.
782 Fixes NPE after upgrading Javers to 5.1. NPE was thrown when committing entities created prior to 5.1.
5.1.3
released on 2019-01-25
- 777 Fixed bug in persisting
commitDateInstanton modern JVM’s where
Instanthas microseconds precision. Removed dependency on
javax.annotation.PostConstructannotation, which is not available on OpenJDK.
5.1.2
released on 2019-01-07
5.1.0
released on 2018-12-30
/** * Commit creation timestamp in UTC. * <br/><br/> * * Since 5.1, commitDateInstant is persisted in JaversRepository * to provide reliable chronological ordering, especially when {@link CommitIdGenerator#RANDOM} * is used. * * <br/><br/> * * Commits persisted by JaVers older then 5.1 * have commitDateInstant guessed from commitDate and current {@link TimeZone} * * @since 5.1 */ public Instant getCommitDateInstant() { return commitMetadata.getCommitDateInstant(); }
761 Fixed
DateTimeParseExceptionwhen deserializing Snapshots of a refactored class.
762 Fixed Snapshots sorting in MongoRepository when
CommitIdGenerator.RANDOMis used.
5.0.3
released on 2018-12-23
45 Fixed bug in SQL
SchemaInspectorin
polyjdbcwhen JaVers’ tables are created in public schema.
Added more descriptive message in
NOT_INSTANCE_OFexception.
5.0.1
released on 2018-12-05
- Fixes for
CustomPropertyComparatorcombined with
LEVENSHTEIN_DISTANCEand
AS_SETalgorithms.
5.0.0
released on 2018-12-01
JaVers’ Spring integration modules are upgraded to be fully compatible with Spring 5.1 and Spring Boot 2.1.
If you are using Spring 5.x, it’s recommended to use JaVers 5.x. Otherwise you can fall into dependencies version conflict.
Current versions of dependencies:
springVersion = 5.1.2.RELEASE springBootVersion = 2.1.0.RELEASE springDataCommonsVersion = 2.1.2.RELEASE springDataMongoVersion = 2.1.2.RELEASE springDataJPAVersion = 2.1.2.RELEASE springSecurityVersion = 5.1.1.RELEASE mongoDbDriverVersion = 3.8.2 hibernateVersion = 5.3.7.Final
Since now, the last JaVers version compatible with Spring 4 is 3.14.0.
- 747 Two breaking changes in
CustomPropertyComparator. Now, it has to implement
boolean equals(a, b)method, which is used by JaVers to calculate collection-to-collection diff. Return type of
compare(...)method is changed to
Optional. See updated examples and doc.
public interface CustomPropertyComparator<T, C extends PropertyChange> { /** * This comparator is called by JaVers to calculate property-to-property diff. */ Optional<C> compare(T left, T right, GlobalId affectedId, Property property); /** * This comparator is called by JaVers to calculate collection-to-collection diff. */ boolean equals(T a, T b); }
746 Added default comparator for raw
Collections. Previously, raw
Collectionswere ignored by JaVers, now, they are converted to Lists and then compared as Lists.
738 Added
DBRefUnproxyObjectAccessHookto support lazy
@DBReffrom Spring Data MongoDB. The hook is registered automatically in
javers-spring-boot-starter-mongo.
3.14.0
released on 2018-11-10
All SQL queries are rewritten using the new, faster JaVers SQL framework. Poly JDBC is no longer used for queries (but is still used to schema management). Thanks to that, performance of JaVers commits with SQL repo is significantly better, especially when committing large object graphs.
Experimental support for DB2 and DB2400 is discontinued.
3.12.4
released on 2018-10-27
3.12.3
released on 2018-10-25
3.12.1
released on 2018-10-19
3.12.0
released on 2018-10-19
CompletableFuture<Commit> commitAsync(String author, Object currentVersion, Executor executor);
3.11.7
released on 2018-10-11
- 723 Added possibility to load Snapshots even if user’s class is removed. Prevents JaversException TYPE_NAME_NOT_FOUND.
3.11.6
released on 2018-09-29
- 712 Fixed issue with auto-audit aspect for JPA CRUD repositories for entities with Id generated by Hibernate (
@GeneratedValue).
3.11.5
released on 2018-09-19
3.11.4
released on 2018-08-27
3.11.3
released on 2018-08-22
Fixed JaversException PROPERTY_NOT_FOUND reported here.
Fixed bugs in Maps and Multimaps serialization.
3.11.2
released on 2018-08-14
3.11.1
released on 2018-08-09
3.11.0
released on 2018-08-04
511 Added handling of property type changes in domain classes. Now JaVers is able to load a Snapshot from JaversRepository, even if property types are different in a current domain class.
692 Fixed bug in javers-core dependencies. Guava is a truly optional dependency.
3.10.2
released on 2018-07-10
3.10.1
released on 2018-07-07
- 682 Fixed JaVers bootstrap error — COMPONENT_NOT_FOUND: JaVers bootstrap error - component of type ‘org.javers.core.CommitIdGenerator’
3.10.0
released on 2018-06-22
- Stream API for Shadow queries —
javers.findShadowsAndStream(). Using
Stream.skip()and
Stream.limit()is the only correct way for paging Shadows (see 658). See the example in ShadowStreamExample.java.
Stream<Shadow<Employee>> shadows = javers.findShadowsAndStream( QueryBuilder.byInstanceId("Frodo", Employee.class).build()); //then Employee employeeV5 = shadows.filter(shadow -> shadow.getCommitId().getMajorId() == 5) .map(shadow -> shadow.get()) .findFirst().orElse(null);
- 650
@DiffIgnoreand
@DiffIncludeannotations can mixed now in one class. When
@DiffIncludeis used in a class, JaVers ignores
@DiffIgnoreor
@Transientannotations in that class.
3.9.7
released on 2018-05-17
3.9.6
released on 2018-05-11
3.9.5
released on 2018-05-09
3.9.4
released on 2018-05-01
3.9.3
released on 2018-04-26
- 664 Fixed commidDate persistence in MySql. Column type is changed from
timestampto
timestamp(3)— milliseconds precision.
3.9.2
released on 2018-04-22
3.9.1
released on 2018-04-19
- 657 Fixed implementaton of
RANDOMCommitIdGenerator. You can use it in distributed applications:
Javers javers = javers().withCommitIdGenerator(CommitIdGenerator.RANDOM) .build();
3.9.0
released on 2018-04-11
- New API for processing Changes, convenient for formatting a change log. Now you can group changes by commits and by objects. See groupByCommit(). For example:
Changes changes = javers.findChanges(QueryBuilder.byClass(Employee.class) .withNewObjectChanges().build()); changes.groupByCommit().forEach(byCommit -> { System.out.println("commit " + byCommit.getCommit().getId()); byCommit.groupByObject().forEach(byObject -> { System.out.println(" changes on " + byObject.getGlobalId().value() + " : "); byObject.get().forEach(change -> { System.out.println(" - " + change); }); }); });
- Fixed bug in
queryForChanges(), which could cause NPE in some corner cases. Especially, for complex graphs with multiple levels of nested Value Objects.
3.8.5
released on 2018-03-27
springVersion=4.3.14.RELEASE springBootVersion=1.5.10.RELEASE springDataCommonsVersion=1.13.10.RELEASE springDataJPAVersion=1.11.10.RELEASE guavaVersion=23.0 gsonVersion=2.8.2 fastClasspathScannerVersion=2.18.1 jodaVersion=2.9.7 mongoDbDriverVersion=3.6.3 hibernateVersion=5.0.12.Final polyjdbcVersion=0.7.2 aspectjweaverVersion=1.8.13
- Added support for customizing date formats in the Diff pretty print. See JaVers Core configuration.
3.8.4
released on 2018-03-04
3.8.3
released on 2018-03-02
3.8.2
released on 2018-02-28
3.8.1
released on 2018-02-25
- 542 Added possibility to disable SQL schema auto creation.
The flag
withSchemaManagementEnabled()is added to
SqlRepositoryBuilder. The flag is also available in the Spring Boot starter for SQL.
3.8.0
released on 2018-02-06
- 616 New annotation —
@DiffInclude— for properties whitelisting. See property level annotations.
3.7.9
released on 2018-01-14
- 558 Performance improvement in Hibernate unproxy hook. Now, ShallowReferences can be created without initializing Hibernate proxies.
3.7.8
released on 2018-01-05
Marvin Diaz added support for DB2 and DB2400 (beta).
Fixed comparing of complex ID values in
ValueChangeAppender. Now their
equals()is not used.
3.7.7
released on 2017-12-20
596 Fixed NullPointerException when commit property value is null (by Sergey Rozhnov).
519 Added index on Entity typeName in MongoDB.
3.7.6
released on 2017-12-09
- 614 Custom
toStringfunction. Ismael Gomes Costa contributed the method for registering
toStringfunction for complex
ValueTypesused as Entity Id. See registerValueWithCustomToString javadoc.
3.7.5
released on 2017-12-01
Shadow queries performance optimization. Less DB queries executed for each Deep+ query.
Changes in Shadow Scopes. Now, JaVers always loads child ValueObjects owned by selected Entities. So there is no need to call
QueryBuilder.withChildValueObjects(). See ShadowScope javadoc
Shadow queries execution statistics logger. Enable it:
<logger name="org.javers.JQL" level="DEBUG"/>
and you will get detailed logs from query execution, for example:
DEBUG org.javers.JQL - SHALLOW query: 1 snapshots loaded (entities: 1, valueObjects: 0) DEBUG org.javers.JQL - DEEP_PLUS query for '...SnapshotEntity/2' at commitId 3.0, 1 snapshot(s) loaded, gaps filled so far: 1 DEBUG org.javers.JQL - warning: object '...SnapshotEntity/3' is outside of the DEEP_PLUS+1 scope, references to this object will be nulled. Increase maxGapsToFill and fill all gaps in your object graph. DEBUG org.javers.JQL - queryForShadows executed: JqlQuery { IdFilter{ globalId: ...SnapshotEntity/1 } QueryParams{ aggregate: true, limit: 100 } ShadowScopeDefinition{ shadowScope: DEEP_PLUS, maxGapsToFill: 1 } Stats{ executed in millis: 12 DB queries: 2 all snapshots: 2 SHALLOW snapshots: 1 DEEP_PLUS snapshots: 1 gaps filled: 1 gaps left!: 1 } }
Statistics are also available in
Stats object that you can get from
an executed query:
Stats stats = jqlQuery.stats();
3.7.0
released on 2017-11-24
605 Compare Lists as Sets. New List comparing algorithm contributed by drakin. See List comparing algorithms
601 Fixed bug in the type mapping algorithm. In this case, an Entity with complex inheritance structure was mapped as Value.
3.6.3
released on 2017-11-13
Changes in Shadow Scopes. Commit-deep+ is renamed to Deep+. See ShadowScope javadoc. Now, deep+ scope doesn’t include commit-deep scope. They are independent scopes.
597 Second fix for MySQL error: Specified key was too long; max key length is 767 bytes.
3.6.2
released on 2017-11-01
New snapshotType filter in JQL. Allows selecting snapshots by type:
INITIAL,
UPDATE,
TERMINAL.
Improved exception handling in
byInstancequery.
3.6.1
released on 2017-10-29
- Fix for ValueObject loading in Shadow queries. See updated docs of Shadow scopes.
3.6.0
released on 2017-10-05
- 431 Auto-audit aspect also on JpaRepository.saveAndFlush().
This task forced a major refactoring.
Javers-spring module was split into two parts:
javers-springwith general purpose auto-audit aspect and auto-audit aspect for Spring Data CrudRepository.
javers-spring-jpa— a superset of
javers-spring— with JPA & Hibernate integration, so: auto-audit aspect for Spring Data JpaRepository, HibernateUnproxyObjectAccessHook, JpaHibernateConnectionProvider, and JaversTransactionalDecorator.
If you are using JaVers with MongoDB, you don’t need to change anything.
If you are using JaVers with SQL but without Spring Boot,
you need to change the
javers-spring dependency to
javers-spring-jpa.
If you are using Spring Boot with our starter (
javers-spring-boot-starter-sql),
you don’t need to change anything. Our starters always provide the right configuration.
3.5.2
released on 2017-10-05
3.5.1
released on 2017-09-24
- Dependencies versions update:
springVersion=4.3.11.RELEASE springBootVersion=1.5.7.RELEASE guavaVersion=23.0 gsonVersion=2.8.1 fastClasspathScannerVersion=2.4.7 jodaVersion=2.9.7 mongoDbDriverVersion=3.5.0 hibernateVersion=5.0.12.Final polyjdbcVersion=0.7.1 aspectjweaverVersion=1.8.6
3.5.0
released on 2017-07-30
- 568 Added the new scope for Shadow queries — commit-depth+. In this scope, JaVers tries to restore an original object graph with (possibly) all object references resolved. See Shadow Scopes.
3.3.5
released on 2017-07-14
3.3.4
released on 2017-07-04
3.3.3
released on 2017-06-29
- 548 Added support for classes generated by Google @AutoValue.
3.3.2
released on 2017-06-25
3.3.1
released on 2017-06-25
3.3.0
released on 2017-06-21 at Devoxx PL, Cracow
- Added possibility to register the CustomValueComparator function for comparing ValueTypes (it works also for Values stored in Lists, Arrays and Maps). Solved issues: 492, 531.
For example, BigDecimals are (by default) ValueTypes
compared using
BigDecimal.equals().
Now, you can compare them in the smarter way, ignoring trailing zeros:
javersBuilder.registerValue(BigDecimal.class, (a,b) -> a.compareTo(b) == 0);
3.2.1
released on 2017-06-12
3.2.0
released on 2017-05-26
133 New JQL queries — Shadows. See Shadow query examples.
455 Fixed error in schema creation on MySQL database with non UTF-8 encoding — MySQL error: Specified key was too long; max key length is 767 bytes
3.1.1
released on 2017-05-07
532 Added the method to clear sequence allocation in PolyJDBC. See JaversSqlRepository
evictSequenceAllocationCache().
539 Added annotation priorities. Now, Javers’ annotations have priority over JPA annotations.
3.1.0
released on 2017-03-27
403 Added
@PropertyNameannotation. Now, property names can be customized which means easier domain classes refactoring.
27 Fixed resource leak in PolyJDBC.
3.0.5
released on 2017-03-24
- 524 Fixed version conflict between Hibernate and Spring Boot. Hibernate version downgraded to 5.0.11.Final
3.0.4
released on 2017-03-14
3.0.3
released on 2017-03-05
3.0.2
released on 2017-03-02
501 Fixed exception (Don’t know how to extract Class from type) for complex class hierarchies with generic type variables.
499 Fixed problem with hash collision for some method names.
3.0.0 — Java 8 release
released on 2017-02-01
We rewrote whole JaVers’ code base from Java 7 to 8.
Now, JaVers is lighter, faster, and more friendly for Java 8 users.
Breaking changes
All javers-core classes like:
Change,
Commit, or
CdoSnapshotnow use standard Java 8 types
java.util.Optionaland
java.time.LocalDateTime.
The old good Joda Time is no longer used in javers-core but still supported in users’ objects.
JaVers’
Optionalis removed.
All
@Deprecatedmethods in public API are removed.
Since 3.0, JaVers is not runnable on Java 7 Runtime. If you still use Java 7, stay with 2.9.3 version, which will be maintained for a while, but only for bug fixing.
Misc
- All JaVers’ dependencies are bumped to the latest versions:
gson : 2.8.0 mongo-java-driver : 3.4.2 picocontainer : 2.15 fast-classpath-scanner : 2.0.13 spring : 4.3.6.RELEASE spring-boot : 1.4.4.RELEASE hibernate : 5.2.7.Final joda : 2.9.7 (optional) guava : 21.0 (optional)
- SQL Repository schema migration scripts for JaVers 1.x are removed. Upgrade from JaVers 1.x to 3.0 is still possible, but first run 2.9.x to perform overdue SQL Repository schema migration.
3.0.0-RC
released on 2017-01-28
2.9.2 — the last version runnable on Java 7 Runtime
released on 2017-01-25
- 494 Fixed bug in MongoRepository introduced in 2.9.1 (IllegalArgumentException for Boolean JsonPrimitive).
2.9.1
released on 2017-01-17
2.9.0
released on 2017-01-14
2.8.2
released on 2017-01-03
- #485 Fixed MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes when creating indexes on MySQL.
2.8.1
released on 2016-12-13
- #475 Fixed concurrency issue in SQL sequence generator resulting in SequenceLimitReachedException: [SEQUENCE_LIMIT_REACHED]
2.8.0
released on 2016-12-09
- #476 Added support in
javers-springfor multiple Spring Transaction Managers.
Since now,
transactionManagerbean should be explicitly provided when configuring
javersbean:
@Bean public Javers javers(PlatformTransactionManager txManager) { JaversSqlRepository sqlRepository = SqlRepositoryBuilder .sqlRepository() .withConnectionProvider(jpaConnectionProvider()) .withDialect(DialectName.H2) .build(); return TransactionalJaversBuilder .javers() .withTxManager(txManager) .withObjectAccessHook(new HibernateUnproxyObjectAccessHook()) .registerJaversRepository(sqlRepository) .build(); }
See full example of Spring configuration.
- #461 Fix for
CANT_DELETE_OBJECT_NOT_FOUNDexcepting throw from
@JaversSpringDataAuditableaspect when deleted object not exists in JaversRepository.
2.7.2
released on 2016-11-29
#467 Fixed bug in GlobalId PK cache in SQl Repository. Now, when Spring Transaction Manager rolls back a transaction, the cache is automatically evicted.
#462 Fixed problem with commit property column size in SQL databases. Max length increased from 200 to 600 characters.
2.7.1
released on 2016-11-17
2.7.0
released on 2016-11-10
- #452 New
@IgnoreDeclaredPropertiesannotation, see class annotations.
2.6.0
released on 2016-10-30
#411 New commitId generator for distributed applications. Now you can use cluster-friendly
CommitIdGenerator#RANDOM, see
withCommitIdGenerator()
#209 Added multi-class query —
QueryBuilder.byClass(Class... requiredClasses).
#435 Added flags for deactivating auto-audit aspects in Spring Boot starters.
javers: auditableAspectEnabled: false springDataAuditableRepositoryAspectEnabled: false
2.5.0
released on 2016-10-26
#412
@ShallowReferenceannotation can now be used for properties.
Empty snapshots for
@ShallowReferenceEntities are no longer created.
#443 Fix for Gson stackoverflow exception when using complex Value types (with circular references).
2.4.1
released on 2016-10-18
2.4.0
released on 2016-10-12
2.3.0
released on 2016-09-21
- #263
@TypeNameannotation scanner implemented. Now you can easily register your classes with the
@TypeNameannotation in order to use them in all kinds of JQL queries
(without getting TYPE_NAME_NOT_FOUND exception). See
JaversBuilder.withPackagesToScan(String packagesToScan).
2.2.2
released on 2016-09-09
2.2.1
released on 2016-09-06
#417 Fixed dependency management in
javers-spring. Now
spring-data-commonsdependency is optional and should be on an application’s classpath only when you are using the
@JaversSpringDataAuditableannotation.
The aspect class
JaversAuditableRepositoryAspectwas removed and split into two aspects:
JaversAuditableAspectand
JaversSpringDataAuditableRepositoryAspect.
First one should be enabled when you are using
@JaversAuditable. Second one should be enabled when you are using
@JaversSpringDataAuditable.
If you are using
javers-spring-boot-starter-*, both aspects are enabled by default so you don’t have to change anything.
See auto-audit aspects documentaton.
#425 Fixed some bugs in ShallowReference type handling.
2.1.2
released on 2016-08-28
#416 Added map key dot replacement in MongoRepository.
#415 Key in
TypeMapperState.mappedTypeschanged from
Typeto
Type.toString().
2.1.1
released on 2016-07-30
- #395 Spring Boot version bumped to 1.4.0-RELEASE, fixed MongoDB Driver version conflict between JaVers and spring-data-mongodb.
2.1.0
released on 2016-07-28
#220 New aggregate filter in JQL. Now child ValueObjects can be selected when querying for Entity changes. See childValueObjects filter example.
#408 Added equals() and hashCode() in ContainerElementChange and EntryChange classes.
2.0.4
released on 2016-07-23
#407 Fixed bug that causes PropertyChange.equals() to always return false.
#394 Error message enhancement.
2.0.3
released on 2016-06-29
- #396 Fixed javers-spring integration problem: cannot access its superclass org.javers.spring.jpa.
JaversTransactionalDecorator.
2.0.2
released on 2016-06-17
2.0.1
released on 2016-06-15
- #384 Value-based equals() and hashCode() implemented in concrete Change types
- #380 Fixed CLASS_EXTRACTION_ERROR for non-concrete array types (like T[])
2.0.0
released on 2016-06-09
JaVers 2.0 comes with major improvements and new features in JQL.
Unified semantics of changes and snapshot queries
In JaVers 2.0, change queries work in the same way as snapshot queries and change queries accept all filters.
For example, in JaVers 1.x, this change query:
javers.findChanges(QueryBuilder.byInstanceId(Person.class,1).withVersion(5).build());
returns empty list, which is not very useful.
In JaVers 2.0 this query returns changes introduced by the selected snapshot, so changes between versions 4 and 5 of a given object.
JaVers implements change queries on the top of snapshot queries. Change sets are recalculated as a difference between subsequent pairs of snapshots fetched from a JaversRepository. In 1.x, only explicitly selected snapshots are involved in the recalculation algorithm. In 2.0, for each snapshot selected by a user query, JaVers implicitly fetches previous snapshot (if needed). Thanks to that, change queries are far more useful and they work as you could expect.
New features
New query for any domain object. See any domain object query example.
#334 New JQL
author()filter. See author filter example.
#305 New JQL
commitProperty()filter. See commit property filter example.
#375 Added support for commit properties in auto-audit aspect. See CommitPropertiesProvider.
SQL Schema migration
JaVers 2.0 comes with the new database schema for SQL repository:
- table
jv_cdo_classis no longer used
- new column
jv_global_id.type_name
- new column
jv_snapshot.managed_name
- new table
jv_commit_property
JaVers automatically launches a data migration script when old schema is detected.
Data from
jv_cdo_class are copied to new columns (
jv_global_id.type_name and
jv_snapshot.managed_name).
It should take a few seconds for medium size tables but for very large tables it could be time consuming.
Breaking changes
The only one breaking change is new semantics of changes query which is actually an improvement.
If you are using SQL repository, and your
jv_snapshot table is large (millions of records),
run JaVers 2.0 on your test environment for the first time and check if data migrations is done correctly.
1.6.7
released on 2016-05-06
- #368 Improvements in Spring Boot starters.
SpringSecurityAuthorProviderbean is created by default when SpringSecurity is detected on classpath.
1.6.4
released on 2016-04-26
- #362 Default behaviour for non-parametrized Collections instead of throwing JaversException: GENERIC_TYPE_NOT_PARAMETRIZED.
1.6.3
released on 2016-04-17
1.6.2
released on 2016-04-13
#355 Fixed exception handling in JaversAuditableRepositoryAspect.
#216 JQL - added basic support for nested ValuObjects queries.
1.6.1
released on 2016-04-12
#353 Fixed misleading error message for raw Collections fields.
#18 Fixed resource leak in PolyJDBC, resulting in ORA-01000: maximum open cursors exceeded (Oracle).
1.6.0
released on 2016-03-16
- #191 Added support for sets of ValueObjects, SET_OF_VO_DIFF_NOT_IMPLEMENTED exception should not appear anymore.
1.5.1
released on 2016-03-04
1.5.0
released on 2016-02-28
New JaVers Spring Boot starter for SQL and Spring Data —
javers-spring-boot-starter-sql. See Spring Boot integration.
Starting from this version we use SemVer scheme for JaVers version numbers.
1.4.12
released on 2016-02-25
1.4.11
released on 2016-02-12
#333 GroovyObjects support. Now JaVers can be used in Groovy applications. See Groovy diff example.
@DiffIgnorecan be used on class level (for example, GroovyObjects support is implemented by ignoring all properties with
groovy.lang.MetaClasstype). See class annotations.
#211 New annotation
@ShallowReferenceadded. It can be used as the less radical alternative to
@DiffIgnore. See ignoring things.
1.4.10
released on 2016-02-02
#325 Fixed bug in persisting commitDate in SQL repository.
#249 Fixed bug in JSON deserialization of Id property with Type tokens.
#192 Added support for well-known Java util types:
UUID,
Fileand
Currency.
#16 Fixed bug in PolyJDBC sequence generating algorithm.
1.4.7
released on 2016-01-29
- #322 New JQL
withVersion()filter for snapshot queries. See Snapshot version filter example.
1.4.5
released on 2016-01-25
- #309 New JQL
withCommitId()filter for snapshot queries. See CommitId filter example.
1.4.4
released on 2016-01-20
#286 New properties in
ReferenceChange:
getLeftObject()and
getRightObject().
#294 Added version number to Snapshot metadata:Warning!
CdoSnapshot.getVersion().
All snapshots persisted in JaversRepository before release 1.4.4 have version 0. If it isn’t OK for you, run DB update manually.
For SQL database:
UPDATE jv_snapshot s SET version = ( SELECT COUNT(*) + 1 FROM jv_snapshot s2 WHERE s.global_id_fk = s2.global_id_fk and s2.snapshot_pk < s.snapshot_pk)
1.4.3
released on 2016-01-18
- #179 New JQL
skip()filter, useful for pagination. See Skip filter example.
1.4.2
released on 2016-01-15
- #243 New JQL filters by createDate
from()and
to(). See CommitDate filter example.
1.4.1
released on 2016-01-08
- New JaVers module —
javers-spring-boot-starter-mongo. See Spring Boot integration.
1.4.0
released on 2015-12-18
- Added @TypeName annotation and support for domain classes refactoring, see Entity refactoring example. Fixed issues: #178, #232.
- #192 Fixed bug in persisting large numbers in MongoDB.
- #188 Diff is now
Serializable.
Breaking changes:
- Most of
@DeprecatedAPI removed.
- Slight API changes in few places.
GlobalIdis now decoupled from
ManagedType, reference from globalId to concrete managedType is replaced with
typeNameString field.
PropertyChangeis now decoupled from
Property, reference from propertyChange to concrete property is replaced with
propertyNameString field.
- Visibility of
ManagedClassis reduced to
package private.
1.3.22
released on 2015-11-27
1.3.21
released on 2015-11-13
1.3.20
released on 2015-11-08
#177 Added long-awaited
javers.compareCollections()feature. See compare top-level collections example.
#240 Fixed NPE in
LevenshteinListChangeAppender.
1.3.18
released on 2015-11-04
- #244 Added support for upper-bounded wildcard types, like
List<? extends Something>. Contributed by dbevacqua.
1.3.17
released on 2015-10-17
- #224 Fixed bug in
org.javers.common.collections.Optional.equals()which caused strange ClassCastException.
1.3.16
released on 2015-10-14
- #221 Fixed
JaversException.CANT_SAVE_ALREADY_PERSISTED_COMMITthrown when concurrent writes happened to hit JaversSqlRepository.
1.3.15
released on 2015-10-13
- Fixed Java 7 compatibility problem introduced in the previous version.
1.3.14
released on 2015-10-13
- #218 Fixed concurrency issue in TypeMapper which caused ClassCastExceptions, i.e.: java.lang.ClassCastException: com.example.MyObject cannot be cast to org.javers.core.metamodel.object.
GlobalId
1.3.13
released on 2015-10-09
- #207 Fixed bug in serialization ValueObject arrays. Fixed bug in comparing deserialized primitive arrays.
1.3.12
released on 2015-10-03
- #208 Added support for legacy date types:
java.util.Date,
java.sql.Date,
java.sql.Timestampand
java.sql.Time. Added milliseconds to JSON datetime format. All local datetimes are now serialized using ISO format
yyyy-MM-dd'T'HH:mm:ss.SSS.
1.3.11
released on 2015-10-01
- #213 Fixed bug in calculating changed properties list in
CdoSnapshot.getChanged()for nullified values.
1.3.10
released on 2015-09-30
- #206 Fixed NPE when reading ValueObject changes from SQL repository. It was caused by error in serializing ValueObjectId to JSON.
1.3.9
released on 2015-09-24
- #205 Fixed
AFFECTED_CDO_IS_NOT_AVAILABLE JaVers runtime errorwhen serializing
Changesto JSON using Jackson.
1.3.8
released on 2015-09-21
- #126 Added support for Java 8
java.util.Optionaland types from Java 8 Date and Time API (like
java.time.LocalDateTime). JaVers can still run on JDK 7.
- #197 Added JSON prettyPrint switch —
JaversBuilder.withPrettyPrint()
- #199 Added support for comparing top-level Arrays, i.e.:
javers.compare(new int[]{1}, new int[]{1,2}). Contributed by Derek Miller.
1.3.5
released on 2015-09-15
1.3.4
released on 2015-08-24
- #190 Fixed bug in ManagedClassFactory, Id property can be registered even if it has @Transient annotation.
1.3.3
released on 2015-08-12
- Javers-hibernate module merged to javers-spring.
- #186 Fixed another concurrency issue in CommitSequenceGenerator.
1.3.2
released on 2015-08-09
1.3.1
released on 2015-08-03
1.3.0
released on 2015-07-17
1.2.11
released on 2015-06-30
1.2.10
released on 2015-06-12
- #172 Fixed bug when registering more than one CustomPropertyComparator
- #167 Fixed bug in Levenshtein algorithm (comparing lists of Entities)
1.2.9
released on 2015-06-10
- Pretty-print feature:
javers.getTypeMapping(Clazz.class).prettyPrint()describes given user’s class in the context of JaVers domain model mapping.
1.2.8
released on 2015-05-31
1.2.7
released on 2015-05-29
- Fixed problem with build 1.2.6, which wasn’t built from the master branch
1.2.6
released on 2015-05-26
- #157 Fixed JsonIOExcpetion when trying to deserialize property with nested generic type. Contributed by Dieler.
1.2.5
released on 2015-05-24
- #146 #156 MongoDB Java Driver updated to 3.0. Thanks to that, JaVers is compatible with MongoDB versions: 2.4, 2.6 and 3.0.
1.2.1
released on 2015-05-18
- #127 Implemented tolerant comparing strategy for ValueObjects when one has more properties than another. For example, now you can compare
Bicyclewith
Mountenbike extends Bicycle.
1.2.0 JQL
released on 2015-04-20
- #36 Javers Query Language. New fluent API for querying JaversRepository. New query types: by class, by property and more, See JQL examples.
- #98 Track changes in collection. Tracking VO changes while looking at master Entity.
- #118 API to get change history for a given property.
- #128 Changes of a set of entities.
- #129 Lists: newObject and ValueChange?
1.1.1
released on 2015-03-17
- #97 Levenshtein distance algorithm for smart list compare. Contributed by Kornel Kiełczewski.
1.1.0
released on 2015-03-13
- #67 JaversSQLRepository with support for MySQL, PostgreSQL and H2.
- #89 Spring JPA Transaction Manager integration for Hibernate.
1.0.7
released on 2015-02-25
- #47 Spring integration. Added
@JaversAuditableaspect for auto-committing changed done in Repositories.
gessnerfl contributed
@JaversSpringDataAuditable, which gives a support for Spring Data Repositories.
1.0.6
released on 2015-02-10
1.0.5
released on 2015-02-01
- #76 AddedSupport for nested generic types like
List<List<String>>or
List<ThreadLocal<String>>. Reported by Chuck May
- Fixed NPE in MongoRepository.
1.0.4
released on 2015-01-20
- #80 Added custom comparators support. This allows you to register comparators for non-standard collections like Guava Multimap.
1.0.3
released on 2015-01-12
#47 Spring integration. Added
@JaversAuditableannotation for repository methods.
#77 Added missing feature in Generics support.
Reported by Bryan Hunt.
#71 Tracking a top-level object deletion.
Reported by Chuck May.
1.0.2
released on 2015-01-08
1.0.1
released on 2015-01-03
1.0.0
released on 2014-12-25
- Production-ready release with stable API.
|
https://javers.org/release-notes
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
How to create a Windows service
Let’s create a Windows service - the thing that will run at the background and do stuff. For example, our service will write phrase “ololo” into Event Log.
Creating service
Open Visual Studio and create new Windows Service project:
I named it
SillyService.
The project starts with opened
Service1.cs. Rename it to
Service.cs. Actually, it does not matter, but looks better.
This file can be opened from Solution Explorer in 2 modes:
- Just double click on it and you will get a Design mode. That’s how it’s opened now;
- Or right-click on it and choose View Code - you will get the code of the file.
But now we need Design-mode. Click with right button on any free space of the edit area (it’s all free, because we haven’t added anything yet) and choose
Properties. Edit them like this (just change the
ServiceName):
Now find the sliding Toolbox on the left and drag the
EventLog element from there into edit area of
Service.cs opened in Design mode:
Click on it and choose
Properties. Edit them like this:
From all my experiments I got that
Log and
Source should have different names (and be different from the name of service itself). So, I added
_log and
_source suffixes accordingly.
Ok, that’s done. Now right-click somewhere at free space of the edit area of
Service.cs (still in Design mode) and choose
Add Installer:
A new document will appear in a separate tab containing two elements: serviceProcessInstaller and serviceInstaller, both also available in two modes (Design and View Code). Edit
Properties for both elements like this:
It’s about giving proper names, description and choosing the right authority level (
LocalSystem).
Now let’s create some settings for the service. Right-click on project and choose
Properties:
Go to
Settings tab and click on the only label there (This project does not contain…). The settings file will be created, and it will be displayed as a table. We will create an int parameter there -
timerInterval - which will store a value for timer (how often our service should perform some action):
Save everything and open
Service.cs in View Code mode (right-click on the file in Solution Explorer). We will implement some actual stuff that our service will do, which is to write a text string to the Events Log every 10 seconds:
using System.Diagnostics; using System.Reflection; using System.ServiceProcess; using System.Text; namespace SillyService { public partial class Service : ServiceBase { /// <summary> /// Main timer /// </summary> private System.Timers.Timer timer2nextUpdate; /// <summary> /// Timer interval in seconds /// </summary> private int timerInterval = Properties.Settings.Default.timerInterval; // here you can see how this value is being pulled out from Settings public Service() { InitializeComponent(); // create new Source if it doesn't exist EventSourceCreationData escd = new EventSourceCreationData(eventLog.Source, eventLog.Log); // eventLog instance was created in Service.cs in Design mode, as you remember if (!EventLog.SourceExists(eventLog.Source)) { EventLog.CreateEventSource(escd); } } protected override void OnStart(string[] args) { // using System.Text; StringBuilder greet = new StringBuilder() .Append("SillyService has been started.\n\n") .Append(string.Format("Timer interval (in seconds): {0}\n", timerInterval)) // using System.Reflection; .Append(string.Format("Path to the executable: {0}", Assembly.GetExecutingAssembly().Location)); write2log(greet.ToString(), EventLogEntryType.Information); // timer settings this.timer2nextUpdate = new System.Timers.Timer(timerInterval * 1000); this.timer2nextUpdate.AutoReset = true; this.timer2nextUpdate.Elapsed // what timer's event will do += new System.Timers.ElapsedEventHandler(this.timer2nextUpdate_tick); this.timer2nextUpdate.Start(); } protected override void OnStop() { write2log("SillyService has been stopped", EventLogEntryType.Information); } /// <summary> /// Writing to log /// </summary> /// <param name="message">message text</param> /// <param name="type">type of the event</param> private void write2log(string message, EventLogEntryType type) { try { eventLog.WriteEntry(message, type); } catch { } } /// <summary> /// timer's event /// </summary> private void timer2nextUpdate_tick(object sender, System.Timers.ElapsedEventArgs e) { write2log("ololo", EventLogEntryType.Information); } } }
Build the project.
Installing and launching the service
Now you have 2 files in the
path\to\SillyService\bin\Debug directory:
SillyService.exe- executable of the service;
SillyService.exe.config- settings-file for the service.
There are actually more files there, but you don’t need them. Copy these 2 into some new directory, like
C:\services\SillyService\. By the way, later you might want to use Release build rather then Debug.
Make sure, that Services and Event Viewer applications are closed.
Open command line with administrator rights (or, if you don’t want to deal with command line, use my application), find
InstallUtil.exe path (mine was here:
C:\Windows\Microsoft.NET\Framework\v4.0.30319\) and execute the following:
C:\Windows\Microsoft.NET\Framework\v4.0.30319>InstallUtil.exe c:\services\SillyService.exe
You’ll get something like this:
Sorry for russian text on the screenshots (some UI on my Windows is in russian), but there is nothing important there anyway.
Now you can start the service from Services (
services.msc). Just in case, run Services with administrator rights:
After the start of the service you can open Event Viewer to see your service’s log there:
So, service is running, it will start automatically each time you reboot your computer and it will write a line of text to the Event Log every 10 seconds.
Possible problems
Service might not start and give you some error about not answering. Most probably, that is related to
Source/
Log stuff:
- Check if you’ve set
Local Systemin the
Accountfield of serviceProcessInstaller
Properties;
- Perhaps, you ignored my notice about naming
Logand
Source;
- Some other access problems. Try to set administrator’s credentials at
Log Ontab of service
Propertiesin Services.
If you would like to uninstall service, use the same command, but just add
/u key:
C:\Windows\Microsoft.NET\Framework\v4.0.30319>InstallUtil.exe /u c:\services\SillyService.exe
Sources
The source code of described
SillyService
|
https://retifrav.github.io/blog/2017/02/06/how-to-create-a-windows-service/
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
A pure swift PNG decoder and encoder for accessing the raw pixel data of a PNG file
PNG
A pure Swift PNG library. Enjoy fast PNG encoding and decoding with strong data types, strict validation, and a safe, expressive, and Swifty API.
getting started
Decode a PNG file to a type of your choice in just one function call.
import PNG let (pixels, (x: width, y: height)) = try PNG.rgba(path: "example.png", of: UInt8.self) // pixels: [PNG.RGBA<UInt8>] // width: Int // height: Int
Use a component type of
UInt16 to capture the full color depth of a 16-bit PNG.
let (pixels, (x: width, y: height)) = try PNG.rgba(path: "example.png", of: UInt16.self) // pixels: [PNG.RGBA<UInt16>] // width: Int // height: Int
Return only the components you need with the grayscale and grayscale-alpha APIs.
let (pixels, (x: width, y: height)) = try PNG.va(path: "example.png", of: UInt8.self) // pixels: [PNG.VA<UInt8>] // width: Int // height: Int
let (pixels, (x: width, y: height)) = try PNG.v(path: "example.png", of: UInt8.self) // pixels: [UInt8] // width: Int // height: Int
features
- Supports all standard PNG formats, including indexed and interlaced formats
- Supports common graphics API interchange formats such as ARGB32
- Supports ancillary chunks, including private ancillary chunks
- Supports chroma key transparency
- Multi-level APIs, including raw chunk-level APIs
- Strong typing and expressive enumerations to catch invalid states at compile time
- Fixed-layout currency types for efficient C interop
- No Foundation imports and one system dependency, zlib
- Tested on MacOS and Linux
- Thorough API documentation
What’s the difference between bit depth and color type?
Color type refers to the channels present in a PNG. A grayscale PNG has only one color channel, while an RGB PNG has three (red, green, and blue). An RGBA PNG has four — three color channels, plus one alpha channel. Similarly, a grayscale–alpha PNG has two — one grayscale “color” channel and one alpha channel. An indexed-color PNG has one encoded channel in the image data, but the colors the indices represent are always RGBA quadruples. The vast majority of PNGs in the world are either of color type RGB or RGBA.
Bit depth goes one level lower; it represents the size of each channel. A PNG with a bit depth of
8 has
8 bits per channel. Hence, one pixel of an RGBA PNG is
4 * 8 = 32 bits long, or
4 bytes.
What is interlacing?
Interlacing is a way of progressively ordering the image data in a PNG so it can be displayed at lower resolution even when partially downloaded. Interlacing is sometimes used in images on social media such as Instagram or Twitter, but rare elsewhere. Interlacing hurts compression, and so it usually significantly increases the size of a PNG file, sometimes as much as thirty percent.
Why does this package depend on
zlib?
ZLib is a standard compression/decompression library that is installed by default on MacOS and most Linux systems. Although it is written in C, it is wrapped by almost every major programming language including Java and Python. The only other Swift PNG decoder library in existence at the time of writing, SwiftGL Image, actually implements its own, pure Swift,
INFLATE algorithm. (Note that it doesn’t compile on Swift ≥3.1.)
Does this package work on MacOS?
Yes.
How do I access/encode custom PNG metadata chunks?
Use the ancillary chunk API on the
Data.Uncompressed or
Data.Rectangular types, which expose ancillary chunk types and data through the
ancillaries instance property. See this tutorial for more details. Note that, except for
tRNS, PNG does not parse ancillary chunks, it only provides their data as a
[UInt8] buffer. Consult the PNG specification to interpret the ancillary chunks.
Does this package do gamma correction?
No. Gamma is meant to be applied at the image display stage. PNG only gives you the base pixel values in the image (with indexed pixel dereferencing and chroma key substitution). Gamma is also easy to apply to raw color data but computationally expensive to remove. Some PNGs include gamma data in a chunk called
gAMA, but most don’t, and viewers will just apply a
γ = 2.2 regardless.
building
Build PNG with the swift package manager,
swift build (
-c release). Make sure you have the
zlib headers on your computer (
sudo apt-get install libz-dev).
|
https://iosexample.com/a-pure-swift-png-decoder-and-encoder-for-accessing-the-raw-pixel-data-of-a-png-file/
|
CC-MAIN-2019-26
|
en
|
refinedweb
|
SUSE Linux Enterprise Server for Raspberry Pi
Raspberry.
JumpZero
Yeah! Great news
John
64-Bits. Go!
Peter Jones
There is some documentation provided here. Maybe this needs adding to the main post?
Robert
That’s a great step forward. Now we just need a 4 GB RAM Pi4 ? to come out to make the most of the processor capability..
RoundDuckMan
Isn’t there some advantage for the Pi3 anyways, like faster performance and other 64-bit advantages, besides the “you can use more than 4GB RAM” point?
Carl Jacobsen
Expanding the Pi to 4 GB of RAM isn’t just a matter of slapping on a few more RAM chips – they need to reengineer the SoC that is the heart of the Pi to change that limit – a non-trivial, and very expensive, task (at that point they’ll likely make numerous other improvements). It’ll happen, but don’t hold your breath. Enjoy the Pi 3B we have now.
Lynn Fredricks
Great news, and why not? Ill have to see if our Valentina Server 64 bit will work on it.
Russell Davis
I was a longtime SuSE user sounds like I will be a again
EverPi
Nice!
Alan Mc (Irish Framboise)
Tada! The great exciting announcements just keep coming, never quite got my system around Suse in the past, now might be time to try so. Good job folks.
PeterF
I’ve been running SuSE and openSuSE on Intel & AMD H/W since at least version 5.2 (Mar 1998!) and I’ve found it a very stable & productive environment. I’ve only dabbled a little with SLES, but I’m looking forward to getting this Pi version going asap. I’ve downloaded it. Now where’s an empty card…
Great news, and thanks!
Elfen
Finally!
Robert Cromer
Now all you need is a RP4 that can support multiple disk drives (like SATA/NAS) and has twin NICs, then you would have the “teacher’s” computer. Since you now have PIXE booting, all students could be served for their OSs and a given day’s worth of course work.
Better yet, place the above RP4 onto a single/double sided Compute Module board and also design a multiple 4-Core Processor based Compute Module. So you could have the “School’s” computer.
Food for thought,
Bob Cromer
W. H. Heydt
SSDs really remove the need for multiple drives unless you want to use RAID to protect against drive failure.
Steelystar
You forgot the Pi 3 does have two NICs: wired ethernet and WiFi! So PXE/TFTP/DHCP booting can be set for the ethernet port which would be connected to the switch that students’ Pis are plugged into as well. Then teacher continue to use the WiFi for normal activities and internet access, which may be better that way anyways since you don’t want those kids distracted on Facebook, etc.
As for disk drives, you just answered your own question, which is a NAS (_NETWORK_ attached storage). In such usage it would probably be better to get a dedicated NAS device, with multiple drive slots for room to grow. Then enable network shares via CIFS/samba or NFS and configure PXE booting to redirect so the Pis mount and get files from there.
So even with Pi 3 it is doable with much the same hardware, just a little thought into infrastructure and design.
daniel obrien
Free trial? Have to buy it? Hmmm.
Simon Flood
No, you can request a one year self-support subscription at
W. H. Heydt
Wow… Talk about coming around “full circle”… In 2002, I built a system around two AMD Opteron 240 CPUs and put SuSE 9.2 on it because it was the only 64-bit Linux available. That system has been my “benchmark” for where I’d like to see the Pi get to for use as a server. The Pi3 is actually pretty close. If the “Pi4B” is able to open up the speed for mass storage, I think it’ll be there, or–at least–close enough.
Kevin
So awesome!! Now I can finally run a familiar and supported server based environment at home on a tiny $35 box. But it seems like the Pi is sneaking its way even more into vertical markets and the corporate realm.
I will be checking this out. I wonder which boot method they are using, with u-boot or the binary blob. In any case, SUSE has just thrown down the gauntlet. How will Red Hat and others respond???
Richard
This is ace, and thank you to Electron752 who ever you are. :)
fabo
Up and running. There is some issue with registration and certificate and WLAN, but I play with only few minutes :-)
fabo
OK. Registration issue is about time settings and root rights. If you skip to register to SuSe with your code from SuSe registration during the installation, you have to register after. You need to set the right time and you have to registrate your installation to SuSe server using the command line using sudo: sudo SUSEConnect –regcode . Graphical tool from menu is missin relevant rights.
TC
Thanks Fabio.
I found my way to sysconfig fine. Missed the time setting. The tip about time was the ticket.
MW
This presumably will install on the Raspberry Pi 2B new revision with the BCM2837 SoC ??
Simon Flood
No, SUSE Linux Enterprise Server (SLES) for Raspberry Pi requireso a 64-bit Pi and the Pi 2B is 32-bit. Currently only the Pi 3 is supported as that’s the only 64-bit Pi currently available.
azbest
Fyi: There is a v1.2 raspi 2B with 64 bit
Jeremy
Not so, the new Pi2 (V1.2) has a 64-bit Cortex-a53 processor identical to the Pi3.
Alec Clews
So do we have any idea if the move to 64bit is worth it?
How much bloat does it add to the binaries to use 64 vs 32 bit addressees?
Is it slower or faster? I assume that with the move to 64 bit instruction set might be faster…..
Remember that currently the Pi can’t have more that 1Gb of memory.
Jeremy
How much bloat does it add to the binaries to use 64 vs 32 bit addressees?
No much that I have seen. All the 64-bit programs I have compiled have a smaller binary than the 32-bit version. As for memory size, it depends on your program.
Max
Is noticably faster/slower than 32bit?
Marco Alvarado
I was thinking the same … so I made some tests (very primitive, but they are just to figure how they work):
The first case was with some integer based operations. They are comparable.
But, in the second case, with float point operations, an openSUSE 64 bit is 4 times faster than a Raspbian 32 bit, both on RPI3 machines. This is very similar to a test I made thousands of years ago with the first Xeon 64 bit processors when comparing 32 vs 64 bit compiled programs. Take into consideration that the test only involved ONE core.
As an extra comparison, a Mac with i7 2.3 GHz is 5 times faster than the openSUSE machine.
I know this is not a perfect comparison, but I was trying to understand how expensive are the RPI3 machines comparing these results. In this case, the idea is to multiply the quantity of machines, and their price, needed to match the i7 base reference computer (again, forgetting Amdahl rule, cluster management and related stuff).
MAC-i7 : 2 seconds ($800) / Base unit
RPI3-32 : 43 seconds ($35) / 21.5 times slower ~ $752.5 needed
RPI3-64 : 10 seconds ($35) / 5 times slower ~ $175 needed
As can be seen, the RPI3 with Raspbian and 32 bit has a very similar final price to match the i7 on pure float point price. However, with 64 bits it is really much cheaper. But the real issue here is that you don’t need 100% of the time the i7 raw power, so to have RPI3 machines has a lot of sense because they help you to refine your investments.
Note: The price for the MAC it is because it has 16GB RAM and it is the Quad Core model from 2012 that no longer exist (now they are dual core ones). Although a more “professional” comparison is needed
— this is my “primitive” test source code:
#include
#include
#include
using namespace std;
int main() {
double numero = 1000000;
time_t tinicial, tfinal;
time(&tinicial);
for (int a=0; a<100000000; a++) {
double numero2 = numero / 3.15;
numero2 = sqrt(numero) / sqrt(numero2);
}
time(&tfinal);
cout << tfinal – tinicial << endl;
}
caperjack
the download is only a trial and one has to sign up to get it ,,why
Jeremy
There is an OpenSUSE version I believe
Mike Shock
SUSE Linux is my favourite OS for many years already: at work and at home. It’s stable, reliable, convenient and stylish. And it’s really great that “chameleon” can now be used with “Raspberry Pi”!
keamas
Can I run real sever staff like on the normal SuSe Enterprise Server or is there any limitation of Software which would not run?.
I am thinking about an SuSe directory, dhcp, dns server (like Windows AD).
Packi
FYI any linux computer can run “real” server software such as bind dns or openldap directory software. It is only the appropriate packages that needs to be installed to turn it into a server. Even Raspian can run those functions you mentioned to become your dedicated Windows AD server:
yum install openldap dhcpd bind samba winbind
Packi
Sorry that previous command is for Centos that I am working on at the moment and was on my mind. For Raspian the corresponding packages are:
apt-get install slapd isc-dhcp-server bind samba samba-common-bin
Lionel
Nice to hear. Now, just wait for raspbian on 64b ;)
Electron752
Just wanted to let everyone know that you can build your own 64-bit kernel from the downstream tree(or upstream tree).
I just reconfirmed that branch rpi-4.8.y from works. Just use the bcmrpi3_defconfig build config and the aarch64 cross compiler.
As a warning I’m just a forum user, but I understand that the upstream tree is preferred for 64-bit support at.
I don’t use u-boot, but I understand u-boot gives some extra security features and flexibility by implementing EUFI. I understand it’s possible to put grub in the boot process to enable menus. I think it also provides extra security features such as the ability to load the kernel at a random address.
Richard Sierakowski
Hi Electron752,
Thanks for the great work on 64 bit. Hopefully this will speed up the inevitable move to 64 bit Raspbian even though it will be a major burden maintaining 32 and 64 bit versions.
Richard
jaltek
openSUSE images and infos can be found at
Winkleink
Thank you.
I will download from this link
Winkleink
The need to register with SUSE (just another login) means I’ll give this a skip for now.
When Canonical get 64 bit Ubuntu running on the Pi3 then I will try it out.
No super urgent need as of today.
Still love seeing the advancement and the working being done by multiple Open Source communities to make the Pi even more awesome.
Electron752
Just wanted to let you know that I was able to set Ubuntu Xenial LTS server to boot on a RPI 3 in ARM64 mode this morning. It seems to work just fine.
Here are the basic steps:
1. Use debootstrap –arch arm64 –foreign pointed at to install from a X86 machine.
2. Copy over the kernel and run debootstrap –second-stage on the RPI 3.
3. Install tasksel using apt.
4. Run tasksel install server.
Easy as PI:)
It probably wouldn’t get hard to get their graphical installer to work by copying over the RPI linux kernel.
fanoush
Thank you for your whole 64bit on Pi effort.
R One
Hi all,
Installing it needs a keyboard (user and root passwords must be typed). I didn’t need one for Raspbian (once Raspbian installed, I could ssh to it via the eth0 interface and insert the wlan0 passphrase via my tablet).
Could the installation process have those fields preset so that just a mouse is enough to set the whole thing up ?
/R One
Milliways
This may be an interesting interim development, but can anyone can suggest ANY benefit running a 64 bit OS on 1GB memory? It is hard to even imagine any benefit from mapping virtual memory to a SD Card or HDD on USB2?
At best code will be slightly larger and marginally slower (due to longer instructions).
Richard
Please go and learn what 64 bit means in this use case. 64 bit has many facets that depend on CPU arch, ABI used and how default types are defined. Peddling miss information from your own miss understanding is not doing anyone any good.
Advice from a grumpy old man with 25+ years coding experience, it is very hard to know what the impact is without working with a system for a while using with ASM and C/C++.
I would say the only one who has a good understanding of the details of the port and the implications is Electron752.
Jeff
Suse was one of the best early distributions. It had a lot of amateur radio support. I will give it a try. Thank you!
don isenstadt
I downloaded it.. I am typing this comment from suse .. it comes with firefox only .. putting the image on a sd from the mac using the dd command did not work.. wound up changing .raw to .img and using apple pi baker .. that did work .. and took 5 mins. could not configure my wifii on initial config so successfully used eth0 and configured the wifii after running. This makes me really REALLY appreciate pixel and raspian … the are so easy to use and posished compared to this. Lots of things are not there . ie. htop NTP not configured. .Poor display support .. ie. my 40 inch vizeo hdmi tv did not work but my 20 in samsung dvi display via hdmi converter did. It does not seem to run any quicker than raspbian. The boot up is much slower. It was interesting to install it and use it .. I am sure over time it will get better ..
Richard
A big plus for many of us who actually do intensive research and education work in science and engineering is the free Mathematica that comes with the Raspian distribution.
Without the 64 bit free Mathematica, I see little advantage in going 64 bit, other than when one wants to do number crunching.
EPTON, if you are listening, PLEASE continue on with developing a 64 bit Raspian and getting a new, 64 bit version of Mathematica that comes with the distribution. The Mathematica is a very expensive program to have to buy or license for a private individual, and dramatically increases the value and usefulness of the Pi and the Raspian distribution.
se
has No sound
Jay
My Logitech wireless USB k700 mouse and keyboard combo does not work from my research it seems to be a kernel issues. Have not got past that issue.
David
Glad to see this thread on the RaspberryPi.org site.
Sending this via Firefox running on SUSE on a Pi-3. Had tried kraxel’s 64-bit Fedora 24 over the summer, but various parts of it didn’t work. Got SUSE up and running in and hour or two (long time to dd on my Mac, and it seemed to have failed but hadn’t). So far it feels just like a regular distribution — things just work, for the most part.
Some problems on startup, notably getting WiFi configured. But getting an X window from SUSE onto my Mac — haven’t figured that out yet; wants to work but doesn’t. And term window fonts are terribly small, and it’s been a long time since I’ve had to set XResources. I’ve gotten accustomed to setting such properties from the app itself. Can’t see how to do that in SUSE. Coming from Raspbian and Fedora, just another learning curve. But sync’d Firefox with my Mac version and it works fine.
SUSE seems *much* faster than kraxel’s 64-bit Fedora 24, both boot and desktop operation. Seems about the same as Raspbian 32-bit.
Ran an old FORTRAN chemistry program (double-precision arithmetic) and it ran about the same speed as Raspbian. Looks like the compiler doesn’t use the 64-bit NEON very well. But ARMv8 assembly code really does assemble and run (won’t run on Raspbian, which is running in 32-bit ARMv7 mode).
So if you’re interested in learning the ARMv8 architecture (which was my motivation), then this is a quick and easy way to get started. Just costs you a microSD card, a little time, and not much frustration.
FRANCIS M DOWN
I have now run Suse on an IBM mainframe and the Pi. Wow. I spent about an hour playing with Suse on the Pi, seems to be a very nice port, no real problems. When I get some time I will put some serious Java on it and see how it runs under Tomcat.
Exaga
Building a working aarch64 (arm64) kernel for the raspberry pi 3 is not rocket science. Anybody can do it. As with all things, you just need the right knowledge. It’s similar to baking a cake.
The above guide will show you how.
Speedz
Installed fine, ran ok, but their reg key activation needs a little work. It refuses the key they gave. Pretty much a waste of time today. Format, and back to 32bit until a 64bit Raspbian or Ubuntu is done up.
orca68
I’m a SUSE Linuxer since the advent of SUSE on the market.
My university hosted one of the first PD Downloads for SuSE.
I’m very happy that I can use my favorite Linux on my newest
toy :)
Great!
Thanks to the SUSE Dev Team
Nobody of Import
They need to work on it. It *DOESN’T* rate the rep SuSE carries.
1) If, for example, you are using a PiTop (Straight up HDMI monitor, folks) it doesn’t boot up to graphics- it’s trashed on the display, but it boots up clean hooked up to a TV HDMI jack.
2) A standard HID keyboard via a Unifying jack doesn’t work. PERIOD. I couldn’t get past the initial config screens on a TV.
FAIL on both counts and counts as a “brown paper bag” release (as in what Linus called at least one Linux release- as in put a brown paper bag over your head and hide in shame…)
|
https://www.raspberrypi.org/blog/suse-linux-enterprise-server-for-raspberry-pi/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Well-formatted content conveys important information better than unformatted content. Nowadays, it’s hard to find rich text in mobile applications since most editors don’t support it. We at Syncfusion understand the need for this simple but essential function; that’s why we are happy to introduce the Xamarin.Forms Rich Text Editor component in Essential Studio 2019 Vol. 3.
This WYSIWYG editor provides a simple, yet powerful editing interface to compose richly formatted text with common formatting options such as bold, italic, and so on. The Rich Text Editor is widely used to create messaging applications, email composers, blogs, forum posts, feedback and review sections, note sections, and more. It has a variety of tools to edit and format rich text, and it returns valid HTML markup.
This blog post showcases some key features of the Rich Text Editor.
Seamless Formatting
The Rich Text Editor provides formatting options frequently used in mobile applications. Formatting can be applied to selected content, a whole paragraph, specific words, or a selected character. Available options include:
- Bolding, italicizing, and underlining.
- Custom font and background colors.
- Formatting for headings, quotations, code, paragraphs, etc.
- Increasing or decreasing paragraph indentation.
- Text alignment.
Text formatting
Sequencing content as a list
Organize content by applying ordered (numbered) or unordered (bulleted) lists.
Clipboard
Cut, copy, and paste formatted content within the same application or to an external application.
Toolbar customization
The Rich Text Editor provides a highly customizable toolbar. Customization options include:
- Changing the toolbar’s background color, text color, as well as the background color of toolbar items.
- Showing or hiding the entire built-in toolbar or a specific toolbar item.
You can also design your own toolbar that has the same functionalities using our comprehensive APIs.
Creating a Xamarin.Forms application containing the Rich Text Editor
This section explains, step-by-step, the procedure for implementing the Rich Text Editor control in a Xamarin.Forms application using Visual Studio.
- Create a blank Xamarin.Forms application.
- In the application, refer to the Xamarin.SfRichTextEditor NuGet package from nuget.org. To learn more about SfRichTextEditor, refer to “Adding SfRichTextEditor reference” in Syncfusion’s documentation.
- When deploying the application in UWP and iOS, please follow the steps provided in “Launching the application on each platform with Rich Text Editor” in Syncfusion’s documentation.
- Import the rich text editor namespace in your respective page and initialize SfRichTextEditor as demonstrated in the following code sample.
<?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="" xmlns: <StackLayout> <richtexteditor:SfRichTextEditor </StackLayout> </ContentPage>
That’s how you add the Rich Text Editor control to an application.
You can download a basic sample from the “Getting Started” section of our Rich Text Editor control documentation.
Conclusion
I hope you have a clear picture of how the new Rich Text Editor control works, and how to use it in a Xamarin.Forms application. Give it a try in our 2019 Volume 3 release.
We also invite you to check out all our Xamarin.Forms controls. You can always download our free evaluation to see all our controls in action or explore our samples on Google Play and the Microsoft Store. To learn more about the advanced features of our Xamarin.Forms controls, refer to our documentation.
If you have any questions or require clarification regarding this control, please let us know in the comments section of this blog post. You can always contact us through our support forum, Direct-Trac support system, or feedback portal. We are always happy to assist.
Great work!
Can we save it as PDF?
Hi Ankush,
Yes, it is possible to save the formatted text in Rich Text Editor as Word/PDF document using Syncfusion.Xamarin.DocIORenderer NuGet. Please find the code example and sample for the mentioned requirement in below mentioned KB link.
KB link:
Regards,
Dilli babu.
Can a word document be convereted to Rich Text document format as well?
Hi Amir,
We regret for the delay.
Xamarin RichTextEditor doesn’t have direct import support of Word document. Instead, you can use our Essential DocIO to export the Word document as HTML. Please refer the following documentation for exporting Word document to HTML file.
UG documentation :
Then, the HTML string need to set to the HtmlText property of SfRichTextEditor to view the content in Rich Text Control.
HtmlText API :
Regards,
Dilli babu.
Hi there, I would like to subscribe for this web site to obtain most
recent updates, therefore where can i do it please assist.
Hi Jared,
Thank you for showing up interest in reading our blog posts.
You either subscribe to our mail update or subscribe our RSS feed to get updates of the new posts in our site. You can find both these options in the right pane of the blog post, above the Popular Now blog list.
Thanks,
Suresh
Can we create custom menus or buttons for other operation
Hi Vikas,
Thank you for contacting Syncfusion support.
Currently, SfRichTextEditor doesn’t provide support to add a custom menus or buttons to the toolbar. We have already logged a feature report in feedback portal. We will implement this feature in any of upcoming release. The status of this feature can be tracked using the following feedback portal link.
Regards,
Dilli babu.
Hi i want to use syncfusion richtext editior for unoplatform. is it possible?
Hi Hedelin,
Thank you for your interest in Syncfusion controls.
At present, we do not have Rich Text Editor for UNO platform. We will consider your request and will implement the control in any of our future release.
Thanks,
Suresh
|
https://www.syncfusion.com/blogs/post/introducing-xamarin-forms-rich-text-editor.aspx
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Method to detect and enable removal of doublets from single-cell RNA-sequencing.
Project description
DoubletDetection
DoubletDetection is a Python3 package to detect doublets (technical errors) in single-cell RNA-seq count matrices.
Installing DoubletDetection
Install from PyPI
pip install doubletdetection
Install from source
git clone cd DoubletDetection pip3 install .
If you are using
pipenv as your virtual environment, it may struggle installing from the setup.py due to our custom Phenograph requirement.
If so, try the following in the cloned repo:
pipenv run pip3 install .
Running DoubletDetection
To run basic doublet classification:
import doubletdetection clf = doubletdetection.BoostClassifier() # raw_counts is a cells by genes count matrix labels = clf.fit(raw_counts).predict()
raw_countsis a scRNA-seq count matrix (cells by genes), and is array-like
labelsis a 1-dimensional numpy ndarray with the value 1 representing a detected doublet, 0 a singlet, and
np.nanan ambiguous cell.
The classifier works best when
- There are several cell types present in the data
- It is applied individually to each run in an aggregated count matrix
In
v2.5 we have added a new experimental clustering method (
scanpy's Louvain clustering) that is much faster than phenograph. We are still validating results from this new clustering. Please see the notebook below for an example of using this new feature.
See our jupyter notebook for an example on 8k PBMCs from 10x.
Obtaining data
Data can be downloaded from the 10x website.
Credits and citations
Gayoso, Adam, Shor, Jonathan, Carr, Ambrose J., Sharma, Roshan, Pe'er, Dana (2018, July 17). DoubletDetection (Version v2.4). Zenodo.
We also thank the participants of the 1st Human Cell Atlas Jamboree, Chun J. Ye for providing data useful in developing this method, and Itsik Pe'er for providing guidance in early development as part of the Computational genomics class at Columbia University.
This project is licensed under the terms of the MIT license.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/doubletdetection/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
A simple wrapper around the Communardo Metadata REST API.
Project description
Communardo Metadata Python Library
This is a simple wrapper around the REST API which the Communardo Metadata plugin for Confluence provides.
Installation
Install from pypi use: ~~ pip install communardo-metadata ~~
Usage
from communardo.metadata.client import MetadataClient with MetadataClient("", ("user", "pass")) as client: metadata_results = client.search(cql="ID=1")
Development and Deployment
See the Contribution guidelines for this project for details on how to make changes to this library.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/communardo-metadata/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
2.5.2 DFS Use Cases
The Distributed File System (DFS) functions provide the ability to logically group shares on multiple servers and to transparently link shares into a single, hierarchical namespace. DFS organizes shared resources on a network in a treelike structure. This section provides a series of use cases for namespace configuration and management.
The following diagram shows the DFS use cases that are described in detail in the following sections.
Figure 6: DFS use cases
|
https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-fsmod/b9527bb7-5280-4901-bc9b-97513996955a
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
HAppS tutorial 0.9
Disclaimer: This is a draft.
Hello World
import HAppS import HAppS.Server.StdConfig main = start serverConfig{http=impl} impl =[method GET $ ok "Hello World!"] -- GET / returns "HTTP/1.0 200 OK\nContent-Type: text/plain; charset=utf-8\n\nHello World!"
This simple app will respond with "Hello World!" in plain text when you access the root with a GET request.
- start takes a configuration value of type (StdConfig a), where a is a suitable type to be used as the state of our application, and starts up the framework, so we only need to specify this configuration.
- serverConfig is an "empty" configuration that we can use to build our one, it's of type StdConfig (), so the state is of type (), i.e. we won't use it.
But we substitute the http field with our impl, so the app actually does what we want.
- impl is of type [ServerPart IO], where type ServerPart m = Reader Request (m (Maybe Result))
So it's a list of handlers, that are functions from Request to IO (Maybe Result) packed in the Reader monad. Request is an HTTP request and Result an HTTP response. For every request the framework tries each handler until it finds one that returns a Result and sends that back to the client.
- Here we've only one handler: the expression |method GET $ ok "Hello World!"|.
- method :: (Monad m, MatchMethod method) => method -> m Result -> ServerPart m
The function method takes the HTTP-method to match on and a (m Result), and constructs a (ServerPart m). This (ServerPart m) will return the result when the request is of the specified method and the URI path has been consumed: in this example only if the client requests /. Also m = IO here.
- the IO Result is built by the expression ok "Hello World!":
- ok :: (ToMessage mess) => mess -> IO Result
For every type in the ToMessage class, ok can take a value of that type and construct a Result in the IO monad. So since the instance of ToMessage for String is defined to construct a plain/text that simply contains the String, our response is indeed what we said earlier.
|
https://wiki.haskell.org/index.php?title=HAppS_tutorial_0.9&oldid=16223&printable=yes
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
In this post I discuss some of the changes you might need to make in integration test code that uses
WebApplicationFactory<> or
TestServer when upgrading to ASP.NET Core 3.0.
One of the biggest changes in ASP.NET Core 3.0 was converting it to run on top of the Generic Host infrastructure, instead of the WebHost. I've addressed that change a couple of times in this series, as well is in my series on exploring ASP.NET Core 3.0. This change also impacts other peripheral infrastructure like the
TestServer used for integration testing.
Integration testing with the Test Host and TestServer
ASP.NET Core includes a library Microsoft.AspNetCore.TestHost which contains an in-memory web host. This lets you send HTTP requests to your server without the latency or hassle of sending requests over the network.
The terminology is a little confusing here - the in-memory host and NuGet package is often referred to as the "TestHost" but the actual class you use in your code is
TestServer. The two are often used interchangeably.
In ASP.NET Core 2.x you could create a test server by passing a configured instance of
IWebHostBuilder to the
TestServer constructor:
public class TestHost2ExampleTests { [Fact] public async Task ShouldReturnHelloWorld() { // Build your "app" var webHostBuilder = new WebHostBuilder() .Configure(app => app.Run(async ctx => await ctx.Response.WriteAsync("Hello World!") )); // Configure the in-memory test server, and create an HttpClient for interacting with it var server = new TestServer(webHostBuilder); HttpClient client = server.CreateClient(); // Send requests just as if you were going over the network var response = await client.GetAsync("/"); response.EnsureSuccessStatusCode(); var responseString = await response.Content.ReadAsStringAsync(); Assert.Equal("Hello World!", responseString); } }
In the example above, we create a basic
WebHostBuilder that returns
"Hello World!" to all requests. We then create an in-memory server using
TestServer:
var server = new TestServer(webHostBuilder);
Finally, we create an
HttpClient that allows us to send HTTP requests to the in-memory server. You can use this
HttpClient exactly as you would if you were sending requests to an external API:
var client = server.CreateClient(); var response = await client.GetAsync("/");
In .NET core 3.0, this pattern is still the same generally, but is made slightly more complicated by the move to the generic host.
TestServer in .NET Core 3.0
To convert your .NET Core 2.x test project to .NET Core 3.0, open the test project's .csproj, and change the
<TargetFramework> element to
netcoreapp3.0. Next, replace the
<PackageReference> for Microsoft.AspNetCore.App with a
<FrameworkReference>, and update any other package versions to
3.0.0.
If you take the exact code written above, and convert your project to a .NET Core 3.0 project, you'll find it runs without any errors, and the test above will pass. However that code is using the old
WebHost rather than the new generic
Host-based server. Lets convert the above code to use the generic host instead.
First, instead of creating a
WebHostBuilder instance, create a
HostBuilder instance:
var hostBuilder = new HostBuilder();
The
HostBuilder doesn't have a
Configure() method for configuring the middleware pipeline. Instead, you need to call
ConfigureWebHost(), and call
Configure() on the inner
IWebHostBuilder. The equivalent becomes:
var hostBuilder = new HostBuilder() .ConfigureWebHost(webHost => webHost.Configure(app => app.Run(async ctx => await ctx.Response.WriteAsync("Hello World!") )));
After making that change, you have another problem - the
TestServer constructor no longer compiles:
The
TestServer constructor takes an
IWebHostBuilder instance, but we're using the generic host, so we have an
IHostBuilder. It took me a little while to discover the solution to this one, but the answer is to not create a
TestServer manually like this at all. Instead you have to:
- Call
UseTestServer()inside
ConfigureWebHostto add the
TestServerimplementation.
- Build and start an
IHostinstance by calling
StartAsync()on the
IHostBuilder
- Call
GetTestClient()on the started
IHostto get an
HttpClient
That's quite a few additions, so the final converted code is shown below:
public class TestHost3ExampleTests { [Fact] public async Task ShouldReturnHelloWorld() { var hostBuilder = new HostBuilder() .ConfigureWebHost(webHost => { // Add TestServer webHost.UseTestServer(); webHost.Configure(app => app.Run(async ctx => await ctx.Response.WriteAsync("Hello World!"))); }); // Build and start the IHost var host = await hostBuilder.StartAsync(); // Create an HttpClient to send requests to the TestServer var client = host.GetTestClient(); var response = await client.GetAsync("/"); response.EnsureSuccessStatusCode(); var responseString = await response.Content.ReadAsStringAsync(); Assert.Equal("Hello World!", responseString); } }
If you forget the call to
UseTestServer()you'll see an error like the following at runtime:
System.InvalidOperationException : Unable to resolve service for type 'Microsoft.AspNetCore.Hosting.Server.IServer' while attempting to activate 'Microsoft.AspNetCore.Hosting.GenericWebHostService'.
Everything else about interacting with the
TestServer is the same at this point, so you shouldn't have any other issues.
Integration testing with WebApplicationFactory
Using the
TestServer directly like this is very handy for testing "infrastructural" components like middleware, but it's less convenient for integration testing of actual apps. For those situations, the Microsoft.AspNetCore.Mvc.Testing package takes care of some tricky details like setting the
ContentRoot path, copying the .deps file to the test project's bin folder, and streamlining
TestServer creation with the
WebApplicationFactory<> class.
The documentation for using
WebApplicationFactory<> is generally very good, and appears to still be valid for .NET Core 3.0. However my uses of
WebApplicationFactory were such that I needed to make a few tweaks when I upgraded from ASP.NET Core 2.x to 3.0.
Adding XUnit logging with WebApplicationFactory in ASP.NET Core 2.x
For the examples in the rest of this post, I'm going to assume you have the following setup:
- A .NET Core Razor Pages app created using
dotnet new webapp
- An integration test project that references the Razor Pages app.
You can find an example of this in the GitHub repo for this post.
If you're not doing anything fancy, you can use the
WebApplicationFactory<> class in your tests directly as described in the documentation. Personally I find I virtually always want to customise the
WebApplicationFactory<>, either to replace services with test versions, to automatically run database migrations, or to customise the
IHostBuilder further.
One example of this is hooking up the xUnit
ITestOutputHelper to the fixture's
ILogger infrastructure, so that you can see the
TestServer's logs inside the test output when an error occurs. Martin Costello has a handy NuGet package, MartinCostello.Logging.XUnit that makes doing this a couple of lines of code.
The following example is for an ASP.NET Core 2.x app:
public class ExampleAppTestFixture : WebApplicationFactory<Program> { // Must be set in each test public ITestOutputHelper Output { get; set; } protected override IWebHostBuilder CreateWebHostBuilder() { var builder = base.CreateWebHostBuilder(); builder.ConfigureLogging(logging => { logging.ClearProviders(); // Remove other loggers logging.AddXUnit(Output); // Use the ITestOutputHelper instance }); return builder; } protected override void ConfigureWebHost(IWebHostBuilder builder) { // Don't run IHostedServices when running as a test builder.ConfigureTestServices((services) => { services.RemoveAll(typeof(IHostedService)); }); } }
This
ExampleAppTestFixture does two things:
- It removes any configured
IHostedServices from the container so they don't run during integration tests. That's often a behaviour I want, where background services are doing things like pinging a monitoring endpoint, or listening/dispatching messages to RabbitMQ/KafKa etc
- Hook up the xUnit log provider using an
ITestOutputHelperproperty.
To use the
ExampleAppTestFixture in a test, you must implement the
IClassFixture<T> interface on your test class, inject the
ExampleAppTestFixture as a constructor argument, and hook up the
Output property.
public class HttpTests: IClassFixture<ExampleAppTestFixture>, IDisposable { readonly ExampleAppTestFixture _fixture; readonly HttpClient _client; public HttpTests(ExampleAppTestFixture fixture, ITestOutputHelper output) { _fixture = fixture; fixture.Output = output; _client = fixture.CreateClient(); } public void Dispose() => _fixture.Output = null; [Fact] public async Task CanCallApi() { var result = await _client.GetAsync("/"); result.EnsureSuccessStatusCode(); var content = await result.Content.ReadAsStringAsync(); Assert.Contains("Welcome", content); } }
This test requests the home page for the RazorPages app, and looks for the string
"Welcome" in the body (it's in an
<h1> tag). The logs generated by the app are all piped to xUnit's output, which makes it easy to understand what's happened when an integration test fails:
[2019-10-29 18:33:23Z] info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down. [2019-10-29 18:33:23Z] info: Microsoft.Hosting.Lifetime[0] Hosting environment: Development ... [2019-10-29 18:33:23Z] info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1] Executed endpoint '/Index' [2019-10-29 18:33:23Z] info: Microsoft.AspNetCore.Hosting.Diagnostics[2] Request finished in 182.4109ms 200 text/html; charset=utf-8
Using WebApplicationFactory in ASP.NET Core 3.0
On the face of it, it seems like you don't need to make any changes after converting your integration test project to target .NET Core 3.0. However, you may notice something strange - the
CreateWebHostBuilder() method in the custom
ExampleAppTestFixture is never called!
The reason for this is that
WebApplicationFactory supports both the legacy
WebHost and the generic
Host. If the app you're testing uses a
WebHostBuilder in Program.cs, then the factory calls
CreateWebHostBuilder() and runs the overridden method. However if the app you're testing uses the generic
HostBuilder, then the factory calls a different method,
CreateHostBuilder().
To update the factory, rename
CreateWebHostBuilder to
CreateHostBuilder, change the return type from
IWebHostBuilder to
IHostBuilder, and change the
base method call to use the generic host method. Everything else stays the same:
public class ExampleAppTestFixture : WebApplicationFactory<Program> { public ITestOutputHelper Output { get; set; } // Uses the generic host protected override IHostBuilder CreatHostBuilder() { var builder = base.CreateHostBuilder(); builder.ConfigureLogging(logging => { logging.ClearProviders(); // Remove other loggers logging.AddXUnit(Output); // Use the ITestOutputHelper instance }); return builder; } protected override void ConfigureWebHost(IWebHostBuilder builder) { builder.ConfigureTestServices((services) => { services.RemoveAll(typeof(IHostedService)); }); } }
Notice that the
ConfigureWebHostmethod doesn't change - that is invoked in both cases, and still takes an
IWebHostBuilderargument.
After updating your fixture you should find your logging is restored, and your integration tests should run as they did before the migration to the generic host.
Summary
In this post I described some of the changes required to your integration tests after moving an application from ASP.NET Core 2.1 to ASP.NET Core 3.0. These changes are only required if you actually migrate to using the generic
Host instead of the
WebHost. If you are moving to the generic host then you will need to update any code that uses either the
TestServer or
WebApplicationFactory.
To fix your
TestServer code, call
UseTestServer() inside the
HostBuilder.ConfigureWebHost() method. Then build your
Host, and call
StartAsync() to start the host. Finally, call
IHost.GetTestClient() to retrieve an
HttpClient that can call your app.
To fix your custom
WebApplicationFactory, make sure you override the correct builder method. If your app uses the
WebHost, override the
CreateWebHostBuilder method. After moving to the generic
Host, override the
CreateWebHostBuilder method.
|
https://andrewlock.net/converting-integration-tests-to-net-core-3/
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Hello,
I have recently read a very good article about NVME Namespaces and VSAN...
This thing is very interesting. It limit/makes more difficult the future expandibility of a single node retaining the balance, but until that point it will give you a lot of performance, bypassing the actual limitation of about 150-170kiops per disk group.
The main doubt is: is this officially supported? I haven't found any statement regarding, and also I have some doubt about the durability of a NVME drive for cache, because the spare space (all the space over the 600GB limit) will be used for other namespaces and not for spare, so the drive life will not be predictable from VMware when they certify a drive for cache thanks to his endurance ratio.
It will be solvable only if VMware reconsider the HCL exliciting the max usable space of a cache drive..
Does anyone know anything about this?
Thanks
Manuel
Hello Manuel,
"The main doubt is: is this officially supported? I haven't found any statement regarding"
Not to be a buzzkill, but it is literally the first thing mentioned in that article:
"Disclaimer: The technology tested here is not currently supported by VMware and may never be. This is not recommended for use in any production environment, especially where VMware support is required.."
"and also I have some doubt about the durability of a NVME drive for cache, because the spare space (all the space over the 600GB limit)"
How NVMe/SSDs have as high TBW as they do is by using ALL of the device over time via dynamic wear-levelling (+ the extra % that is hidden from the user assigned only for this purpose).
Using Cache-tier devices far larger than 600GB isn't that uncommon for the above reason - a good example being VMConAWS nodes (which accounts for likely thousands of nodes) using 1.8TB NVMe as Cache-tier.
While the article you have referenced is a cool example of what is theoretically possible, I would think it more likely that changes to how much of a single device Cache-tier vSAN can actively use being a more feasible option (as this doesn't rely on non-VMware protocols and potentially vendor-specific implementations of nvme namespaces etc.).
Bob
and what about this?
i3en.metal - Enhanced Capacity Uncompromising Performance
this is a official VMware VSAN node with NVME namespaces. Ok, it hardware is fully certified and configured by VMware, but this open a possibility...
Hello ManuelDB
Thanks for that link - oddly I didn't see that in any of my feeds and seemingly been too focused on 7.0 U1 stuff to look at where VMC branch is currently.
Fair enough, that is potential progress toward such implementations being supported on vSAN in general, but I wouldn't be so hasty to jump to the conclusion that this feature is a certainty for regular on-prem vSAN (or that it will take the same form) - while VMConAWS uses vSAN, it isn't 'vanilla-vSAN' (as is released publicly and currently), also features can get added and removed on it so time will tell if this persists.
Also, one has to consider that from a support/repair/redundancy perspective VMConAWS is a lot different than your average cluster (on-prem or other hosted(that I am aware of)) in that a failed node is replaced rapidly - so while splitting a single NVMe into 4 Cache-tier devices looks great from a performance/utilisation perspective, this comes with the obvious trade-off in that if that 1 device fails you lose 4 Disk-Groups.
While this is fine for VMConAWS where this sort of thing is reacted to rapidly, I would have concern for smaller clusters and more specifically those that didn't design for N+1 nodes (relative to their Storage Policy node-count requirements), who knows maybe such things will be a support requirement for implementing this (should it ever be part of a future release).
Bob
|
https://communities.vmware.com/t5/VMware-vSAN-Discussions/VSAN-NVME-Namespaces/m-p/2301458
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
According to Stack Overflow’s Annual Survey of 2018, JavaScript becomes the most commonly used programming language, for the six years in a row. Let's face it, JavaScript is a cornerstone of your Full Stack Developer skills and can't be avoided on any Developer's Interview. Follow through and read the FullStack.Cafe compilation of the most common and tricky JavaScript Interview Questions and Answers to land your next dream job.
🔴 Originally published on FullStack.Cafe - Kill Your Tech & Coding Interview
Q1: What is Coercion in JavaScript?
Topic: JavaScript
Difficulty:
In JavaScript conversion between different two build-in types called
coercion. Coercion comes in two forms in JavaScript: explicit and implicit.
Here's an example of explicit coercion:
var a = "42"; var b = Number( a ); a; // "42" b; // 42 -- the number!
And here's an example of implicit coercion:
var a = "42"; var b = a * 1; // "42" implicitly coerced to 42 here a; // "42" b; // 42 -- the number!
🔗 Source: FullStack.Cafe
Q2: What is Scope in JavaScript?
Topic: JavaScript
Difficulty: ⭐
In JavaScript, each function gets its own scope. Scope is basically a collection of variables as well as the rules for how those variables are accessed by name. Only code inside that function can access that function's scoped variables.
A variable name has to be unique within the same scope. A scope can be nested inside another scope. If one scope is nested inside another, code inside the innermost scope can access variables from either scope.
🔗 Source: FullStack.Cafe
Q3: Explain equality in JavaScript
Topic: JavaScript
Difficulty: ⭐
JavaScript has both strict and type–converting comparisons:
- Strict comparison (e.g., ===) checks for value equality without allowing coercion
- Abstract comparison (e.g. ==) checks for value equality with coercion allowed
var a = "42"; var b = 42; a == b; // true a === b; // false
Some simple equalityrules:
-.
🔗 Source: FullStack.Cafe
Q4: Explain what a callback function is and provide a simple example.
Topic: JavaScript
Difficulty: ⭐⭐); });
🔗 Source: coderbyte.com
Q5: What does "use strict" do?
Topic: JavaScript
Difficulty: ⭐⭐; }`
It will throw an error because
x was not defined and it is being set to some value in the global scope, which isn't allowed with
use strict The small change below fixes the error being thrown:
function doSomething(val) { "use strict"; var x = val + 10; }
🔗 Source: coderbyte.com
Q6: Explain Null and Undefined in JavaScript
Topic: JavaScript
Difficulty: ⭐⭐
JavaScript (and by extension TypeScript) has two bottom types:
null and
undefined. They are intended to mean different things:
- Something hasn't been initialized :
undefined.
- Something is currently unavailable:
null.
🔗 Source: FullStack.Cafe
Q7: Write a function that would allow you to do this.
Topic: JavaScript
Difficulty: ⭐⭐
var addSix = createBase(6); addSix(10); // returns 16 addSix(21); // returns 27);
🔗 Source: coderbyte.com
Q8: Explain Values and Types in JavaScript
Topic: JavaScript
Difficulty: ⭐⭐
JavaScript has typed values, not typed variables. The following built-in types are available:
string
number
boolean
nulland
undefined
object
symbol(new to ES6)
🔗 Source: FullStack.Cafe
Q9: Explain event bubbling and how one may prevent it
Topic: JavaScript
Difficulty: ⭐⭐
Event bubbling is the concept in which an event triggers at the deepest possible element, and triggers on parent elements in nesting order. As a result, when clicking on a child element one may exhibit the handler of the parent activating.
One way to prevent event bubbling is using
event.stopPropagation() or
event.cancelBubble on IE < 9.
🔗 Source:
Q10: What is let keyword in JavaScript?
Topic: JavaScript
Difficulty: ⭐⭐
In addition to creating declarations for variables at the function level, ES6 lets you declare variables to belong to individual blocks (pairs of { .. }), using the
let keyword.
🔗 Source: github.com/getify
Q11: How would you check if a number is an integer?
Topic: JavaScript
Difficulty: ⭐⭐
🔗 Source: coderbyte.com
Q12: What is IIFEs (Immediately Invoked Function Expressions)?
Topic: JavaScript
Difficulty: ⭐⭐⭐
It’s an Immediately-Invoked Function Expression, or IIFE for short. It executes immediately after it’s created:
(function IIFE(){ console.log( "Hello!" ); })(); // "Hello!"
This pattern is often used when trying to avoid polluting the global namespace, because all the variables used inside the IIFE (like in any other normal function) are not visible outside its scope.
🔗 Source: stackoverflow.com
Q13: How to compare two objects in JavaScript?
Topic: JavaScript
Difficulty: ⭐⭐⭐
Two non-primitive values, like objects (including function and array) held by reference, so both
== and
=== comparisons will simply check whether the references match, not anything about the underlying values.
For example,
arrays are by default coerced to strings by simply joining all the values with commas (
,) in between. So two arrays with the same contents would not be
== equal:
var a = [1,2,3]; var b = [1,2,3]; var c = "1,2,3"; a == c; // true b == c; // true a == b; // false
For deep object comparison use external libs like
deep-equal or implement your own recursive equality algorithm.
🔗 Source: FullStack.Cafe
Q14: Could you explain the difference between ES5 and ES6
Topic: JavaScript
Difficulty: ⭐⭐⭐
ECMAScript 5 (ES5): The 5th edition of ECMAScript, standardized in 2009. This standard has been implemented fairly completely in all modern browsers
ECMAScript 6 (ES6)/ ECMAScript 2015 (ES2015): The 6th edition of ECMAScript, standardized in 2015. This standard has been partially implemented in most modern browsers.
Here are some key differences between ES5 and ES6:
- Arrow functions & string interpolation: Consider:
const greetings = (name) => { return `hello ${name}`; }
and even:
const greetings = name => `hello ${name}`;
- Const. Const works like a constant in other languages in many ways but there are some caveats. Const stands for ‘constant reference’ to a value. So with const, you can actually mutate the properties of an object being referenced by the variable. You just can’t change the reference itself.
const NAMES = []; NAMES.push("Jim"); console.log(NAMES.length === 1); // true NAMES = ["Steve", "John"]; // error
- Block-scoped variables. The new ES6 keyword
letallows developers to scope variables at the block level.
Letdoesn’t hoist in the same way
vardoes.
- Default parameter values Default parameters allow us to initialize functions with default values. A default is used when an argument is either omitted or undefined — meaning null is a valid value.
// Basic syntax function multiply (a, b = 2) { return a * b; } multiply(5); // 10
Class Definition and Inheritance
ES6 introduces language support for classes (
classkeyword), constructors (
constructorkeyword), and the
extendkeyword for inheritance.
for-of operator
The for...of statement creates a loop iterating over iterable objects.
Spread Operator
For objects merging
const obj1 = { a: 1, b: 2 } const obj2 = { a: 2, c: 3, d: 4} const obj3 = {...obj1, ...obj2}
- Promises Promises provide a mechanism to handle the results and errors from asynchronous operations. You can accomplish the same thing with callbacks, but promises provide improved readability via method chaining and succinct error handling.
const isGreater = (a, b) => { return new Promise ((resolve, reject) => { if(a > b) { resolve(true) } else { reject(false) } }) } isGreater(1, 2) .then(result => { console.log('greater') }) .catch(result => { console.log('smaller') })
- Modules exporting & importing Consider module exporting:
const myModule = { x: 1, y: () => { console.log('This is ES5') }} export default myModule;
and importing:
import myModule from './myModule';
Q15: Explain the difference between "undefined" and "not defined" in JavaScript
Topic: JavaScript
Difficulty: ⭐⭐⭐
In JavaScript if you try to use a variable that doesn't exist and has not been declared, then JavaScript will throw an error
var name is not defined and the script will stop execute thereafter. But If you use
typeof undeclared_variable then it will return
undefined.
Before starting further discussion let's understand the difference between declaration and definition.
var x is a declaration because you are not defining what value it holds yet, but you are declaring its existence and the need of memory allocation.
var x; // declaring x console.log(x); //output: undefined
var x = 1 is both declaration and definition (also we can say we are doing initialisation), Here declaration and assignment of value happen inline for variable x, In JavaScript every variable declaration and function declaration brings to the top of its current scope in which it's declared then assignment happen in order this term is called
hoisting.
A variable that is declared but not define and when we try to access it, It will result
undefined.
var x; // Declaration if(typeof x === 'undefined') // Will return true
A variable that neither declared nor defined when we try to reference such variable then It result
not defined.
console.log(y); // Output: ReferenceError: y is not defined
🔗 Source: stackoverflow.com
Q16: What is the difference between anonymous and named functions?
Topic: JavaScript
Difficulty: ⭐⭐⭐
Consider:
var foo = function() { // anonymous function assigned to variable foo // .. }; var x = function bar(){ // named function (bar) assigned to variable x // .. }; foo(); // actual function execution x();
🔗 Source: FullStack.Cafe
Q17: What is “closure” in javascript? Provide an example?
Topic: JavaScript
Difficulty: ⭐⭐⭐⭐
🔗 Source: github.com/ganqqwerty
Q18: How would you create a private variable in JavaScript?
Topic: JavaScript
Difficulty: ⭐⭐⭐⭐
To create a private variable in JavaScript that cannot be changed you need to create it as a local variable within a function. Even if the function is executed the variable cannot be accessed outside of the function. For example:
function func() { var priv = "secret code"; } console.log(priv); // throws error
To access the variable, a helper function would need to be created that returns the private variable.
function func() { var priv = "secret code"; return function() { return priv; } } var getPriv = func(); console.log(getPriv()); // => secret code
🔗 Source: coderbyte.com
Q19: Explain the Prototype Design Pattern
Topic: JavaScript
Difficulty: ⭐⭐⭐⭐
The Prototype Pattern creates new objects, but rather than creating non-initialized objects it returns objects that are initialized with values it copied from a prototype - or sample - object. The Prototype pattern is also referred to as the Properties pattern.
An example of where the Prototype pattern is useful is the initialization of business objects with values that match the default values in the database. The prototype object holds the default values that are copied over into a newly created business object.
Classical languages rarely use the Prototype pattern, but JavaScript being a prototypal language uses this pattern in the construction of new objects and their prototypes.
🔗 Source: dofactory.com
Q20: Check if a given string is a isomorphic
Topic: JavaScript
Difficulty: ⭐⭐⭐⭐
For two strings to be isomorphic, all occurrences of a character in string A can be replaced with another character to get string B. The order of the characters must be preserved. There must be one-to-one mapping for ever char of string A to every char of string B.
paperand
titlewould return true.
eggand
sadwould return false.
dggand
addwould return true.
isIsomorphic("egg", 'add'); // true isIsomorphic("paper", 'title'); // true isIsomorphic("kick", 'side'); // false function isIsomorphic(firstString, secondString) { // Check if the same lenght. If not, they cannot be isomorphic if (firstString.length !== secondString.length) return false var letterMap = {}; for (var i = 0; i < firstString.length; i++) { var letterA = firstString[i], letterB = secondString[i]; // If the letter does not exist, create a map and map it to the value // of the second letter if (letterMap[letterA] === undefined) { letterMap[letterA] = letterB; } else if (letterMap[letterA] !== letterB) { // Eles if letterA already exists in the map, but it does not map to // letterB, that means that A is mapping to more than one letter. return false; } } // If after iterating through and conditions are satisfied, return true. // They are isomorphic return true; }
🔗 Source:
Q21: What does the term "Transpiling" stand for?
Topic: JavaScript
Difficulty: ⭐⭐⭐⭐
There's no way to polyfill new syntax that has been added to the language. So the better option is to use a tool that converts your newer code into older code equivalents. This process is commonly called transpiling, a term for transforming + compiling.
Typically you insert the transpiler into your build process, similar to your code linter or your minifier.
There are quite a few great transpilers for you to choose from:
- Babel: Transpiles ES6+ into ES5
- Traceur: Transpiles ES6, ES7, and beyond into ES5
🔗 Source: You Don't Know JS, Up &going
Q22: How does the “this” keyword work? Provide some code examples.
Topic: JavaScript
Difficulty: ⭐⭐⭐⭐
🔗 Source: quirksmode.org
Q23: How would you add your own method to the Array object so the following code would work?
Topic: JavaScript
Difficulty: ⭐⭐⭐⭐
🔗 Source: coderbyte.com
Q24: What is Hoisting in JavaScript?
Topic: JavaScript
Difficulty: ⭐⭐⭐⭐
Hoisting is the JavaScript interpreter's action of moving all variable and function declarations to the top of the current scope. There are two types of hoisting:
- variable hoisting - rare
- function hoisting - more common
Wherever a
var (or function declaration) appears inside a scope, that declaration is taken to belong to the entire scope and accessible everywhere throughout.
var a = 2; foo(); // works because `foo()` // declaration is "hoisted" function foo() { a = 3; console.log( a ); // 3 var a; // declaration is "hoisted" // to the top of `foo()` } console.log( a ); // 2
🔗 Source: FullStack.Cafe
Q25: What will the following code output?
Topic: JavaScript
Difficulty: ⭐⭐⭐⭐
0.1 + 0.2 === 0.3
This will surprisingly output
false because of floating point errors in internally representing certain numbers.
0.1 + 0.2 does not nicely come out to
0.3 but instead the result is actually
0.30000000000000004 because the computer cannot internally represent the correct number. One solution to get around this problem is to round the results when doing arithmetic with decimal numbers.
🔗 Source: coderbyte.com
Q26: Describe the Revealing Module Pattern design pattern
Topic: JavaScript
Difficulty: ⭐⭐⭐⭐⭐
An obvious disadvantage of it is unable to reference the private methods
Thanks 🙌 for reading and good luck on your interview!
Please share this article with your fellow devs if you like it!
Check more FullStack Interview Questions & Answers on 👉
Discussion (18)
Great article!
For "Q11: How would you check if a number is an integer?" I recommend using:
As far as you are not tageting IE.
caniuse.com/#search=isInteger
nice, I was surprised most by the question 0.1+0.2===0.3.
This is what always haunts me and i'm not sure if that's the case with any other language? I used Python, C, C++ and VBA but such stuff is unseen there.
it's same in Python, C and C++
it's same in c# too but in goLang 0.1 + 0.2 == 0.3 is true :)
i would like more of the articles that address performance of javascript under the hood like this one.
Q6: Explain Null and Undefined in JavaScript
Bonus point pointing out Null is an object, it's null-ness still takes up memory
hi,it is a good article.but it has a problem:
Q20:
for your function ,it seemed wrong:
isIsomorphic('sad', 'egg') !== isIsomorphic('egg', 'sad')
my function:
It's great article indeed! , I shared in linkedIn and twitter, I am sure this will help javascript developers in their interviews.
good article! I learned a few things
Q22 is not completely right according to me .
thismay refer the context of the definition with the arrow function.
(lexical scoping vs dynamic scoping)
Great Article. Thank you Dev Alex.👍
Awesome walk through Alex!
Totally saving this one!
Great article I really needed that quick revision 👍
Great article Alex!!
Excellent and insightful article!
Great article! Thanks.
I love this article. nice one!
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/fullstackcafe/top-26-javascript-interview-questions-i-wish-i-knew-26k1
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
In this tutorial, I’ll show you how to control an Arduino-based robot including two DC motors via the official PS4 Bluetooth-based DualShock joystick controller by using USB host, Arduino, and a USB Bluetooth Dongle. Before diving into this project, collect the necessary hardware.
Collect the Hardware
- Arduino Pro Mini
- PS4 Bluetooth Controller
- USB Host Shield Mini
- FT232RL
- Arduino Robot Kit
- L298N Motor Driver
The necessary hardware.
Arduino Pro Mini
If we talk about Arduino Mini, it’s basically an Arduino controller without the programming chip installed on it. The reason why Arduino removed the programming chip is simply to reduce the size and cost of PRO Mini. Also once you have one programming device, it can then be reused for other PRO Mini projects, which is quite common.
Arduino Pro Mini
What is USB host?
USB Host is an adapter for USB devices. You can use it to connect an Arduino board to a USB device, such as a USB joystick, a mouse, or a thumb drive.
For example, you can control your Arduino robot with your own USB based joystick gamepad. We can connect any USB slave device easily with an Arduino microcontroller such as a USB mouse, USB keyboard, USB printer, USB mass storage, and more.
The USB Host might need analog/digital converters necessary to develop a full-speed USB peripheral/host controller with Arduino microcontroller.
The USB host shield.
Features
- USB 2.0 Full Speed compatible
- 3.3/5V operation level compatible
- All GPIOx pins break-out
- USB Host 5V/500mA supply for USB protocol
Specifications
- Operating Voltage: 5V/3V
- Max Current: 500mA only when Arduino is connected with a good power supply
- Max Current: 400mA only when Arduino is powered from its USB port
- USB Controller: MAX3421E
To get this project working, you will need the USB Host Shield for the Arduino and a Bluetooth dongle. I used the USB host shield from Arduino but it’s now discontinued. You can utilize Sparkfun’s USB shield instead.
The USB shield is necessary because we need to connect a Bluetooth based PS4 Controller to the Arduino, which doesn’t have a Bluetooth receiver. Then you can use the included library to pair with the Bluetooth based PS4 controller. And once the Bluetooth connection is established with the PS4 controller, we can use simple functions to read the state of the device.
Also, all of the code is open source. If you want to experiment more with the USB Host by yourself, [Kristian]’s work could be helpful to get started. All of the source code is available on Github. The example sketch also shows how easy it is to add a PS4 controller into your own Arduino project. For this project, we will be adapting this code for controlling the motor of an Arduino Robot car using the left joystick of a PS4 controller.
Assembling the Hardware
If you are using a different version of Arduino, then you can simply follow the pinout below for wiring it to your own Arduino board.
The pinout for wiring your Arduino.
if you have an Arduino Pro Mini, assemble the hardware as shown in the image below. Solder the female headers to your USB host, male headers for Arduino Pro Mini and assemble the Arduino micro board at the top of the USB host shield into their female headers.
The Arduino Pro Mini and USB host shield.
Another angle of the connected Arduino Pro Mini and USB host shield.
After assembling the USB Host Shield with Arduino Mini, add the connections for the L298N motor driver module, by following this Fritzing diagram:
The Fritzing diagram
If you are not using an Arduino Mini or not utilizing the shield setup, these are the connections you need to the USB host:
Fritzing diagram if you are not using an Arduino Mini or the shield setup.
The properly connected hardware.
Upload Source code
The following connections will be used for uploading the source code:
Arduino Pro --> FTDI
- GND --> GND
- CTS --> GND
- VCC --> VCC
- TXD --> RXD
- RXD --> TXD
- DTR --> DTR
Or you can simply add a female header to the Arduino Pro Mini and plug the FTDI module into the female headers as I did in the image below.
Upload the source code to your Arduino Pro Mini with the following steps. You can find the full source code at the bottom of this page.
After uploading the source code just insert the Bluetooth dongle into the USB host shield and start pairing with your PS4 controller.
Attach the dongle to you completed build.
Pairing the PS4 Controller
In order to pair the Bluetooth based PS4 controller with Arduino. Set the gamepad controller into pairing mode by pressing and holding the “PlayStation button” and ”Share button” at the same time. Hold these two buttons until the light on the PS4 controller starts flashing rapidly. Now the Arduino should automatically detect your PS4 controller.
Once the PS4 controller is connected with your Arduino robot, the controller’s light will turn blue.
When pairing is complete, the light will turn blue!
The completed build.
//Source Code #include <PS4BT.h> #include <usbhub.h> // Satisfy the IDE, which needs to see the include statement in the ino too. #ifdef dobogusinclude #include <spi4teensy3.h> #endif #include <SPI.h> int IN1 = 3; //control pin for first motor int IN2 = 4; //control pin for first motor int IN3 = 5; //control pin for second motor int IN4 = 6; //control pin for second motor USB Usb; //USBHub Hub1(&Usb); // Some dongles have a hub inside BTD Btd(&Usb); // You have to create the Bluetooth Dongle instance like so /* You can create the instance of the PS4BT class in two ways */ // This will start an inquiry and then pair with the PS4 controller - you only have to do this once // You will need to hold down the PS and Share button at the same time, the PS4 controller will then start to blink rapidly indicating that it is in pairing mode PS4BT PS4(&Btd, PAIR); // After that you can simply create the instance like so and then press the PS button on the device //PS4BT PS4(&Btd); bool printAngle, printTouch; uint8_t oldL2Value, oldR2Value; void setup() { pinMode(IN1, OUTPUT); pinMode(IN2, OUTPUT); pinMode(IN3, OUTPUT); pinMode(IN4, OUTPUT); Serial.begin(115200); #if !defined(__MIPSEL__) while (!Serial); // Wait for serial port to connect - used on Leonardo, Teensy and other boards with built-in USB CDC serial connection #endif if (Usb.Init() == -1) { Serial.print(F("\r\nOSC did not start")); while (1); // Halt } Serial.print(F("\r\nPS4 Bluetooth Library Started")); } void loop() { Usb.Task(); if (PS4.connected()) { // if (PS4.getAnalogHat(LeftHatX) > 137 || PS4.getAnalogHat(LeftHatX) < 117 || PS4.getAnalogHat(LeftHatY) > 137 || PS4.getAnalogHat(LeftHatY) < 117 || PS4.getAnalogHat(RightHatX) > 137 || PS4.getAnalogHat(RightHatX) < 117 || PS4.getAnalogHat(RightHatY) > 137 || PS4.getAnalogHat(RightHatY) < 117) { if(PS4.getAnalogHat(LeftHatX)<10) { digitalWrite(IN1, HIGH); digitalWrite(IN2, LOW); digitalWrite(IN3, LOW); digitalWrite(IN4, HIGH); Serial.print("\nHatX: 50 "); } else if(PS4.getAnalogHat(LeftHatX)>240) { digitalWrite(IN1, LOW); digitalWrite(IN2, HIGH); digitalWrite(IN3, HIGH); digitalWrite(IN4, LOW); Serial.print("\nHatX: 200 "); } else if(PS4.getAnalogHat(LeftHatY) < 20) { digitalWrite(IN1, HIGH); digitalWrite(IN2, LOW); digitalWrite(IN3, HIGH); digitalWrite(IN4, LOW); Serial.print("\nHatY: 50 "); } else if(PS4.getAnalogHat(LeftHatY) > 200) { digitalWrite(IN1, LOW); digitalWrite(IN2, HIGH); digitalWrite(IN3, LOW); digitalWrite(IN4, HIGH); Serial.print("\nHatY: 200 "); } else { digitalWrite(IN1, LOW); digitalWrite(IN2, LOW); digitalWrite(IN3, LOW); digitalWrite(IN4, LOW); Serial.print("\nrotate the joystick "); } } }
|
https://maker.pro/arduino/projects/how-to-control-an-arduino-robot-with-a-ps4-bluetooth-controller
|
CC-MAIN-2021-10
|
en
|
refinedweb
|
Introduction :
In this tutorial, we will learn how to make one TextInput component to take password inputs. By default, if you enter any text in a TextInput field, it is visible. Converting it to a password field means changing the text not readable to the user.
If you have created password field in any type of application before, you must be aware of that the values are replaced with asterisk symbol as we enter. In react native also, if we convert one TextInput to a password field, it shows asterisk while typing.
secureTextEntry props :
If you add this property as true, it will mark the TextInput as password text input. It obscures the text entered by the user. It’s default value is false. Also, it doesn’t work with multiline inputs.
Example program :
- Create one new React Native project using npx react-native init SampleProject.
- Update App.js file as like below :
* Import _TextInput_ from _react-native_. * Use it in a _View_ * Add _secureTextEntry_ property as _true_.
Below is the full App.js file :
import React from 'react'; import { SafeAreaView, StatusBar, StyleSheet, View, TextInput } from 'react-native'; const App = () => { return ( <> <StatusBar barStyle="dark-content" /> <SafeAreaView> <View style={styles.container}> <TextInput secureTextEntry={true} style={styles.textInput}/> </View> </SafeAreaView> </> ); }; const styles = StyleSheet.create({ container: { alignItems: "center", justifyContent: "center" }, textInput: { width: "90%", height: 50, borderColor: 'black', borderWidth: 2 } }); export default App;
Run it and it will produce password style text input.
|
https://www.codevscolor.com/react-native-password-textinput
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Scrapy Middleware that allows a Scrapy Spider to filter requests.
Project description
Scrapy-link-filter
Spider Middleware that allows a Scrapy Spider to filter requests. There is similar functionality in the CrawlSpider already using Rules and in the RobotsTxtMiddleware, but there are twists. This middleware allows defining rules dinamically per request, or as spider arguments instead of project settings.
Install
This project requires Python 3.6+ and pip. Using a virtual environment is strongly encouraged.
$ pip install git+
Usage
For the middleware to be enabled as a Spider Middleware, it must be added in the project
settings.py:
SPIDER_MIDDLEWARES = { # maybe other Spider Middlewares ... # can go after DepthMiddleware: 900 'scrapy_link_filter.middleware.LinkFilterMiddleware': 950, }
Or, it can be enabled as a Downloader Middleware, in the project
settings.py:
DOWNLOADER_MIDDLEWARES = { # maybe other Downloader Middlewares ... # can go before RobotsTxtMiddleware: 100 'scrapy_link_filter.middleware.LinkFilterMiddleware': 50, }
The rules must be defined either in the spider instance, in a
spider.extract_rules dict, or per request, in
request.meta['extract_rules'].
Internally, the extract_rules dict is converted into a LinkExtractor, which is used to match the requests.
Note that the URL matching is case-sensitive by default, which works in most cases. To enable case-insensitive matching, you can specify a "(?i)" inline flag in the beggining of each "allow", or "deny" rule that needs to be case-insensitive.
Example of a specific allow filter, on a spider instance:
from scrapy.spiders import Spider class MySpider(Spider): extract_rules = {"allow_domains": "example.com", "allow": "/en/items/"}
Or a specific deny filter, inside a request meta:
request.meta['extract_rules'] = { "deny_domains": ["whatever.com", "ignore.me"], "deny": ["/privacy-policy/?$", "/about-?(us)?$"] }
The possible fields are:
allow_domainsand
deny_domains- one, or more domains to specifically limit to, or specifically reject
allowand
deny- one, or more sub-strings, or patterns to specifically allow, or reject
All fields can be defined as string, list, set, or tuple.
License
BSD3 © Cristi Constantin.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/scrapy-link-filter/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
:)
I'm looking at it from a perspective of not polluting global namespace.
Someone who's never heard of it (or even Java beans) might be looking
for validation of their own process of interest, whether it be in a
computing field (HTML validation, signature validation, ...)
or somewhere completely different (Tax return validation,
Fire Safety Compliance Validation ...). A project called "Validation"
will turn up when they google - especially if someone's successive blog
entries are about "[this] Validation" and "my tax return". So the name
should carry some hint about it.
--
Nick Kew
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
|
http://mail-archives.eu.apache.org/mod_mbox/incubator-general/201002.mbox/%3C20100223210327.51e0106d@baldur%3E
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
I use Rectangle Selector from matplotlib to crop some image
It can print the position that I click and release, but I can't save the coordinate data of them.
What should I do to save those coordinate data where I click and release as variables?
For example, after I run the code, it will only print something like(2.14, -0.62) --> (5.86, 0.74). (2.14, -0.62) is the coordinate data of the point I click and (5.86, 0.74) is the point I release. But I hope I can have some variable to save this two coordinate date like: xclick = 2.14, yclick = -0.62, xrelease = 5.86, yrelease = 0.74, but I could not find a way to do it.
I know the x,y coordinate might save in x1,y1,x2,y2, but when I want to call four of them, it return "name 'x1' is not defined" and they are also not in my variables explorer.
Below is my code from matplotlib example:
from __future__ import print_function
from matplotlib.widgets import RectangleSelector
import numpy as np
import matplotlib.pyplot as plt
def line_select_callback(eclick, erelease):
print(eclick.xdata)
'eclick and erelease are the press and release events'
x1, y1 = eclick.xdata, eclick.ydata
x2, y2 = erelease.xdata, erelease.ydata
print("(%3.2f, %3.2f) --> (%3.2f, %3.2f)" % (x1, y1, x2, y2))
print(" The button you used were: %s %s" % (eclick.button,
erelease.button))
def toggle_selector(event):
print(' Key pressed.')
if event.key in ['Q', 'q'] and toggle_selector.RS.active:
print(' RectangleSelector deactivated.')
toggle_selector.RS.set_active(False)
if event.key in ['A', 'a'] and not toggle_selector.RS.active:
print(' RectangleSelector activated.')
toggle_selector.RS.set_active(True)
fig, current_ax = plt.subplots() # make a new plotting range
N = 100000 # If N is large one can see
x = np.linspace(0.0, 10.0, N) # improvement by use blitting!
plt.plot(x, +np.sin(.2*np.pi*x), lw=3.5, c='b', alpha=.7) # plot something
plt.plot(x, +np.cos(.2*np.pi*x), lw=3.5, c='r', alpha=.5)
plt.plot(x, -np.sin(.2*np.pi*x), lw=3.5, c='g', alpha=.3)
print("\n click --> release")
# drawtype is 'box' or 'line' or 'none'
toggle_selector.RS = RectangleSelector(current_ax, line_select_callback,
drawtype='box', useblit=True,
button=[1, 3], # don't use middle button
minspanx=5, minspany=5,
spancoords='pixels',
interactive=True,state_modifier_keys = 'extents')
plt.connect('key_press_event', toggle_selector)
plt.show()
The variables
x1,
y1,
x2 and
y2 are in the local scope of the
line_select_callback function. You may turn them globally accessible using the
global statement.
def line_select_callback(eclick, erelease): global x1, y1, x2, y2 x1, y1 = eclick.xdata, eclick.ydata x2, y2 = erelease.xdata, erelease.ydata
Or you can assign them to a globally defined list,
click = [None,None] release = [None,None] def line_select_callback(eclick, erelease): click[:] = eclick.xdata, eclick.ydata release[:] = erelease.xdata, erelease.ydata
|
https://codedump.io/share/l3Yo08hX24lR/1/i-wants-to-get-coordinate-from-matplotlib-rectangle-selector
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Fill Area Attributes class.
This class is used (in general by secondary inheritance) by many other classes (graphics, histograms). It holds all the fill area attributes.
Fill Area attributes are:
The fill area color is a color index (integer) pointing in the ROOT color table. The fill area color of any class inheriting from
TAttFill can be changed using the method
SetFillColor and retrieved using the method
GetFillColor. The following table shows the first 50 default colors.
SetFillColorAlpha(), allows to set a transparent color. In the following example the fill wheel contains the recommended 216 colors to be used in web applications. The colors in the Color Wheel are created by TColor::CreateColorWheel.
Using this color set for your text, background or graphics will give your application a consistent appearance across different platforms and browsers.
Colors are grouped by hue, the aspect most important in human perception Touching color chips have the same hue, but with different brightness and vividness.
Colors of slightly different hues clash. If you intend to display colors of the same hue together, you should pick them from the same group.
Each color chip is identified by a mnemonic (eg kYellow) and a number. The keywords, kRed, kBlue, kYellow, kPink, etc are defined in the header file Rtypes.h that is included in all ROOT other header files. We strongly recommend to use these keywords in your code instead of hardcoded color numbers, eg:
If the current style fill area color is set to 0, then ROOT will force a black&white output for all objects with a fill area defined and independently of the object fill style.
The fill area style defines the pattern used to fill a polygon. The fill area style of any class inheriting from
TAttFill can be changed using the method
SetFillStyle and retrieved using the method
GetFillStyle.
4000 to 4100 the window is 100% transparent to 100% opaque.
The pad transparency is visible in binary outputs files like gif, jpg, png etc .. but not in vector graphics output files like PS, PDF and SVG. This convention (fill style > 4000) is kept for backward compatibility. It is better to use the color transparency instead.
pattern_number can have any value from 1 to 25 (see table), or any value from 100 to 999. For the latest the numbering convention is the following:
The following table shows the list of pattern styles. The first table displays the 25 fixed patterns. They cannot be customized unlike the hatches displayed in the second table which be customized using:
gStyle->SetHatchesSpacing()to define the spacing between hatches.
gStyle->SetHatchesLineWidth()to define the hatches line width.
Definition at line 19 of file TAttFill.h.
#include <TAttFill.h>
AttFill default constructor.
Default fill attributes are taking from the current style
Definition at line 174 of file TAttFill.cxx.
AttFill normal constructor.
Definition at line 186 of file TAttFill.cxx.
AttFill destructor.
Definition at line 195 of file TAttFill.cxx.
Copy this fill attributes to a new TAttFill.
Definition at line 202 of file TAttFill.cxx.
Return the fill area color.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 30 of file TAttFill.h.
Return the fill area style.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 31 of file TAttFill.h.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 44 of file TAttFill.h.
Change current fill area attributes if necessary.
Definition at line 211 of file TAttFill.cxx.
Reset this fill attributes to default values.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 225 of file TAttFill.cxx.
Save fill attributes as C++ statement(s) on output stream out.
Definition at line 234 of file TAttFill.cxx.
Invoke the DialogCanvas Fill attributes.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 251 of file TAttFill.cxx.
Set the fill area color.
Reimplemented in TSpider, TTeXDump, TSVG, TPostScript, TPDF, TGX11, TGWin32VirtualXProxy, TGWin32, TGQuartz, and TVirtualX.
Definition at line 37 of file TAttFill.h.
Set a transparent fill color.
falpha defines the percentage of the color opacity from 0. (fully transparent) to 1. (fully opaque).
Definition at line 260 of file TAttFill.cxx.
Set the fill area style.
Reimplemented in TGX11, TGWin32VirtualXProxy, TGWin32, TGQuartz, TVirtualX, TSpider, and TPad.
Definition at line 39 of file TAttFill.h.
Fill area color.
Definition at line 22 of file TAttFill.h.
Fill area style.
Definition at line 23 of file TAttFill.h.
|
https://root.cern.ch/doc/master/classTAttFill.html
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Deploying Ruby Apps with Bare Metal: A New Type of VM
Free JavaScript Book!
Write powerful, clean and maintainable JavaScript.
RRP $11.95
This article was sponsored by CenturyLink Cloud. Thank you for supporting the sponsors who make SitePoint possible.
CenturyLink is a company providing multiple Platform-as-a-Service (PaaS) offerings. When choosing which service to use for a particular application, there are basically two tracks. The first track is AppFog, which is a pure PaaS like you know and love. AppFog provides the infrastructure and you supply the application and data. AppFog supplies a Command Line Interface (CLI) for deployment, as well as a nice dashboard for monitoring your resources.
The other tract offered by CenturyLink is Infrastructure-as-a-Service, both with virtual machines and another product called Bare Metal, which we’ll cover in this article. Bare Metal offers “the computing power of a physical server, plus the automation and pay-as-you-go flexibility of virtual machines”. Bare Metal servers are NOT shared VMs, so you don’t have to worry about sharing resources. However, they operate like a VM, so you get the responsiveness and rapid deployment of VMs with the isolation of a physical machine.
You might use Bare Metal for a database server or an application that doesn’t fit well into other virtualized environments. Tasks like batch computing, where you need a large amount of computational resources for short bursts are great for Bare Metal. Also, items like analytics are a good fit, as you can manage the complexity of software like Hadoop and the unique needs of analytics computing.
Another interesting feature is that Bare Metal servers are integrated into the CenturyLink Cloud, right along services like AppFog. This allows you to mix PaaS applications, databases, expensive computing tasks, and just about anything else, managing them all from the same dashboard. To my knowledge, no other PaaS offers such a menu, and you’d have to do a ton of work on Amazon Web Services (AWS) to get the same convenience.
Here is a comparison of Bare Metal servers to other server options.
In today’s post, I will walk through creating a Bare Metal server and deploying a (very) simple Rails application to it.
Setup
Before we start on our journey, you’ll need an account on CenturyLink in order to follow along. There are free trials (they do require a payment method, FYI). So, head over to the CenturyLink site and click “Free Trial”. Follow the sign up procedure and you’re ready to go.
Provision a Bare Metal Server
After logging into the CenturyLink Control Portal, the dashboard is presented:
To start the process of creating a server, click on the large “Create a Server” or the “+” on the sidebar and choose “Server”.
Bare Metal servers are not available in all Data Centers. I had to choose “VA1 – US East (Sterling)” from the “data center” drop down in order to have ‘Bare Metal’ as an option for “server type”. If you’re in a data center where Bare Metal servers should be available, but you can’t see them as an option, contact Customer Care to make sure they’re enabled for your account.
CenturyLink has some great documentation on how to provision a server, which you can follow here. I used the following options:
- group member of: Default Group
- server type: Bare Metal
- configuration: 4 cores (the smallest option)
- operating system: Ubuntu 14 64-bit
- server name: sprail
- primary dns: 8.8.8.8 (Google DNS)
- secondary dns: 8.8.4.4 (Google DNS)
After hitting ‘Create Server’, you’ll see the following:
If you stay on this page, the server will go through 3 steps of provisioning where it is validated, the resources are requested, and it is started.
When we provisioned our server, the configuration requires that a group is selected. In our example, we chose ‘Default Group’. I bet you were wondering what a “group” is, weren’t you? On CenturyLink Cloud, groups allow you to manage multiple servers in bulk. Examples of what can be done in groups are:
- Bulk Operations, such as power up/down, etc.
- Group your servers and resources by project, or any other logical reason.
- Create parent-child relationships between servers, with cascading tasks, etc.
- Support for complex billing, where you might charge a client for their usage.
Groups are very powerful, and you should read up on them to learn more.
It’s worth noting that CenturyLink does offer a REST API that provides endpoints for just about anything you can do via the dashboard, including provisioning servers.
Once you have chosen and sized your server, we’ll start down the path of creating a Rails application.
Deploying a Rails Application
When I return to the dashboard, my data center is now listed:
Clicking through on that data center leads to a data center specific view, which isn’t very interesting yet. To get to the server, expand the “Default Group” folder in the left-hand list of Data Centers and select your server:
Here, you can retrieve the admin credentials (setup during provisioning) and see the configuration of the server, including the IP address of the server to connect to. Make sure you are connected to the VPN that is provided for you when you create the server. (See the instructions at “How To Configure Client VPN”.).
Now you will be able to SSH into the box using ssh root@. When you’re prompted for a password, use the password that you specified during provisioning.
As I mentioned, Bare Metal servers act like VMs, but they are not shared. As such, deploying Rails to a Bare Metal Server is exactly the same as setting up any POSIX box as a Rails server. The steps are:
- Create a deploy user.
- Install a web server (Nginx, in our case).
- Capify your Rails app
- Add the Rails app to source control.
- Push your changes.
- Run the deploy task.
The first 4 steps are one-time only, so once the deployment is working, it’s a simple (and very scriptable) two-step process to deploy your app to a very powerful standalone machine.
Server-Side Setup
For these tasks, you need to be SSH’d into the Bare Metal Server.
Create a Deploy User
If you aren’t familiar with basic Unix tasks, like creating a user, it’s not too hard. Type the following on the server:
$ adduser deploy ...Answer the prompts, give the user a good password... $ gpasswd -a demo sudo
That last command adds the deploy user to the sudoers group so we can run higher privileged commands when needed.
Create/Use Public Key
Your deployments will go much more smoothly if you use Public Key Authentication to SSH into the box as the deploy user. If you have a key pair on your LOCAL machine, you can use it, otherwise use ssh-keygen to create one. Again, this is on your local/development machine:
$ ssh-keygen ...ssh-keygen output... Generating public/private rsa key pair. Enter file in which to save the key (/Users/your-user-name/.ssh/id_rsa):
Just hit return to accept that file name. You’ll be prompted for a pass phrase, which I recommend you leave blank for now. If you choose to add a pass phrase (which is, btw, more secure) you will be prompted for it every time you deploy.
You should now have a id_rsa.pub file in your ~/.ssh directory. It needs to be copied to the server. This .pub file needs to be copied to the server. The easiest way to do this is via the
ssh-copy-id command, which was made for just this purpose. On your LOCAL machine:
$ ssh-copy-id deploy@SERVER-IP-ADDRESS
you will be prompted for the deploy user’s password, and the public key file will then be copied.
At this point,
ssh deploy@SERVER-IP should “just work” without prompting for a password.
Install Nginx
Let’s get Nginx installed. Packages managers make this simple. As root on the server, type:
$ apt-get install nginx git-core nodejs -y ... All kinds of output..
OK, Nginx is installed. If you open up a browser and go to **, you should see Nginx’s welcome page:
Install Ruby (RVM) and Friends
I love RVM. It makes life easier. Let’s install it on the server so we easily upgrade Ruby as our incredible app grows and lives forever.
SSH into the box as the deploy user.
$ gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 ...output.. gpg: Total number processed: 1 gpg: imported: 1 (RSA: 1) $ \curl -sSL | bash -s stable --ruby ...this will prompt for the deploy user password... ...then install ruby 2.2.1... Creating alias default for ruby-2.2.1... * To start using RVM you need to run `source /home/deploy/.rvm/scripts/rvm` in all your open shell windows, in rare cases you need to reopen all shell windows. $ source /home/deploy/.rvm/scripts/rvm $ ruby -v ruby 2.2.1p85 (2015-02-26 revision 49769) [x86_64-linux]
Excellent. Ruby is installed. We’ll need Bundler too, since Gemfiles rule the Ruby world.
$ gem install bundler --no-ri --no-rdoc Successfully installed bundler-1.10.6 1 gem installed
Git
Our deploy will pull the latest code from source control. For this app, I am going to use Github. Remember, we installed git in a previous apt-get install step when we installed nginx. The deployment process will need to be able to access our git repository without logging in, so here’s another public key authentication scenario. The deploy user doesn’t have a key file yet, so generate one:
# AS THE DEPLOY USER ON THE SERVER $ ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/deploy/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/deploy/.ssh/id_rsa. Your public key has been saved in /home/deploy/.ssh/id_rsa.pub. The key fingerprint is: 90:4d:56:9c:50:7d:b3:05:26:ad:61:64:4b:84:05:26 deploy@VA1SPGGSPRAIL01 The key's randomart image is: ...a really weird piece of ASCII art...
With our key pair in place, the public portion of the key needs to be added to Github. Basically, login to Github, go to your account settings Click on ‘SSH keys’, then ‘Add SSH Key’:
If you’ve done it right, typing
ssh -T [email protected] yields:
$ ssh -T [email protected] Warning: Permanently added the RSA host key for IP address '192.0.2.0' to the list of known hosts. Hi! You've successfully authenticated, but GitHub does not provide shell access.
The Rails App
Since the focus of this article is deployment, the Rails app will be relatively simple. We’ll create a single controller and view, and change/add some gems to represent a somewhat “real” deployment.
I am using Ruby 2.2 and Rails 4.2.4, and I am back on my local machine. After typing rails new, it’s time to modify the Gemfile. I simply added Puma and the various Capistrano gems. It looks something like this:
...other gems... gem 'puma' group :development do gem 'web-console', '~> 2.0' # this was already here gem 'pry-rails' # I love Pry gem 'spring' gem 'capistrano', require: false gem 'capistrano-rvm', require: false gem 'capistrano-rails', require: false gem 'capistrano-bundler', require: false gem 'capistrano3-puma', require: false end
Make these changes and bundle away.
I mentioned this is going to be a simple app. Quickly generate a
Home scaffold:
$ rails g scaffold Thing names:string purpose:string ...lots of output... $ rake db:migrate ...more output...
Change the root route to point at our list of Things:
# config/routes.rb Rails.application.routes.draw do resources :things root to: 'things#index' end
Last file change is to add a value in secrets
Staring the server
(rails s) and opening will allow you to see our progress and make things to your heart’s content.
Stop your local server (CTRL-C) and let’s get this Things app under source control.
Git
In the root of your application, type:
git init . git add . gc -m "Initial commit"
Now, we have a local git repository, which we need to push up to Github. Open a browser, go to, login, and create a repository for your application. Add that repository as the origin remote to your local and push up your changes. You should now have a Github repository with the Rails app. Here’s mine.
Capistrano
Return to your application root on your local machine and type:
$ cap install mkdir -p config/deploy create config/deploy.rb create config/deploy/staging.rb create config/deploy/production.rb mkdir -p lib/capistrano/tasks create Capfile Capified
In the Capfile (created by the above command), require the various capistrano gems we included to support our deployment:
# Capfile require 'capistrano/setup' # Include default deployment tasks require 'capistrano/deploy' require 'capistrano/rails' require 'capistrano/bundler' require 'capistrano/rvm' require 'capistrano/puma' # Load custom tasks from `lib/capistrano/tasks` if you have any defined Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
These tasks will setup Ruby, install the bundle, etc. Capistrano is nice.
cap install also creates a config/deploy.rb file and a config/deploy directory. Change the config/deploy.rb file to look like:
# config valid only for current version of Capistrano lock '3.4.0' set :application, 'Bare Metal Things' set :repo_url, '[email protected]:sitepoint-editors/bare-metal-fun.git' set :server '206.128.156.201', roles: [:web, :app, :db], primary: true set :user 'deploy' set :puma_threads, [4, 16] set :puma_workers, 0 # Default branch is :master # ask :branch, `git rev-parse --abbrev-ref HEAD`.chomp # Default deploy_to directory is /var/www/my_app_name set :deploy_to, "/home/#{fetch(:user)}/apps/#{fetch(:application)}" set :use_sudo, false set :deploy_via, :remote_cache # Puma set :puma_bind, "unix://#{shared_path}/tmp/sockets/#{fetch(:application)}-puma.sock" set :puma_state, "#{shared_path}/tmp/pids/puma.state" set :puma_pid, "#{shared_path}/tmp/pids/puma.pid" set :puma_access_log, "#{release_path}/log/puma.error.log" set :puma_error_log, "#{release_path}/log/puma.access.log" set :puma_preload_app, true set :puma_worker_timeout, nil set :puma_init_active_record, true set :ssh_options, { forward_agent: true, user: fetch(:user) } #
Commit your changes to git, push it to the origin repository, then deploy the application. Deploying the application is simply a matter of typing:
$ cap production deploy ...loads of output... DEBUG [29269dca] Command: cd /home/deploy/apps/bare_metal/current && ( RACK_ENV=production ~/.rvm/bin/rvm default do bundle exec puma -C /home/deploy/apps/bare_metal/shared/puma.rb --daemon ) DEBUG [29269dca] Puma starting in single mode... DEBUG [29269dca] * Version 2.14.0 (ruby 2.2.1-p85), codename: Fuchsia Friday DEBUG [29269dca] * Min threads: 4, max threads: 16 DEBUG [29269dca] * Environment: production DEBUG [29269dca] * Daemonizing... INFO [29269dca] Finished in 0.480 seconds with exit status 0 (successful)
Unfortunately, this won’t work entirely, even if it seems like it did. I run the first production deploy to make sure permissions are OK and get the Capistrano directory structure made. However, we need to add a couple of directories based on our Puma configuration. SSH into the server as deploy and type:
$ mkdir apps/bare_metal/shared/tmp/sockets -p $ mkdir apps/bare_metal/shared/tmp/pids -p $ mkdir apps/bare_metal/shared/config -p
Now, Puma has a permanent spot to write the files it needs. Although, that last directory is for something else: secrets.
Secrets
Handling secrets in Rails has always been, well, fun. For our simple app, we just need to have a local value forSECRET_KEY_BASE, so I recommend we put a copy of config/secrets.yml on the server and then symlink it on deployment. So, open up that file locally, and put a real token value in for production. Change:
production: secret_key_base:
to
production: secret_key_base: 0c2e91d623cd62510e1ba6fc9ed7313461dc13b2068ff692f3a1803891870e6bb77c05bcfe27f7065e4fb1c380bd7fc720a336ea0ae231bf3bd32ecc34f8282b
You should probably use your own token, which you can generate with rake secret.
Now, copy that secrets.yml to the shared config directory on the server:
scp config/secrets.yml deploy@SERVER-IP:/home/deploy/apps/bare_metal/shared/config
Finally, add a task to the config/deploy.rb file to symlink that file on deployment:
## config/deploy.rb namespace :deploy do ...other tasks... desc "Link shared files" task :symlink_config_files do on roles(:web) do symlinks = { #"#{shared_path}/config/database.yml" => "#{release_path}/config/database.yml", "#{shared_path}/config/secrets.yml" => "#{release_path}/config/secrets.yml" } execute symlinks.map{|from, to| "ln -nfs #{from} #{to}"}.join(" && ") end end before 'deploy:assets:precompile', :symlink_config_files
And then remove that file from git:
git rm secrets.yml
Now, a
cap production deploy should get it done.
Nginx Configuration
SSH into the server as root and type
vi /etc/nginx/sites-enabled/default
Replace the entire contents of this file with:
upstream puma { server unix:///home/deploy/apps/bare_metal/shared/tmp/sockets/bare_metal-puma.sock; } server { listen 80 default_server deferred; # server_name example.com; root /home/deploy/apps/bare_metal/current/public; access_log /home/deploy/apps/bare_metal/current/log/nginx.access.log; error_log /home/deploy/apps/bare_metal; }
You’ll need to restart nginx:
nginx -s stop nginx
Success! The application is now running on our Bare Metal server. Going forward, deploying a new application is a simple as:
- Make changes
- Commit changes to git and push to Github.
cap production deploy
Conclusion
This tutorial walked through deploying a Rails application to a CenturyLink Bare Metal server. The process of deploying the application really wasn’t much different than a regular server, once the Bare Metal server was provisioned. The advantages of using a Bare Metal server make this environment superior to a vanilla, cloud-based virtual machine. There is no worrying about shared resources, as Bare Metal servers are isolated like a physical machine. Bare Metal servers deploy faster, so you’ll be able to scale up when needed. Add in all the services that CenturyLink offers, and your entire DevOps needs can be completely met with a single provider.
Get practical advice to start your career in programming!
Master complex transitions, transformations and animations in CSS!
|
https://www.sitepoint.com/deploying-ruby-apps-bare-metal-self-sufficient-containers-right-way/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Important: Please read the Qt Code of Conduct -
Application exception when starting from a service
I am relatively new to QT, and found it very useful. However there is something strange in QT. I can't start an QT application from service. It returns c0000135, or access violation.
EDIT: The service is running as me, so it is definitely not permission issue.
Tried to debug it and a app stripped to the bone works fine, but the moment I add the line QApplication(argc,argv), it started to happen (without any other code in there).
Attached is the zip file with the sample application and a service to test it (sorry I hard coded everything so you might need to adjust it to run it correctly). As you can see during QApplication initialization, something happened and none of the printf comes out, if I took it out, everything is fine.
QT code:
@#include <stdio.h>
//#include <QApplication>
#include <QCoreApplication>
int main(int argc, char *argv[])
{
printf("start v0.1!!\n");
printf("start v0.2!!!\n");
FILE *fp = fopen("c:\\temp\\t.txt","w"); fclose(fp); QCoreApplication a(argc, argv); printf("loading\n"); return 0;
}
@
QT project file:
@
QT += core widgets
TARGET = test
CONFIG += console
CONFIG -= app_bundle
TEMPLATE = app
SOURCES += main.cpp
@
Part of service code (in c#)
@
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace ConsoleApplicationServiceTest
{
class Program
{
static void Main(string[] args)
{
//Doit();
//return;
System.ServiceProcess.ServiceBase[] ServicesToRun; // Change the following line to match. ServicesToRun = new System.ServiceProcess.ServiceBase[] { new Service1() }; System.ServiceProcess.ServiceBase.Run(ServicesToRun); } public static void Doit() { new Thread(() => { var name = @"c:\temp\test\release\test.exe"; var proc = Process.Start(new ProcessStartInfo(name) { UseShellExecute = false, WindowStyle = ProcessWindowStyle.Hidden, RedirectStandardError = true, RedirectStandardOutput = true, CreateNoWindow = true, }); var stdout = proc.StandardOutput.ReadToEndAsync(); var stderr = proc.StandardError.ReadToEndAsync().Result; bool ext = proc.WaitForExit(30000); File.WriteAllText(@"c:\temp\test.txt", stderr + "\r\n stdout=" + stdout.Result + "\r\n" + proc.ExitCode + "\r\n"+ext); }).Start(); } }
}
@
Sorry just found out I can't attach file, so I just uploaded the snippets instead.
Thanks
gz
Never mind, I think I didn't copy the related dlls in.
|
https://forum.qt.io/topic/29307/application-exception-when-starting-from-a-service
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Laravel Shared Data Package
Share data from your backend in JavaScript with Laravel Shared Data by Coderello. The API for this package is simple:
// Facade SharedData::put([ 'post' => Post::first(), 'app' => [ 'name' => config('app.name'), 'environment' => config('app.env'), ], ]); // Helper share([ 'post' => Post::first(), 'app' => [ 'name' => config('app.name'), 'environment' => config('app.env'), ], ]);
Which outputs data to JavaScript:
window.sharedData = { post: { content: "...", created_at: "...", // ... }, app: { /* ... */ } }
To output the configured JavaScript, add the
@shared directive to your views:
<html> <head> @shared </head> </html>
If you want to configure the JavaScript object, you can publish and change the configuration:
<?php return ['js_namespace' => 'myCustomObjectName'];
This package has documentation available to help you get started, and you can view the source code on GitHub at coderello/laravel-shared-data
Setup Your Local Environment for Open Source Package Contributions
One of the most important steps to get started in contributing to an open-source package is to set up your local envi…
Laravel Zip Content Validator
Laravel Zip Content Validator by Orkhan Ahmadov is a custom validation rule for checking the contents of an uploaded…
|
https://laravel-news.com/laravel-shared-data-package
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
Reference
The public JavaScript API for React Native is written with TypeScript. The Notifee reference documentation is automatically generated to provide users with further detail into the full API.
API & Types
The reference documentation is broken down into two categories:
API
All publicly available methods which can be accessed from the imported library, for example:
import notifee from '@notifee/react-native'; notifee.cancelAllNotifications();
View the Basic Usage documentation for more information.
Types
Types are those which are publicly available to users to assist when using the library. The types documentation includes descriptions of individual properties which may not be covered in the general documentation. Types are available as named exports, for example:
import { AndroidColor, EventType } from '@notifee/react-native';
View the Basic Usage documentation for more information..
|
https://notifee.app/react-native/reference/
|
CC-MAIN-2021-04
|
en
|
refinedweb
|
ui.animate for nested class functions is really awkward
- Webmaster4o
Say I have a class, in which I have the function
animate(self)which does a bunch of actions and then animates something. It looks somehing like this:
def animate(self): def anim(self): self.subviews[0].frame=(10, 10, 100, 120) #Do stuff here ui.animate(anim(self), 1)
This doesn't work, because then it's not callable. Additionally, using
self.animdoesn't work, because anim isn't in the main namespace of the class. I actually have to call
ui.animate(lambda: anim(self), 1)
This seems awkward... Am I missing something?
- Webmaster4o
Wait... Answered my own question.
animdoesn't need
selfas an argument, because it inherits this from the parent's namespace. I can use
def animate(self): def anim(): self.subviews[0].frame=(10, 10, 100, 120) #Do stuff here ui.animate(anim, 1)
|
https://forum.omz-software.com/topic/2223/ui-animate-for-nested-class-functions-is-really-awkward
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
Creating classes
Hi,
I created a myDelegate class using File->NewFile->C++ class.
It created the following 2 files:
mydelegate.h:
#ifndef MYDELEGATE_H #define MYDELEGATE_H #include <QObject> #include <QWidget> class myDelegate : public QItemDelegate { public: myDelegate(); }; #endif // MYDELEGATE_H
and mydelegate.cpp:
#include "mydelegate.h" myDelegate::myDelegate() { }
When I run build I keep getting the following error message:
C:\Programming\Projects\Folkfriends_1_0\mydelegate.h:8: error: expected class-name before '{' token
{
^
What did I miss?
Thank you.
- jsulm Moderators
You are also missing
Q_OBJECTand the parent in the constructor. On top of that in Qt5 QItemDelegate is deprecated, use QStyledItemDelagate instead
mydelegate.h:
#ifndef MYDELEGATE_H #define MYDELEGATE_H #include <QStyledItemDelegate> class myDelegate : public QStyledItemDelegate { Q_OBJECT public: explicit myDelegate(QObject* parent = nullptr); virtual ~myDelegate(); /* if there is the remote chance of your delegate becoming base class for another delegate save yourselves hours of debug for memory leaks and just declare a virtual destructor */ }; #endif // MYDELEGATE_H
mydelegate.cpp:
#include "mydelegate.h" myDelegate::myDelegate(QObject* parent) :QStyledItemDelegate(parent) { } myDelegate::~myDelegate(){ }
@VRonin
Thank you. It still gives me the following error message:
C:\Programming\Projects\Folkfriends_1_0\mydelegate.h:11: error: expected unqualified-id before '}' token
}
^
The last error message was fixed by adding a ; after class myDelegate in mydelegate.h:
class myDelegate;
In return I got new error messages:
C:\Programming\Projects\Folkfriends_1_0\mydelegate.cpp:3: error: undefined reference to `vtable for myDelegate'
collect2.exe:-1: error: error: ld returned 1 exit status
- mrjj Qt Champions 2017
@gabor53 said in Creating classes:
table for myDelegate'
Clean All
Run qmake from the build menu.
Build all
Should cure it
|
https://forum.qt.io/topic/72770/creating-classes
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
I would like to demonstrate a case tutorial of building a predictive model that predicts whether a customer will like a certain product. The original model with the real world data has been tested on the platform of spark, but I will be using a mock-up data set for this tutorial.
Since unbalanced data set is a very common in real business world, this tutorial will specifically showcase some of the tactics that could effectively deal with such challenge using PySpark.
Concretely, this session will cover the following topics:
- Case Scenario and Data set
- Data Pre-processings – NAs replacement, one-hot encoding, pipe-lining, training and validation splits, etc.
- Using mllib random forest classifier for binary classification.
- Measuring performance using AUC score.
- Different strategies to handle the problem of unbalanced dataset:
- Down-Sampling and Up-Sampling
- Ensemble of Down-Sampling models
The Case Scenario
Let’s assume your manager one day approaches you and asks you to build a Product Recommendation Engine. The engine should be able to predict the probabilities for each product being liked by a customer, when relevant data such as customer’s details, product’s info and so on is provided. And your model will then recommend the top 5 products based on those probabilities.
Your stakeholder is business department who will eventually use your model for recommendations. Specifically, each Sales Rep will ‘consult’ your model by telling it what type of customer she is going to visit before she actually sets on her trip. She will then bring along the recommended products list for the pitch, hoping that her trip will become fruitful.
The Data
You receive the data from your friendly BI team. Thankfully, they made your life easier by crunching all the data into one nice clean csv table so that you won’t need to painfully join and merge from different tables. Peeking at the top 3 rows of the table shows you the following:
Some of the predictors represent properties of products such as product_price or product_features, whereas others contain information of the customer, e.g. customer title, age.
On the other side, your Big Data team has set up the spark platform for you, and ingested the table into Data Lake so you can access it easily in PySpark. We start by importing the libraries and loading the data:
from pyspark.ml import Pipeline from pyspark.ml.classification import RandomForestClassifier as RF from pyspark.ml.feature import StringIndexer, VectorIndexer, VectorAssembler, SQLTransformer from pyspark.ml.evaluation import MulticlassClassificationEvaluator, BinaryClassificationEvaluator from pyspark.ml.tuning import CrossValidator, ParamGridBuilder import numpy as np import functools from pyspark.ml.feature import OneHotEncoder tableData = sqlContext.table('your_table_containing_products_feedback_information') cols_select = ['prod_price', 'prod_feat_1', 'prod_feat_2', 'cust_age', 'prod_feat_3', 'cust_region', 'prod_type', 'cust_sex', 'cust_title', 'feedback'] df = tableData.select(cols_select).dropDuplicates()
Data Pre-processing Steps
1. Skewed responses
There are three types of responses – Positive, Neutral and Negative. The first step we can do is see how they are distributed:
from matplotlib import pyplot as plt %matplotlib inline responses = df.groupBy('feedback').count().collect() categories = [i[0] for i in responses] counts = [i[1] for i in responses] ind = np.array(range(len(categories))) width = 0.35 plt.bar(ind, counts, width=width, color='r') plt.ylabel('counts') plt.title('Response distribution') plt.xticks(ind + width/2., categories)
The distribution looks quite skewed in the sense that ‘Positive’ cases are much more than ‘Neural’ and ‘Negative’ ones, and the volume of ‘Negative’ cases is extremely low.
The problem with ‘Negative’ cases here is most serious. However, since our job is to differentiate ‘Positive’ cases from either ‘Neutral’ or ‘Negative’ ones, why don’t we just combine the ‘Neutral’ and ‘Negative’ and form one group? So we choose to convert all ‘Neutral’ cases to ‘Negative’.
from pyspark.sql.functions import udf from pyspark.sql.types import StringType binarize = lambda x: 'Negative' if x == 'Neutral' else x udfValueToCategory = udf(binarize, StringType()) df = df.withColumn("binary_response", udfConvertResponse("feedback"))
Notice we have created a new column called ‘binary_response’, and use it to hold the binary cases of ‘Positive’ and ‘Negative’.
However, we have not solved the unbalanced data set issue since the ‘Positive’ cases are a lot more than ‘Negative’ ones. We will look into it later on by applying some strategies like down-sampling and ensemble of sub-samplings.
2. Filling NA values and casting data types
We convert numeric cols into ‘float’ or ‘int’ depending on the values. There are also categorical cols that contain null values, so we can fill in those with a’NA’ string as a new category, and leave the rest cols unchanged:
cols_select = ['prod_price', 'prod_feat_1', 'prod_feat_2', 'cust_age', 'prod_feat_3', 'cust_region', 'prod_type', 'cust_sex', 'cust_title', 'feedback', 'binary_response'] df = df.select(df.prod_price.cast('float'), # convert numeric cols (int or float) into a 'int' or 'float' df.prod_feat_1.cast('float'), df.prod_feat_2.cast('float'), df.cust_age.cast('int'), *cols_select[4:]) df = df.fillna({'cust_region': 'NA', 'cust_title': 'NA', 'prod_type': 'NA'}) # fill in 'N/A' entries for certain cols
3. Categorical col that has too many discrete values
We are also interested to see if there are any categorical cols that have too many levels (or distinct values).
for col in df.columns[4:-2]: print(col, df.select(col).distinct().count())
prod_feat_3 553 cust_region 12 prod_type 35 cust_sex 2 cust_title 12
The prod_feat_3 simply has too many levels (discrete values)! And a simple way to resolve the problem is to group all the categories that rank lower than a threshold into one category, namely “MinorityCategory”. Below is how we do:
from pyspark.sql.functions import udf from pyspark.sql.types import StringType COUNT_THRESHOLD = 150 # threshold to filter # create a temporary col "count" as counting for each value of "prod_feat_3" prodFeat3Count = df.groupBy("prod_feat_3").count() df = df.join(prodFeat3Count, "prod_feat_3", "inner") def convertMinority(originalCol, colCount): if colCount > COUNT_THRESHOLD: return originalCol else: return 'MinorityCategory' createNewColFromTwo = udf(convertMinority, StringType()) df = df.withColumn('prod_feat_3_reduced', createNewColFromTwo(df['prod_feat_3'], df['count'])) df = df.drop('prod_feat_3') df = df.drop('count')
4. One-hot encoding categorical cols
For those categorical cols, we will apply one-hot encoding method to convert them into dummy cols:
column_vec_in = ['prod_feat_3_reduced', 'cust_region', 'prod_type', 'cust_sex', 'cust_title'] column_vec_out = ['prod_feat_3_reduced_catVec','cust_region_catVec', 'prod_type_catVec','cust_sex_catVec', 'cust_title_catVec'] indexers = [StringIndexer(inputCol=x, outputCol=x+']
Finally, we can group all the predictors as ‘features’, and the response col as ‘label’. We then streamline the entire process using a function called ‘Pipeline’ which will do all the jobs sequentially for us.
# prepare labeled sets cols_now = ['prod_price', 'prod_feat_1', 'prod_feat_2', 'cust_age', 'prod_feat_3_reduced_catVec', 'cust_region_catVec', 'prod_type_catVec', 'cust_sex_catVec', 'cust_title_catVec'] assembler_features = VectorAssembler(inputCols=cols_now, outputCol='features') labelIndexer = StringIndexer(inputCol='binary_response', outputCol="label") tmp += [assembler_features, labelIndexer] pipeline = Pipeline(stages=tmp)
5. Split into training and validation sets.
This part is straightforward. We randomly select 80% as the training data, and the remaining 20% as test set or validation set.
Notice: It is important to set seed for the randomSplit() function in order to get same split for each run. (This is the crucial step for the success of the subsequent tests later on)
allData = pipeline.fit(df).transform(df) allData.cache() trainingData, testData = allData.randomSplit([0.8,0.2], seed=0) # need to ensure same split for each time print("Distribution of Pos and Neg in trainingData is: ", trainingData.groupBy("label").count().take(3))
Distribution of Pos and Neg in trainingData is: [Row(label=1.0, count=144014), Row(label=0.0, count=520771)]
Prediction and Evaluation of AUC
Train and prediction
We are using a Random Forest with numTrees = 200. And we train on trainingData and predict on testData.
rf = RF(labelCol='label', featuresCol='features',numTrees=200) fit = rf.fit(trainingData) transformed = fit.transform(testData)
AUC
Use the test data labels to calculate AUC score against the predicted probabilities:
from pyspark.mllib.evaluation import BinaryClassificationMetrics as metric results = transformed.select(['probability', 'label']) ## prepare score-label set.6425143766095695
To visualize the AUC score, we can draw the ROC curve as below:
from sklearn.metrics import roc_curve, auc fpr = dict() tpr = dict() roc_auc = dict() y_test = [i[1] for i in results_list] y_score = [i[0] for i in results_list] fpr, tpr, _ = roc_curve(y_test, y_score) roc_auc = auc(fpr, tpr) %matplotlib inline plt.figure() plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show()
As we can see from above, the area between the blue line and dashed line measures the usefulness our model has gained over a random guess of flipping a coin.
The score is 0.64, which is not too optimistic
Well it could be because the features in the data set do not have enough information to train our model, and maybe we should talk with our BI team to see if it is possible to obtain additional insightful features.
But that’s not the focus of this tutorial anyway :)
To gain a better understanding of our model’s performance, we can plot the distribution of our predictions:
all_probs = transformed.select("probability").collect() pos_probs = [i[0][0] for i in all_probs] neg_probs = [i[0][1] for i in all_probs] from matplotlib import pyplot as plt %matplotlib inline # pos plt.hist(pos_probs, 50, normed=1, facecolor='green', alpha=0.75) plt.xlabel('predicted_values') plt.ylabel('Counts') plt.title('Probabilities for positive cases') plt.grid(True) plt.show() # neg plt.hist(neg_probs, 50, normed=1, facecolor='green', alpha=0.75) plt.xlabel('predicted_values') plt.ylabel('Counts') plt.title('Probabilities for negative cases') plt.grid(True) plt.show()
As can be seen, the predicted probabilities are highly skewed towards the Positive. This is not surprising as this is demonstrated by the fact that the percentage of data which are positive is around 79%!
It is time to dig into the unbalanced data issue.
Up- and Down-Samplings
Since the data set is highly skewed – we have more Positive training samples than Negative training samples – we will need to try out some strategies that counter the unbalance.
Unfortunately, the Random Forest implementation in spark’s mllib package doesn’t have the ‘Class Weights‘ parameter that we could tune, which could have taken care of the problem internally within the model itself (i.e. it penalizes more when the model mis-classifies a minority class than a majority one). Thus we will need to manually implement some naive methods from scratch.
The simplest ways we can do are up- or down-samplings. Up-sampling means to randomly sample (with replacement) some training cases from the minor classes (the negative cases in this case) as additional data points added into training data, whereas down-sampling means to randomly filter out some of the majority cases. Both methods will tend to make the training data more balanced (however at the cost of bias and overfitting).
Down-sampling
Here’s the way to implement down-sampling
from numpy.random import randint from pyspark.sql.functions import udf from pyspark.sql.types import IntegerType RATIO_ADJUST = 2.0 ## ratio of pos to neg in the df_subsample counts = trainingData.select('binary_response').groupBy('binary_response').count().collect() higherBound = counts[0][1] TRESHOLD_TO_FILTER = int(RATIO_ADJUST * float(counts[1][1]) / counts[0][1] * higherBound) randGen = lambda x: randint(0, higherBound) if x == 'Positive' else -1 udfRandGen = udf(randGen, IntegerType()) trainingData = trainingData.withColumn("randIndex", udfRandGen("binary_response")) df_subsample = trainingData.filter(trainingData['randIndex'] < TRESHOLD_TO_FILTER) df_subsample = df_subsample.drop('randIndex') print("Distribution of Pos and Neg cases of the down-sampled training data are: \n", df_subsample.groupBy("label").count().take(3))
Distribution of Pos and Neg cases of the down-sampled training data are: [Row(label=1.0, count=144014), Row(label=0.0, count=287482)]
Explanation – For trainingData: we randomly assigned an int as ‘randIndex’ to each majority data point, and then filter out those whose ‘randIndex’ is larger than a threshold we have calculated, so that the data points from the majority class – ‘Positive’ – will be much less. However, we won’t touch the data points from the minority class – ‘Negative’ – so the ‘count’ value in Row(label=1.0, count=144014) shown above is exactly the same as previously for trainingData.
For testData: We will not do anything about it now.
Same way for training and validating as before:
## training and prediction rf = RF(labelCol='label', featuresCol='features',numTrees=200) fit = rf.fit(df_subsample) transformed = fit.transform(testData)
Results:
## results and evaluation from pyspark.mllib.evaluation import BinaryClassificationMetrics as metric results = transformed.select(['probability', 'label']).6463328674547113
Awesome! Our method seems to work out and the ROC improves slightly to 0.646.
I won’t paste the code for up-sampling cause it’s essentially quite straightforward. It did improve the score (slightly) as well!
Ensemble of Down-samplings
Let’s take another look at the down-sampling method above.
Since each time when we do a subsampling of trainingData, we will be throwing away some data points that belong to the “Positive” class, thus we will miss out information which could potentially be used to train our model.
Therefore, we want to take multiple down-samplings of the trainingData, each of which will give us a slightly different data set to train our model. In the end, we will ensemble, or take the average of, the total prediction results from all the models trained using different data sets, and hopefully to get a better overall predictions.
Let’s jolt down the ideas in codes:
from numpy.random import randint from pyspark.sql.functions import udf from pyspark.sql.types import IntegerType from pyspark.mllib.evaluation import BinaryClassificationMetrics as metric RATIO_ADJUST = 3.0 ## ratio of pos to neg in the df_subsample TOTAL_MODELS = 10 total_results = None final_result = None #counts = trainingData.select('binary_response').groupBy('binary_response').count().collect() highestBound = counts[0][1] TRESHOLD_TO_FILTER = int(RATIO_ADJUST * float(counts[1][1]) / counts[0][1] * highestBound) ## UDF randGen = lambda x: randint(0, highestBound) if x == 'Positive' else -1 udfRandGen = udf(randGen, IntegerType()) ## ensembling for N in range(TOTAL_MODELS): print("Round: ", N) trainingDataIndexed = trainingData.withColumn("randIndex", udfRandGen("binary_response")) df_subsample = trainingDataIndexed.filter(trainingDataIndexed['randIndex'] < TRESHOLD_TO_FILTER).drop('randIndex') ## training and prediction rf = RF(labelCol='label', featuresCol='features',numTrees=200) fit = rf.fit(df_subsample) transformed = fit.transform(testData) result_pair = transformed.select(['probability', 'label']) result_pair = result_pair.collect() this_result = np.array([float(i[0][1]) for i in result_pair]) this_result = list(this_result.argsort().argsort() / (float(len(this_result) + 1))) ## sum up all the predictions, and average to get final_result if total_results is None: total_results = this_result else: total_results = [i+j for i, j in zip(this_result, total_results)] final_result = [i/(N+1) for i in total_results] results_list = [(float(i), float(j[1])) for i, j in zip(final_result, result_pair)] scoreAndLabels = sc.parallelize(results_list) metrics = metric(scoreAndLabels) print("The ROC score is (@numTrees=200): ", metrics.areaUnderROC)
Explanation: Basically, the algorithm is very similar to down-sampling: we are doing down-sampling multiple times and average the total results in terms of ranking! (meaning instead of raw predicted probabilities, for each round we rank the probabilities first and then take the average between rounds)
Round: 0 The ROC score is (@numTrees=200): 0.6456296366007628 Round: 1 The ROC score is (@numTrees=200): 0.6475210701955153 Round: 2 The ROC score is (@numTrees=200): 0.6488169677072237 Round: 3 The ROC score is (@numTrees=200): 0.6490333812262444 Round: 4 The ROC score is (@numTrees=200): 0.6490997896881725 Round: 5 The ROC score is (@numTrees=200): 0.648347665785477 Round: 6 The ROC score is (@numTrees=200): 0.6486544723987375 Round: 7 The ROC score is (@numTrees=200): 0.6492410064530146 Round: 8 The ROC score is (@numTrees=200): 0.6493154941849306 Round: 9 The ROC score is (@numTrees=200): 0.6483560027574977
The ensemble approach seems to give another boost to the AUC score!
WARNING of using down- or up-sampling: If what you care is only the ROC, which measures the probability rankings of each case, rather than the actual probability of being Positive or Negative, it is ok to use subsampling methods.
However, if the actually probability matters to you, applying subsampling will distort the predicted probability distribution, and the actual probability might be wrong or over- / under-estimated.
Other approach to handling unbalanced dataset
Of course these are just the starting and sometimes naive approaches for handling unbalanced classes. There are some useful blogs or paper which talk about different strategies in greater details.
Alternately, you could try sklearn package’s Random Forest on PySpark which have class weight parameter to tune.
Leave a comment if you have questions or some ideas. :)
14 thoughts on “PySpark tutorial – a case study using Random Forest on unbalanced dataset”
Can you share the sample data in a link so that we can run the exercise on our own. Thanks in advance.
The original data is our proprietary data, whereas the outputs shown inside the tutorial are masked so that both its values and names are fake, only for demonstrate purpose.
Therefore am sorry to say I don’t have a sample data to show at the moment.
Hello,
Very informative article. Can you also share how to get the Variable Importance from RF? Thanks
Hi,
Sorry as far as I know feature importance is not implemented in PySpark for random forest.
for x,y in zip(column_vec_in, column_vec_out)]
TypeError: __init__() got an unexpected keyword argument ‘outputCol’…………….. Facing this error while implementing the One-hot encoding technique
You should run the complete line “encoders = [OneHotEncoder(dropLast=False, inputCol=x+”_tmp”, outputCol=y)
for x,y in zip(column_vec_in, column_vec_out)]”
instead of part of it I guess
column_vec_in = [‘business_type’]
column_vec_out = [‘business_type_Vec’]
indexers = [StringIndexer(inputCol=x, outputCol=x + ‘]
cols_now = [‘business_type_Vec’]
assembler_features = VectorAssembler(inputCols=cols_now, outputCol=’features’)
labelIndexer = StringIndexer(inputCol=’binary_response’, outputCol=”label”)
tmp += [assembler_features, labelIndexer]
pipeline = Pipeline(stages=tmp)
I just Implemented as you implemented. How can Iook into the new dataframes which has dummy columns also.
You have created the pipeline but not yet fit it onto your df. running: allData = pipeline.fit(df).transform(df)
will fit onto your df and create the new sparse dataframe allData which has the one-hot cols.
Thank you for sharing this, I was trying to figure out a way to group categorical AND numerical columns together using vectorassembler. And your code worked perfectly!
Hi Weimin:
I do have a question regarding the process you use for OneHotEncoding and building the pipeline. Specifically, can you explain what these two lines do (can’t find it on Spark documentation):
tmp = [[i,j] for i,j in zip(indexers, encoders)]
tmp = [i for sublist in tmp for i in sublist]
In your code, it looks like these two lines serve as reference for the “tmp += [assembler_features, labelIndexer]” , and the ‘sublist’ is each row of the dataframe after it’s been indexed and encoded? I’m a bit confused about the order of execution of these lines, since i don’t see the .transform() on the encoder and indexer commands.
Thanks!
Hi Jenny,
It’s a bit python trick I applied by reference to this:
Basically, I am trying to make tmp a flat list instead of list of lists, so “tmp = [i for sublist in tmp for i in sublist]” will flatten it.
Thanks for the explanation!
I’m trying to scale this for multi-class classification.
I have used a StringIndexer for the output column like so..
labelIndexer = StringIndexer(inputCol=’multi_response’, outputCol=”label”)
I seem to be always getting a binary prediction…
Thoughts on what I might be missing?
Thanks Weimin, this is a very helpful example. I’ve been trying to follow your downsampling logic for the THRESHOLD_TO_FILTER calculation. In order to get desired results, I had to modify slightly your code from:
highestBound = counts[0][1]
TRESHOLD_TO_FILTER = int(RATIO_ADJUST * float(counts[1][1]) / counts[0][1] * highestBound)
to:
higherBound = counts[0][1]
TRESHOLD_TO_FILTER = int(adjust_ratio * float(counts[0][1]) / counts[1][1] * higherBound)
Does this change agree with your data or am I misinterpreting your logic?
Thanks!
|
https://weiminwang.blog/2016/06/09/pyspark-tutorial-building-a-random-forest-binary-classifier-on-unbalanced-dataset/?shared=email&msg=fail
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
I tried the package my application which has only two files, one for the GUI (wxPython) and a Library used by the GUI. Below is my setup.py code:
from distutils.core import setup import py2exe setup(name="U51 Converter", scripts=['convertapp.pyw']) class Target: def __init__(self, **kw): self.__dict__.update(kw) # for the versioninfo resources self.version = "0.01.1" self.company_name = "NovaSteps, Inc" self.copyright = "Copyright (c) 2008 NovaSteps, Inc." self.name = "U51 Transcoder"
I ran the following command:
python setup.py py2exe What I got was a buld and dist folder, but my app is not present in there. What am I doing wrong?
|
https://www.daniweb.com/programming/software-development/threads/139389/packaging-wxpython-application-with-py2exe-need-help
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
aiohttp_middlewares¶
Collection of useful middlewares for aiohttp applications.
License¶
aiohttp-middlewares is licensed under the terms of BSD License.
API¶
aiohttp_middlewares.timeout¶
Middleware to ensure that request handling does not exceeds X seconds.
Usage¶
from aiohttp import web from aiohttp_middlewares import error_middleware, timeout_middleware # Basic usage app = web.Application( middlewares=[timeout_middleware(29.5)]) # Ignore slow responses from list of urls slow_urls = ('/slow-url', '/very-slow-url', '/very/very/slow/url') app = web.Application( middlewares=[timeout_middleware(4.5, ignore=slow_urls)]) # Ignore slow responsed from dict of urls. URL to ignore is a key, # value is a lone string with HTTP method or list of strings with # HTTP methods to ignore. HTTP methods are case-insensitive slow_urls = { '/slow-url': 'POST', '/very-slow-url': ('GET', 'POST'), } app = web.Application( middlewares=[timeout_middleware(4,5, ignore=slow_urls)]) # Handle timeout errors with error middleware app = web.Application( middlewares=[error_middleware(), timeout_middleware(14.5)])
aiohttp_middlewares.timeout.
timeout_middleware(seconds, *, ignore=None)[source]¶
Ensure that request handling does not exceed X seconds.
This is helpful when aiohttp application served behind nginx or other reverse proxy with enabled read timeout. And when this read timeout exceeds reverse proxy generates error page instead of aiohttp app, which may result in bad user experience.
For best results, please do not supply seconds value which equals read timeout value at reverse proxy as it may results that request handling at aiohttp will be ended after reverse proxy already responded with 504 error. Timeout context manager accepts floats, so if nginx has read timeout in 30 seconds, it’s ok to configure timeout middleware to raise timeout error after 29.5 seconds. In that case in most cases user for sure will see the error from aiohttp app instead of reverse proxy.
Notice that timeout middleware just raised
asyncio.Timeoutin case of exceeding seconds per request, but not handling the error by itself. If you need to handle this error, please place
error_middleware_factoryin list of application middlewares as well. Error middleware should be placed before timeout middleware, so timeout errors can be catched and processed properly.
In case if you need to “disable” timeout middleware for given request path, please supply ignore collection as second positional argument, like:
from aiohttp import web app = web.Application( middlewares=[timeout_middleware(14.5, ignore={'/slow-url'})])
In case if you need more flexible ignore rules you can pass
ignoredict, where key is an URL to ignore and value is a collection of methods to ignore from timeout handling for given URL.
ignore = {'/slow-url': ['POST']} app = web.Application( middlewares=[timeout_middleware(14.5, ignore=ignore)])
Behind the scene, when current request path match the URL from ignore collection or dict timeout context manager will be configured to avoid break the execution after X seconds.
aiohttp_middlewares.shield¶
Middleware to shield application handlers by method or URL.
Usage¶
from aiohttp import web from aiohttp_middlewares import NON_IDEMPOTENT_METHODS, shield_middleware # Basic usage (shield by handler method) app = web.Application( middlewares=[shield_middleware(methods=IDEMPOTENT_METHODS)]) # Shield by handler URL app = web.Application( middlewares=[shield_middleware(urls=['/', '/about-us'])]) # Shield by handler method, but ignore shielding list of URLs app = web.Application( middlewares=[ shield_middleware( methods=NON_IDEMPOTENT_METHODS, ignore={'/api/documents', '/api/comments'})]) # Combine shielding by method and URL SHIELD_URLS = { '/api/documents': ['POST', 'DELETE'], re.compile('/api/documents/\d+'): ['DELETE', 'PUT', 'PATCH'], } app = web.Application( middlewares=[shield_middleware(urls=SHIELD_URLS)])
aiohttp_middlewares.shield.
shield_middleware(*, methods=None, urls=None, ignore=None)[source]¶
Ensure that handler execution would not break on
CancelledError.
Shielding handlers allow to avoid breaking handler execution on
CancelledError(this happens for example while client closes conneciton, but server still not ready to fullify response).
In most cases you need to shield non-idempotent methods (
PUT,
PATCH,
DELETE) and ignore shielding idempotent
GET,
HEAD,
OPTIONSand
TRACErequests.
More about shielding coroutines in official Python docs,
Other possibility to allow shielding request handlers by URLs dict. In that case order of dict keys is necessary as they will be processed from first to last added. In Python 3.6+ you can supply standard
dicthere, in Python 3.5 please supply
collections.OrderedDictinstance instead.
To shield all non-idempotent methods you need to:
from aiohttp import web app = web.Application( middlewares=shield_middleware(methods=NON_IDEMPOTENT_METHODS))
To shield all non-idempotent methods and
GETrequests to
/downloads/*URLs:
import re app = web.Application( middlewares=shield_middleware(urls={ re.compile(r'^/downloads/.*$'): 'GET', re.compile(r'.*'): NON_IDEMPOTENT_METHODS, }))
aiohttp.https¶
Change scheme for current request when aiohttp application deployed behind reverse proxy with HTTPS enabled.
aiohttp_middlewares.https.
https_middleware(match_headers=None)[source]¶
Change scheme for current request when aiohttp application deployed behind reverse proxy with HTTPS enabled.
This middleware is required to use, when your aiohttp app deployed behind nginx with HTTPS enabled, after aiohttp discounted
secure_proxy_ssl_headerkeyword argument in.
0.2.0 (In Development)¶
0.1.0 (2018-02-20)¶
- First non-beta release
- Support aiohttp 3.0 version
0.1.0b2 (2018-02-04)¶
- New
shield_middlewareto wrap request handler into asyncio.shield helper before execution
- Allow to match URL by regexp for shield/timeout middleware
0.1.0b1 (2017-10-20)¶
- New
https_middlewareto allow use proper scheme in
request.url, when deploying aiohttp behind reverse proxy with enabled HTTPS
- Allow passing dict of URLs with list methods to flex process of matching request ignored to wrapping into timeout context manager
|
https://aiohttp-middlewares.readthedocs.io/en/latest/
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
Communicate with a remote app service
In addition to launching an app on a remote device using a URI, you can run and communicate with app services on remote devices as well. Any Windows-based device can be used as either the client or host device. This gives you an almost limitless number of ways to interact with connected devices without needing to bring an app to the foreground.
Set up the app service on the host device
In order to run an app service on a remote device, you must already have a provider of that app service installed on that device. This guide will use CSharp version of the Random Number Generator app service sample, which is available on the Windows universal samples repo. For instructions on how to write your own app service, see Create and consume an app service.
Whether you are using an already-made app service or writing your own, you will need to make a few edits in order to make the service compatible with remote systems. In Visual Studio, go to the app service provider's project (called "AppServicesProvider" in the sample) and select its Package.appxmanifest file. Right-click and select View Code to view the full contents of the file. Create an Extensions element inside of the main Application element (or find it if it already exists). Then create an Extension to define the project as an app service and reference its parent project.
... <Extensions> <uap:Extension <uap3:AppService </uap:Extension> </Extensions> ...
Next to the AppService element, add the SupportsRemoteSystems attribute:
... <uap3:AppService ...
In order to use elements in this uap3 namespace, you must add the namespace definition at the top of the manifest file if it isn't already there.
<?xml version="1.0" encoding="utf-8"?> <Package xmlns="" xmlns: ... </Package>
Then build your app service provider project and deploy it to the host device(s).
Target the app service from the client device
The device from which the remote app service is to be called needs an app with Remote Systems functionality. This can be added into the same app that provides the app service on the host device (in which case you would install the same app on both devices), or implemented in a completely different app.
The following using statements are needed for the code in this section to run as-is:
using Windows.ApplicationModel.AppService; using Windows.System.RemoteSystems;
You must first instantiate an AppServiceConnection object, just as if you were to call an app service locally. This process is covered in more detail in Create and consume an app service. In this example, the app service to target is the Random Number Generator service.
Note
It is assumed that a RemoteSystem object has already been acquired by some means within the code that would call the following method. See Launch a remote app for instructions on how to set this up.
// This method returns an open connection to a particular app service on a remote system. // param "remotesys" is a RemoteSystem object representing the device to connect to. private async void openRemoteConnectionAsync(RemoteSystem remotesys) { // Set up a new app service connection. The app service name and package family name that // are used here correspond to the AppServices UWP sample. AppServiceConnection connection = new AppServiceConnection { AppServiceName = "com.microsoft.randomnumbergenerator", PackageFamilyName = "Microsoft.SDKSamples.AppServicesProvider.CS_8wekyb3d8bbwe" };
Next, a RemoteSystemConnectionRequest object is created for the intended remote device. It is then used to open the AppServiceConnection to that device. Note that in the example below, error handling and reporting is greatly simplified for brevity.
// a valid RemoteSystem object is needed before going any further if (remotesys == null) { return; } // Create a remote system connection request for the given remote device RemoteSystemConnectionRequest connectionRequest = new RemoteSystemConnectionRequest(remotesys); // "open" the AppServiceConnection using the remote request AppServiceConnectionStatus status = await connection.OpenRemoteAsync(connectionRequest); // only continue if the connection opened successfully if (status != AppServiceConnectionStatus.Success) { return; }
At this point, you should have an open connection to an app service on a remote machine.
Exchange service-specific messages over the remote connection
From here, you can send and receive messages to and from the service in the form of ValueSet objects (for more information, see Create and consume an app service). The Random number generator service takes two integers with the keys
"minvalue" and
"maxvalue" as inputs, randomly selects an integer within their range, and returns it to the calling process with the key
"Result".
// create the command input ValueSet inputs = new ValueSet(); // min_value and max_value vars are obtained somewhere else in the program inputs.Add("minvalue", min_value); inputs.Add("maxvalue", max_value); // send input and receive output in a variable AppServiceResponse response = await connection.SendMessageAsync(inputs); string result = ""; // check that the service successfully received and processed the message if (response.Status == AppServiceResponseStatus.Success) { // Get the data that the service returned: result = response.Message["Result"] as string; } }
Now you have connected to an app service on a targeted host device, run an operation on that device, and received data to your client device in response.
Related topics
Connected apps and devices (Project Rome) overview
Launch a remote app
Create and consume an app service
Remote Systems API reference
Remote Systems sample
|
https://docs.microsoft.com/en-us/windows/uwp/launch-resume/communicate-with-a-remote-app-service
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
#include <wx/filepicker.h>
This event class is used for the events generated by wxFilePickerCtrl and by wxDirPickerCtrl.
The following event handler macros redirect the events to member function handlers 'func' with prototypes like:
Event macros:
The constructor is not normally used by the user code.
Retrieve the absolute path of the file/directory the user has just selected.
Set the absolute path of the file/directory associated with the event.
|
https://docs.wxwidgets.org/3.0/classwx_file_dir_picker_event.html
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
Let me start this tutorial by taking some theoretical jargon out of your way. When we talk about image enhancement, this basically means that we want a new version of the image that is more suitable than the original one.
For instance, when you scan a document, the output image might have a lower quality than the original input image. We thus need a way to improve the quality of output images so they can be visually more expressive for the viewer, and this is where image enhancement comes into play. When we enhance an image, what we are doing is sharpening the image features such as its contrast and edges.
It is important to note that image enhancement does not increase the information content of the image, but rather increases the dynamic range of the chosen features, eventually increasing the image's quality. So here we actually don't know what the output image would look like, but we should be able to tell (subjectively) whether there were any improvements or not, like observing more details in the output image, for instance.
Image enhancement is usually used as a preprocessing step in the fundamental steps involved in digital image processing (i.e. segmentation, representation). There are many techniques for image enhancement, but I will be covering two techniques in this tutorial: image inverse and power law transformation. We'll have a look at how we can implement them in Python. So, let's get started!
As you might have guessed from the title of this section (which can also be referred to as image negation), image inverse aims to transform the dark intensities in the input image to bright intensities in the output image, and bright intensities in the input image to dark intensities in the output image. In other words, the dark areas become lighter, and the light areas become darker.
Say that I(i,j) refers to the intensity value of the pixel located at (i,j). To clarify a bit here, the intensity values in the grayscale image fall in the range [0,255], and (i,j) refers to the row and column values, respectively. When we apply the image inverse operator on a grayscale image, the output pixel O(i,j) value will be:
I(i,j)
(i,j)
[0,255]
O(i,j)
O(i,j) = 255 - I(i,j)
Nowadays, most of our images are color images. Those images contain three channels, red, green, and blue, referred to as RGB images. In this case, as opposed to the above formula, we need to subtract the intensity of each channel from 255. So the output image will have the following values at pixel (i,j):
RGB
O_R(i,j) = 255 - R(i,j)
O_G(i,j) = 255 - G(i,j)
O-B)i,j) = 255 - B(i,j)
After this introduction, let's see how we can implement the image inverse operator in Python. I would like to mention that for the sake of simplicity, I will run the operator on a grayscale image. But I will give you some thoughts about applying the operator on a color image, and I will leave the full program for you as an exercise.
The first thing you need to do for a color image is extract each pixel channel (i.e. RGB) intensity value. For this purpose, you can use the Python Imaging Library (PIL). Go ahead and download a sample baboon image from baboon.png. The size of the image is 500x500. Let's say you want to extract the red, green, and blue intensity values located at the pixel location (325, 432). This can be done as follows:
500x500
(325, 432)
from PIL import Image
im = Image.open('baboon.png')
print im.getpixel((325,432))
Based on the documentation, what the method getpixel() does is:
getpixel()
Returns the pixel value at a given position.
After running the above script, you will notice that you only get the following result: 138! But where are the three channels' (RGB) intensity values? The issue seems to be with the mode of the image being read. Check the mode by running the following statement:
138
mode
print im.mode
You will get the output P, meaning that the image was read in a palette mode. One thing you can do is convert the image to RGB mode before returning the intensity values of the different channels. To do that, you can use the convert() method, as follows:
P
convert()
rgb_im = im.convert('RGB')
In this case, you would get the following value returned: (180, 168, 178). This means that the intensity values for the red, green, and blue channels are 180, 168, and 178, respectively.
(180, 168, 178)
To put together everything we have described so far, the Python script which would return the RGB values of an image looks as follows:
from PIL import Image
im = Image.open('baboon.png')
rgb_im = im.convert('RGB')
print rgb_im.getpixel((325,432))
There is one point left before you move forward to the image inverse operator. The above example shows how to retrieve the RGB value of one pixel only, but when performing the inverse operator, you need to perform that on all the pixels.
To print out all the intensity values for the different channels of each pixel, you can do the following:
from PIL import Image
im = Image.open('baboon.png')
rgb_im = im.convert('RGB')
width, height = im.size
for w in range(width):
for h in range(height):
print rgb_im.getpixel((w,h))
At this point, I will leave it as an exercise for you to figure out how to apply the image inverse operator on all the color image channels (i.e. RGB) of each pixel.
Let's have a look at an example that applies the image inverse operator on a grayscale image. Go ahead and download boat.tiff, which will serve as our test image in this section. This is what it looks like:
I'm going to use the scipy library for this task. The Python script for applying the image inverse operator on the above image should look as follows:
scipy
import scipy.misc
from scipy import misc
from scipy.misc.pilutil import Image
im = Image.open('boat.tiff')
im_array = scipy.misc.fromimage(im)
im_inverse = 255 - im_array
im_result = scipy.misc.toimage(im_inverse)
misc.imsave('result.tiff',im_result)
The first thing we did after reading the image is to convert it to an ndarray in order to apply the image inverse operator on it. After applying the operator, we simply convert the ndarray back to an image and save that image as result.tiff. The figure below displays the result of applying image inverse to the above image (the original image is on the left, and the result of applying the image inverse operator is on the right):
result.tiff
Notice that some features of the image became clearer after applying the operator. Look, for instance, at the clouds and the lighthouse in the right image.
This operator, also called gamma correction, is another operator we can use to enhance an image. Let's see the operator's equation. At the pixel (i,j), the operator looks as follows:
p(i,j) = kI(i,j)^gamma
I(i,j) is the intensity value at the image location (i,j); and k and gamma are positive constants. I will not go into mathematical details here, but I believe that you can find thorough explanations of this topic in image processing books. However, it is important to note that in most cases, k=1, so we will mainly be changing the value of gamma. The above equation can thus be reduced to:
k
gamma
k=1
p(i,j) = I(i,j)^gamma
I'm going to use the OpenCV and NumPy libraries here. You can kindly check my tutorial Introducing NumPy should you need to learn more about the library. Our test image will again be boat.tiff (go ahead and download it).
OpenCV
NumPy
The Python script to perform the Power Law Transformation operator looks as follows:
import cv2
import numpy as np
im = cv2.imread('boat.tiff')
im = im/255.0
im_power_law_transformation = cv2.pow(im,0.6)
cv2.imshow('Original Image',im)
cv2.imshow('Power Law Transformation',im_power_law_transformation)
cv2.waitKey(0)
Notice that the gamma value we chose is 0.6. The figure below shows the original image and the result of applying the Power Law Transformation operator on that image (the left image shows the original image, and the right image shows the result after applying the power law transformation operator).
0.6
The result above was when gamma = 0.6. Let's see what happens when we increase gamma to 1.5, for instance:
gamma = 0.6
1.5
Notice that as we increase the value of gamma, the image becomes darker, and vice versa.
One might be asking what the use of the power law transformation could be. In fact, the different devices used for image acquisition, printing, and display respond according to the power law transformation operator. This is due to the fact that the human brain uses gamma correction to process an image. For instance, gamma correction is considered important when we want an image to be displayed correctly (the best image contrast is displayed in all the images) on a computer monitor or television screens.
In this tutorial, you have learned how to enhance images using Python. You have seen how to highlight features using the image inverse operator, and how the power law transformation is considered a crucial operator for displaying images correctly on computer monitors and television screens.
Furthermore, don’t hesitate to see what we have available for sale and for study in the Envato Market, and please ask any questions and provide your valuable feedback using the feed below.
|
http://mindmapengineers.com/mmeblog/image-enhancement-python?page=1
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
#include <wx/dcclient.h>
A wxClientDC must be constructed if an application wishes to paint on the client area of a window from outside an EVT_PAINT() handler.
This should normally be constructed as a temporary stack object; don't store a wxClientDC object.
To draw on a window from within an EVT_PAINT() handler, construct a wxPaintDC object instead.
To draw on the whole window including decorations, construct a wxWindowDC object (Windows only).
A wxClientDC object is initialized to use the same font and colours as the window it is associated with.
Constructor.
Pass a pointer to the window on which you wish to paint.
|
https://docs.wxwidgets.org/3.1.0/classwx_client_d_c.html
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
I’m having a problem
mongorestoreing data to a local dev server in Meteor 1.0.4.1. The collection in question is large, but by no means vast: 274MB bson, 35k docs.
Immediately after a
meteor reset, in Meteor 1.0.3.2 / MongoDB 2.4 I get:
Fri Mar 20 16:41:47.222 ./updates.bson
Fri Mar 20 16:41:47.222 going into namespace [meteor.updates]
Fri Mar 20 16:41:47.274 warning: Restoring to meteor.updates without dropping. Restored data will be inserted without raising errors; check your server log
Fri Mar 20 16:41:50.087 Progress: 80179407/274368881 29% (bytes)
Fri Mar 20 16:41:53.143 Progress: 163031299/274368881 59% (bytes)
Fri Mar 20 16:41:56.075 Progress: 245155299/274368881 89% (bytes)
34479 objects found
Fri Mar 20 16:41:57.800 Creating index: { key: { _id: 1 }, name: “id”, ns: “meteor.updates” }
But if I
meteor reset and upgrade to Meteor 1.0.4.1 / MongoDB 2.6 before doing the same I get:
Fri Mar 20 16:39:11.764 ./updates.bson
Fri Mar 20 16:39:11.764 going into namespace [meteor.updates]
Fri Mar 20 16:39:11.770 warning: Restoring to meteor.updates without dropping. Restored data will be inserted without raising errors; check your server log
Fri Mar 20 16:39:14.128 Progress: 72011103/274368881 26% (bytes)
Fri Mar 20 16:39:17.011 Progress: 148000775/274368881 53% (bytes)
Fri Mar 20 16:39:20.137 Progress: 233232012/274368881 85% (bytes)
Fri Mar 20 16:39:20.242 Socket say send() errno:104 Connection reset by peer 127.0.0.1:3001
assertion: 9001 socket exception [SEND_ERROR] server [127.0.0.1:3001]
Can anybody with more experience of MongoDB than I have advise why this could be? I don’t seem to be anywhere near the limits listed here.
Thanks
|
https://forums.meteor.com/t/mongorestore-fails-for-large-ish-collection-in-meteor-1-0-4-1-mongodb-2-6/1744
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
Time slice and Suspense API - What’s coming in React 17? - Pusher Blog
React powers so many awesome web and mobile apps such as Whatsapp, Instagram, Dropbox and Twitter. Along the road, React had to make some tough changes, an example being the migration from the difficult BSD + Patents license to the very non-restrictive MIT license following the decision by the Apache Foundation to ban the use of React. The change proved to be a key decision as not only did it bring more developers to React, it led a number of key projects such as WordPress and Drupal to adopt React
What’s new in React 17?
The Fiber rewrite that subsequently led to the release of React 16.0 came with changes such as Error boundaries, improved server side rendering, fragments and portals just to mention a few (learn more). However React 17 comes with even more exciting features. At a JavaScript conference in Iceland, JSConf 2018, the creator of Redux and a core team member of React, Dan Abramov, demoed the new features that would be present in React 17. In React’s latest release, a few factors that were addressed include:
- How network speed affects the loading state of your application and in the larger picture – user experience.
- How the state of your application is managed on low end devices.
Time slice
Early on in the development process of React 16.0, asynchronous rendering was kept off due to potential backwards compatibility issues. Although it enables faster response in UI rendering, it also introduces challenges for keeping track of changes in the UI. That’s where time slicing comes in. In the words of Dan:
Time slice was created to make asynchronous rendering easier for developers. Heavy React UIs on devices with “not so great” CPU power could make users experience a “slow” feel when navigating through the app. With time slicing, React can now split computations of updates on children components into chunks during idle callbacks and rendering work is spread out over multiple frames. This enhances UI responsiveness on slower devices. Time slice does a great job in handling all the difficult CPU scheduling tasks under the hood without developer considerations.
Suspense
Wouldn’t it be great if your app could pause any state update while loading asynchronous data? Well that’s one of the awesome features of suspense.
“Suspense provides an all-encompassing medium for components to suspend rendering while they load asynchronous data.”
Suspense takes asynchronous IO in React to a whole new level. With the suspense feature, ReactJS can temporarily suspend any state update until the data is ready while executing other important tasks. This feature makes working with asynchronous IO operators such as calling REST APIs like Falcor or GraphQL much more seamless and easier. Developers can now manage different states all at once while users still get to experience the app regardless of network speed – instead of displaying only loading spinners, certain parts of the app can be displayed while other parts load thus ensuring that the app stays accessible. In many ways, suspense makes Redux, the state management library appear even more defunct.
Suspense lets you *delay* rendering the content for a few seconds until the whole tree is ready. It *doesn’t* destroy the previous view while this is happening.
— Dan Abramov (@dan_abramov) March 4, 2018
While Dan was demonstrating how suspense works, he used an API called
createFetcher.
createFetcher can be described as a basic cache system that allows React to suspend the data fetching request from within the render method. A core member of the React team, Andrew Clark, made it clear what to expect from
createFetcher:
createFetcher from @dan_abramov's talk is this thing:
We're calling it simple-cache-provider (for now). It's a basic cache that works for 80% of use cases, and (when it's done) will serve as a reference implementation for Apollo, Relay, etc.
— Andrew Clark (@acdlite) March 1, 2018
Note: It must be noted that the
createFetcherAPI is extremely unstable and may change at anytime. Refrain from using it in real applications. You can follow up on its development and progress on Github.
To show you how suspense works, I’d like to adopt excerpts from Dan’s IO demo at JSConf 2018:
import { createFetcher, Placeholder, Loading } from '../future';
In the image above, the
createFetcher API imported from the future has a
.read method and will serve as a cache. It is where we will pass in the function
fetchMovieDetails which returns a promise.
In
MovieDetails, a value is read from the cache. If the value is already cached, the render continues like normal else the cache throws a promise. When the promise resolves, React continues from where it left off. The cache is then shared throughout the React tree using the new context API.
Usually components get cached from context. This implies that while testing, it’s possible to use fake caches to mock out any network requests.
createFetcher uses simple-cache-provider to suspend the request from within the render method thus enabling us begin rendering before all the data has returned.
simple-cache-provider has a
.preload method that we can use to initiate a request before React gets to the component that needs it. Let’s say in an ecommerce app you’re switching from ProductReview to Product, but only ProductInfo needs some data. React can still render Product while the data for ProductInfo is being fetched.
Suspense enables the app to be fully interactive while fetching data. Fears of race conditions occuring on the app while a user clicks around and triggers different actions are totally quashed. In the picture above, using the
Placeholder component, if the Movie review takes more than one second to load while waiting for async dependencies, React will show the spinner.
We can pause any state update until the data is ready and then add async loading to any component deep in the tree. This is possible through the
isLoading that lets us decide what to show.
Note:
simple-cache-provider. Chances are, just like every name or API that has been proposed, there might be breaking changes in future ReactJS releases. Ensure you refrain form using this in real applications.
Summary
With time slice we can handle all our arduous CPU scheduling tasks under the hood without any considerations. With suspense we solve all our async race conditions in one stroke. A brief recap of the points noted by Dan at JSConf 2018:
While you’re at it you can also watch Dan’s full presentation at JSConf 2018 where he gave detailed reasons behind the new features in React 17, most notably CPU and I/O optimizations.
|
http://brianyang.com/time-slice-and-suspense-api-whats-coming-in-react-17-pusher-blog/
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
Can’t resolve all parameters for ContactField: (?, ?, ?).
I have this issue when i am working with contacts in ionic3
Can’t resolve all parameters for ContactField: (?, ?, ?).
I have this issue when i am working with contacts in ionic3
I tried to save the contact list in my contacts.but it cause an issue Cant resolve all parameters for CantactField
Ionic info output:
cli packages: (C:\Users\Admin\sample_phoneCall\node_modules)
@ionic/cli-plugin-cordova : 1.6.1 @ionic/cli-plugin-ionic-angular : 1.4.0 .6.0
System:
Node : v8.1.4 OS : Windows 10 npm : 5.0.3
Here is My code:
contact() { let contact: Contact = this.contacts.create(); contact.name = new ContactName(null, 'Smith', 'John'); contact.phoneNumbers = [new ContactField('mobile', '6471234567')]; contact.save().then( () => console.log('Contact saved!', contact), (error: any) => console.error('Error saving contact.', error) ); }
Can’t resolve all parameters for ContactField: (?, ?, ?).
According to what you’re giving us, it seems like your missing a third obligatory field.
Definition of
ContactField is here:
Maybe try adding a third parameter “false”? Although this should not be needed really…
whoooops. Didn’t notice that. I don’t think the third param is the issue here (@Sujan12 already showed us actually).
Seems to be lacking any providing at all since none of the params are resolved. Did you installed the npm package and also correctly include the imports at the top of your file @sabarinathen? Also added the plugin to your app.module?
These imports:
import { Contacts, Contact, ContactField, ContactName } from '@ionic-native/contacts';
I imported already, if i am not import it was not accepted the parameter and also added in app.module file.
also tried as third parameter as false …same issue again
And are you sure you’ve installed the npm file? Could you post the contents of package.json here?
Just noticed something:
The code you posted is not using
' ... ' around the values but
‘ ... ’. Is this your real code?
@luukschoen : its my package.json file
@Sujan12:if you want i ll show the screen shot of my code.otherwise you just copy the code and then paste it in command check preview it will show the ‘…’ as similar to that ‘ … ’…
Could you still just copy and paste the content of the entire package.json? I’m only seeing part of it and it’s not really nice for screenreaders and stuff to post pictures (regardless the quotes). I’m curious to see your dependencies, I definitely believe the plugin is in the right place.
I copied your code to my VS Code, I have a project that does Ionic Native Contacts stuff, and the code didn’t work - that’s why I was asking.
Edit: Figure out that editing your post and marking the code as code (by using the
</> button above the post) fixed the issue. So not relevant to your actual problem.
{ "name": "sample_phoneCall", "version": "0.0.1", "author": "Ionic Framework", "homepage": "", "private": true, "scripts": { "clean": "ionic-app-scripts clean", "build": "ionic-app-scripts build", "lint": "ionic-app-scripts lint", "ionic:build": "ionic-app-scripts build", "ionic:serve": "ionic-app-scripts serve" }, "dependencies": { "@angular/common": "4.1.3", "@angular/compiler": "4.1.3", "@angular/compiler-cli": "4.1.3", "@angular/core": "4.1.3", "@angular/forms": "4.1.3", "@angular/http": "4.1.3", "@angular/platform-browser": "4.1.3", "@angular/platform-browser-dynamic": "4.1.3", "@ionic-native/call-number": "^4.1.0", "@ionic-native/contacts": "^4.1.0", "@ionic-native/core": "3.12.1", "@ionic-native/splash-screen": "3.12.1", "@ionic-native/status-bar": "3.12.1", "@ionic/storage": "2.0.1", "call-number": "^1.0.1", "cordova-android": "^6.2.3", "cordova-plugin-compat": "^1.0.0", "cordova-plugin-console": "^1.0.5", "cordova-plugin-contacts": "^2.3.1", "cordova-plugin-device": "^1.1.4", "cordova-plugin-splashscreen": "^4.0.3", "cordova-plugin-statusbar": "^2.2.2", "cordova-plugin-whitelist": "^1.3.1", "ionic-angular": "3.6.0", "ionic-plugin-keyboard": "^2.2.1", "ionicons": "3.0.0", "mx.ferreyra.callnumber": "~0.0.2", "rxjs": "5.4.0", "sw-toolbox": "3.6.0", "zone.js": "0.8.12" }, "devDependencies": { "@ionic/app-scripts": "2.1.3", "@ionic/cli-plugin-cordova": "1.6.1", "@ionic/cli-plugin-ionic-angular": "1.4.0", "ionic": "3.7.0", "typescript": "2.3.4" }, "description": "An Ionic project", "cordova": { "plugins": { "mx.ferreyra.callnumber": {}, "cordova-plugin-console": {}, "cordova-plugin-device": {}, "cordova-plugin-splashscreen": {}, "cordova-plugin-statusbar": {}, "cordova-plugin-whitelist": {}, "ionic-plugin-keyboard": {}, "cordov-plugin-contacts": {} }, "platforms": [ "android" ] } }
I have the exact code you posted in my contacts demo app:
See the last commit. I tested on a Nexus 5 via
ionic cordova run android.
That’s curious. What does ionic cordova plugin list output ? I also see a difference in versions between ionic-native/core and the plugins later installed, it’s best if you keep them both up to date.
once again i tried it removing the contact plugin and try again again the same issue…
It wil show the white screen running in android device
|
https://forum.ionicframework.com/t/ionic-native-contacts-are-not-working/100707
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
Imports
Syntax:
import = 'import', identifier, { '::', identifier }, [ symbols ]; symbols = '::(', symbol, { ',', symbol }, ')'; symbol = 'self' | constant, [ 'as', constant ];
Imports start with the
import keyword, and are followed by at least one
identifier. Sub modules are separated using
::, and the list of symbols to
import (if any) is defined using
::(symbol, symbol, ...). Symbols can be
aliased using
original as alias.
self can be used in the list of symbols to
import to refer to the module itself, allowing you to import the module itself
along with any additional symbols.
Examples
Importing a module:
import std::fs
Importing a module and aliasing it:
import std::fs::(self, Foo)
Importing multiple symbols:
import std::thing::(Foo, Bar, Baz)
Importing multiple symbols, and aliasing some:
import std::thing::(Foo, Bar as Baz)
|
https://inko-lang.org/manual/syntax/imports/
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
This on how to work with data.
Classes in this topic are available in the Mediachase.BusinessFoundation.Data.Business and Mediachase.BusinessFoundation.Data namespaces.
- DataContext initialization
- EntityObject class
- BusinessManager class
- Request-response system
- Typed entity objects
- MetaObject class user time zone and the following events are executed by the BusinessManager class_1<<, you, it must be registered in the Application config file, enabling the application to call the handler. In the config file of the application, create a mediachase.businessFoundation.data/businessManager section.
The following example shows how to register a handler that responds to requests for the Test method for Class_1 meta-class. The handler is defined as the class SampleHandler in the assembly SampleHandlerAssembly.
Example: Register a plugin
<businessManager> <plugins> <add metaClass="*" method="Update;Create" eventStage="PreMainOperationInsideTranasaction" type="SampleHandler, SampleHandlerAssembly" /> </plugins> </businessManager> section for information about FilterElement and SortingElement.
Typed entity objects
Entity objects that are used in your applications can be either typed or untyped. However, BF has other tools to support typed entity objects, with which programming is easier and less error-prone.
To use typed entity objects:
- Run McCodeGen to create typed entity object C# class.
- Register a new typed object as primary type for meta-class. entity object
The McCodeGen application can generate C# classes from the mcgen file. A typed entity object class is very similar to the untyped class EntityObject, but includes properties mapped to column values and column name constant string.
The Mcgen file should include this:
<?xml version="1.0" encoding="utf-8" ?> <mcgen xmlns="mediachase.codegen.config" extension="cs" template="Templates\BusinessFoundationEntityObject.aspx"> <EntityObject> <ConnectionString> Data Source=(local);Initial Catalog=TestDB;User ID=sa;Password=; </ConnectionString> <MetaClass>Book</MetaClass> <Params> <add key="Namespace" value="Test.MetaClass"/> </Params> </EntityObject> </mcgen>
Paste your connection string and meta-class name, declare an output class namespace, then execute McCodeGen from the command line.
McCodeGen.exe -mcgen:BookEntity.mcgen -out:BookEntity.cs
Then you can add a typed entity object class to your .NET project.
Typed entity object registration
After you create a typed entity object, register it in the application. Create a new handler based on EntityObjectDefaultRequestHandler and override the CreateEntityObject method, which creates and returns a typed entity object. After you create a handler, register it.
Note: BF allows for executing entity object methods through Web Service.
Visual Studio integration
Note: McCodeGen should be installed in order for the Visual Studio integration to work.
You can integrate the McCodeGen application with Visual Studio by adding the mcgen file to aVisual Studio project.
Check the properties for your mcgen file (in this case a CategoryRow.mcgen), the Custom Tool field should be blank. Enter the name McCodeGenerator.
The tool runs automatically. Check to make sure that the output was created by opening the Solution Explorer.
, you.
|
https://world.episerver.com/documentation/Items/Developers-Guide/Episerver-Commerce/9/Business-Foundation/Working-with-entity-objects/
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
#include <CUserManager.h>
The CloudBuilder::CUserManager is the second class you will use once you are connected with CloudBuilder::CClan::Setup method. This class manages a user profile. The class is a singleton and is not designed to be overridden. In version 2, we use the concept of delegates to get the results, in the same way it was designed in the C# wrapper.
Run a user authenticated batch on the server side. Batch is edited on BackOffice server.
Helper method that allows easily to post a success. For this to work, you need to have an achievement that uses the same unit as the name of the achievement.
Method used to retrieve the displayName given by the user at creation time or by a call to SetProfile.
Method used to retrieve the GamerID.
Method used to retrieve all the godchildren for the currently logged in user.
Method used to retrieve the godfather of the currently logged in user.
Method to call in order to generate a temporary code that can be used to obtain a new godchild.
Method used to retrieve the email address given by the user at creation time or by a call to SetProfile .
Use this method to obtain a reference to the CUserManager.
Fetches information about the status of the achievements configured for this game. Additional configuration can be provided in the form of JSON data.
Method used to retrieve global user information, including profile, friends, devices, ... It's the same JSON as the one received upon the Login/ResumeSession process.
If the user is logged in with a Facebook linked account, you can use this method to publish something on this user's wall.
Method used to signal that the currently logged in user is ready to receive push notifications. Usually, this is called internally by CotC at login time, unless CClan::Setup has been called with an option to delay the registration.
Allows to store arbitrary data for a given achievement and the current player (appears in the 'gamerData' node of achievements).
Method to call to attribute a godfather to the currently logged in user.
Method to send a transaction for this user.
|
http://cloudbuilder.clanofthecloud.mobi/doc/class_cloud_builder_1_1_c_user_manager.html
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
> The Connections are direct from their machines. > > -----Original Message----- > From: Jason Smith [mailto:[EMAIL PROTECTED]] > Sent: Sunday, December 29, 2002 5:38 PM > To: [EMAIL PROTECTED]; [EMAIL PROTECTED] > Subject: Re: [vchkpw] vpopmail + courier-imap + mysql relay problem > > > I am having problems with Imap client IP's not getting added to the > > relay table, pop3 clients get added fine. > > It says imap in the lastauth table for the user, but does not add > their > > IP > > > > What do I need to do to correct this problem? > > > > Please help > > > > Jason > > Diamond International > > > Are your IMAP connections direct from the client machines, or via a > webmail > interface (such as squirrelmail)? >
>From what I can tell with my very limited C knowledge, line 62 of authlib/preauthvchkpw.c is specifying that "imap" be logged into the lastauth table: Here is a call to the vset_lastauth function from preauthvchkpw.c, passing "service" which would be "imap" in this case. #ifdef ENABLE_AUTH_LOGGING vset_lastauth(User, Domain, service); #endif Whereas, vpopmail gets the value for "ipaddr" from the environment variable TCPREMOTEIP and uses this in place of "service" when logging connections. Maybe there is a way to modify preauthvchkpw.c and recompile courier-imap to call vset_lastauth with the actual remote IP address vs. the "service".
|
https://www.mail-archive.com/[email protected]/msg10370.html
|
CC-MAIN-2018-51
|
en
|
refinedweb
|
In React we could have a component that shows a series title and the current episode's title while also rendering a
PlayButton sub-component.
const PlayerView = ({ episode }) => {const [isPlaying, setPlaying] = useState(false);return (<div style={{ display: "flex", flexDirection: "column" }}><span>{episode.title}</span><span>{episode.showTitle}</span><PlayButton isPlaying={isPlaying} setPlaying={setPlaying} /></div>);};
Not how this is similar to the way that
@State works. We can specify a private var wrapped with
@State.
When the state value changes, the view invalidates its appearance and recomputes the body. - swiftui/state
struct PlayerView: View {var episode: Episode@State private var isPlaying: Bool = falsevar body: some View {VStack {Text(episode.title)Text(episode.showTitle)PlayButton(isPlaying: $isPlaying)}}}
In both cases, changes to the state cause a "re-render" of the component. In Swift's case, the
@State is a wrapped value, which is why we have to access the projected value using
$isPlaying when passing it to the
PlayButton component.
The behavior of how we're passing in
isPlaying is slightly different, so I've included
setPlaying in the React example whereas the
isPlaying value could be manipulated by the
PlayButton in the swift example.
|
https://www.christopherbiscardi.com/swift-ui-state-property-wrapper-vs-react-use-state
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Storybook is a set of libraries that lets us create components and preview them by passing in various attributes to them. The recent release of Storybook 6 included many convenient new features. Without further ado, let’s take a look at the new features and how to use them.
Zero-config setup
With Storybook 6, we can build a Storybook with ease: all we have to do is run
npx sb init on our project and we have Storybook added.
If we wanted to add Storybook to a React project created with create-react-app, for example, we’d just use that command. Do note, however, that
npx sb init only works with existing projects and can’t be used on an empty project folder.
So, to use Storybook with a React project, we first run:
npx create-react-app storybook-project
This creates the
storybook-project React project. Then, we go to the
storybook-project folder and run
npx sb init to add Storybook to it.
To upgrade an existing Storybook project to the latest version, we run
npx sb upgrade to install it. We’d then run
yarn add @storybook/addon-essentials --dev to install the addons, which render the content we see below the preview of the component.
The Storybook Essentials package has a few useful addons for changing the viewport in which we can preview our component. It also has an addon that allows us to document our component using either JSX or MDX code. (MDX is a mix of Markdown and JSX.)
Other addons include:
- The actions addon: Lets us log event objects emitted from various events, such as clicks, mouseover, keyboard events, etc.
- The backgrounds addon: Lets us set the background to our preferred color when previewing our component
- The toolbars addon: Lets us customize the toolbar at the top of the Storybook screen with our own preferences
TypeScript support is also built-in with Storybook 6, so we can immediately use TypeScript out of the box without extra configuration.
Args for stories
In Storybook, args are attributes that we pass into our components to change it. This lets us make preset configurations for our component so that we can preview them.
We can set the args in the story files. For example, if we have a React Storybook project, we can create our components and stories as follows:
//src/stories/Button.js import React from 'react'; import PropTypes from 'prop-types'; import './button.css'; export const Button = ({ primary, backgroundColor, size, label, ...props }) => { const mode = primary ? 'button-primary' : 'button-secondary'; return ( <button type="button" className={['button', `button-${size}`, mode].join(' ')} style={backgroundColor && { backgroundColor }} {...props} > {label} </button> ); }; Button.propTypes = { primary: PropTypes.bool, backgroundColor: PropTypes.string, size: PropTypes.oneOf(['small', 'medium', 'large']), label: PropTypes.string.isRequired, onClick: PropTypes.func, }; Button.defaultProps = { backgroundColor: null, primary: false, size: 'medium', onClick: undefined, };
//src/stories/button.css .button { font-weight: 700; border: 0; border-radius: 3em; cursor: pointer; display: inline-block; line-height: 1; } .button-primary { color: white; background-color: #1ea7fd; } .button-secondary { color: #333; background-color: transparent; } .button-small { font-size: 12px; padding: 10px; } .button-medium { font-size: 14px; padding: 11px; } .button-large { font-size: 16px; padding: 12px; }
//src/stories/Button.stories.js import React from 'react'; import { Button } from './Button'; export default { title: 'Example/Button', component: Button, argTypes: { backgroundColor: { control: 'color' }, }, }; const Template = (args) => <Button {...args} />; export const Primary = Template.bind({}); Primary.args = { primary: true, label: 'Button', }; export const Secondary = Template.bind({}); Secondary.args = { label: 'Button', }; export const Large = Template.bind({}); Large.args = { size: 'large', label: 'Button', }; export const Small = Template.bind({}); Small.args = { size: 'small', label: 'Button', };
The
Button.js file has the component file, and the
button.css has the styles for the
Button component.
The
Button components takes several props:
- The
primaryprop lets us set the class for to style the button in various ways
backgroundColorset the background color
sizesets the size
labelsets the button text
The rest of the props are passed into the
button element.
Below that, we add some prop type validations so that we can set our args properly and let Storybook pick the controls for the args.
primary is a Boolean, so it’ll be displayed as a checkbox button.
backgroundColor is a string.
size can be one of three values, so Storybook will create a dropdown for it automatically to let us select the value.
label is a string prop, so it’ll show as a text input. The input controls are in the Controls tab of the Storybook screen below the component preview.
The args are set in the
Button.stories.js file, which is a file with the stories. Storybook will pick up any file that ends with
stories.js or
stories.ts as a story files.
The
argTypes property lets us set the control for our args. In our example, we set the
backgroundColor prop to be controlled with the
'color' control, which is the color picker.
Below that, we have our stories code. We create a template from the
Button component with our
Template function. It takes the args we pass in and passes them all off to the
Button.
Then, we call
Template.bind to let us pass the args as props to
Button by setting the
args property to an object with the props.
Template.bind returns a story object, which we can configure with args. This is a convenient way to set the props that we want to preview in our story.
Live-edit UI components
The Controls tab has all the form controls that we can use to set our component’s props. Storybook picks up the props and displays the controls according to the prop type.
Also, we can set the form control type as we wish in the stories file, as we’ve seen in the
argTypes property in the previous sections’ example. With this, we can set the props live in the Storybook screen and see what the output looks like in the Canvas tab.
The
backgroundColor prop’s value is changed with a color picker. The
primary prop is changed with a toggle button that lets us set it to
true or
false. And the
size prop is controlled with a dropdown since it can only be one of three values.
Storybook does the work automatically unless we change the control types ourselves. This is a very useful feature that lets us change our component without changing any code.
Combine multiple Storybooks
Storybook 6 introduces the ability to combine multiple Storybook projects by referencing different Storybook projects in another project.
We can do this by adding the following code in the
.storybook/main.js file:
module.exports = { //... refs: { react: { title: "React", url: '' }, angular: { title: "Angular", url: '' } } }
This lets us load multiple Storybook projects’ stories in one project. Now, if we run
npm run storybook, we’ll see all the Storybook stories displayed from both projects on the left sidebar.
The
title value is displayed in the left sidebar, and the
url has the URL to reach the Storybook project.
Conclusion
Storybook 6 comes with many useful new features. Storybook setup in existing projects can now be done with one command if you have a project that Storybook supports. We can use args to preset props in stories and preview them easily, and we can reference another Storybook project from another with minimal configuration..
|
https://blog.logrocket.com/whats-new-in-storybook-6/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
DR-GAN extractor not available ?
Hey @amohammadi @tiago.pereira
Just installed the package (conda install bob.ip.tensorflow) to prepare an environment for the FARGO project. For some reason, the class DrGanMSUExtractor is not available, and I don't know why, since it is here:
Here's the error I get:
from bob.ip.tensorflow_extractor import DrGanMSUExtractor ImportError: cannot import name 'DrGanMSUExtractor'
Do you have any idea about this ?
Thanks !
|
https://gitlab.idiap.ch/bob/bob.ip.tensorflow_extractor/-/issues/3
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
In this tutorial we will check how to control a relay connected to the ESP32 remotely, using sockets. The code will be developed using the Arduino core. The tests of this ESP32 tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board and a DFRobot relay board.
Introduction
In this tutorial we will check how to control a relay connected to the ESP32 remotely, using sockets. The code will be developed using the Arduino core.
The ESP32 will be acting as a socket server, receiving a connection from a client, which will send commands to turn the relay on or off. The commands will be very simple, a ‘1’ will correspond to turning the relay on and a ‘0’ to turn it off.
We will use Putty as a socket client, to avoid having to code a client ourselves. You can check how to reach a socket server hosted on the ESP32 using Putty on this previous tutorial.
The tests of this ESP32 tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board and a DFRobot relay board.
The electrical diagram
To make things simpler, I’m assuming the use of a ready to use relay module, which allows to control the relay directly from a pin of a microcontroller.
These type of modules already contain all the electronics needed so we can use a digital pin from a microcontroller without damaging it. In my case, as already mentioned in the introductory section, I’m using a DFRobot relay board.
Depending on your relay board, it may work with a power supply of 3.3 V or 5 V. The one I’m using works well with 3.3 V, as illustrated in the connection diagram of figure 1.
Figure 1 – Electrical schematic of the connection between the ESP32 and the relay board.
Note that the labels of the pins may differ between different modules of relays, so in figure 1 the input pin for controlling the relay was generically called SIG.
The power needed for the relay module may be provided directly from your ESP32 board, in case it has a power pin capable of providing enough current. If you are not sure if your board has such pin or what is the maximum current draw supported, my recommendation is to use an external power supply such as this.
If you use an external power supply, don’t forget to have a common ground for all the devices.
As a final note and as already mentioned in other tutorials, the control logic we will see below in the code is independent of the actuator, as long as it can be controlled by a single digital pin of the ESP32. So, although we are using a relay on this tutorial to exemplify a potencial use case, the code below can be used for a much more vast group of actuators.
The code
We start the code by importing the WiFi.h library, so we can connect the ESP32 to a Wireless Network. In order to be able to connect to the network, we will need to store its access credentials, namely the network name (service set identifier or SSID) and the password.
Additionally, we need an object of class WiFiServer, which will be used to configure the socket server and to receive the connections from the socket clients. Remember that the constructor for this class receives the port where the server will be listening as argument. I will use port 80, but you can test with other values, as long as you use them later when testing the whole system.
Finally, we will also store the number of the pin that will control the relay on a global variable, so it is easily changeable. I will be using pin 23 but you can test with others.
#include "WiFi.h" const char* ssid = "yourNetworkName"; const char* password = "yourNetworkPass"; WiFiServer wifiServer(80); int relayPin = 23;
As usual, we will use the Arduino setup function to take care of initializing a serial connection to output the results of the program, and also to connect the ESP32 to the Wireless Network.
After that, we will call the begin method on the previously created WiFiServer object. This method is called for the server to start listening to the incoming connections.
Finally, at the end of the setup function, we will set the pin of the microcontroller that will control the relay as output. This is done with a call to the pinMode function, passing as first argument the number of the pin we have stored in a global variable and as second the constant OUTPUT.
As an additional safeguard, we will set the status of the pin to a low digital state. This way, we always know the initial state of the relay when the program starts running.
We do it using the digitalWrite function, which receives as first input the number of the pin and as second a constant, either LOW or HIGH, depending if we want to set the pin to a low or high digital state, respectively. In our case, we should use LOW.); }
Moving on to the main loop function, we will handle the reception of connections from socket clients.
We start by calling the available method on the WiFiServer object, which will return an object of class WiFiClient. We will use this object to receive the actual data from the client.
WiFiClient client = wifiServer.available();
Nonetheless, before receiving the data, we need to check if a client is indeed connected, since the previously called available method is non-blocking by default, and thus it always returns a WiFiClient object, which may be connected or not.
Remember from the previous tutorials that the WiFiClient class overrides the C++ bool operator to return the value of the connected method of that class, which tells us if the client is effectively connected or not. Thus, we simply need to enclose the object in a IF statement.
if (client) { // Handle the reception of data }
Inside this conditional block, we start reading the data while the client is connected. We use the connected method to check if the client is still connected and we use the available method to check if there is data available to read. These methods are both called on the WiFiClient object.
If there is data to read, we read each byte with a call to the read method on the WiFiClient object. We will pass the result of this method call to a function we will analyse below, which we will call processedReceivedValue. This function will analyze the value of the received command and actuate over the relay accordingly.
Additionally, for debugging, we will write the received byte to the serial port.
while (client.connected()) { while (client.available()>0) { char c = client.read(); processReceivedValue(c); Serial.write(c); } delay(10); }
Once the client disconnects, we call the stop method on the WiFiClient object, to free the resources. Then, we complete the loop function and go back to the beginning, checking if there are new clients available
client.stop();
To finalize, we will declare the function that will process the received command, which takes the form of a byte. Note that we are interpreting it as a char, since it will be sent from a command line tool where the user will type the command.
The logic of the function will be as simple as comparing the received value with ‘1’ and ‘0’. If the received value was ‘1’, then we turn on the relay, and if it was ‘0’ we turn it off. Otherwise, we do nothing.
As mentioned, these values will be typed by the user, so we are doing the comparison assuming the values are characters, which is why we compare it with the chars ‘1’ and ‘0’ and not with byte values 1 and 0.
void processReceivedValue(char command){ if(command == '1'){ digitalWrite(relayPin, HIGH); } else if(command == '0'){ digitalWrite(relayPin, LOW);} return; }
You can check the final source code below.
#include "WiFi.h" const char* ssid = "yourNetworkName"; const char* password = "yourNetworkPass"; WiFiServer wifiServer(80); int relayPin = 23; void processReceivedValue(char command){ if(command == '1'){ digitalWrite(relayPin, HIGH); } else if(command == '0'){ digitalWrite(relayPin, LOW);} return; }); } void loop() { WiFiClient client = wifiServer.available(); if (client) { while (client.connected()) { while (client.available()>0) { char c = client.read(); processReceivedValue(c); Serial.write(c); } delay(10); } client.stop(); Serial.println("Client disconnected"); } }
Testing the code
To test the whole system, first compile and upload the code to the ESP32. Once the procedure finishes, open the Arduino IDE serial monitor and copy the IP address that gets printed as soon as the ESP32 finishes connecting to the WiFi network.
Then, open Putty and configure it accordingly to what is illustrated on figure 2. As shown, we need to choose “Raw“ as connection type and then put the IP address copied from the Arduino IDE serial monitor on the checkbox labeled as “Host Name (or IP Address)”. We also need to use the port defined on the Arduino code (80).
Figure 2 – Configuring Putty to reach the ESP32 socket server.
After clicking “Open“, a command line should appear. There you can type the values 1 and 0 to turn on and off the relay, respectively.
Related Posts
- ESP32 Socket Server: Connecting from a Putty socket Client
- ESP32 Arduino Socket server: Getting remote client IP
- ESP32 Arduino: Sending data with socket client
- ESP32 Arduino Bluetooth Classic: Controlling a relay remotely
- ESP32 Arduino HTTP server: controlling a relay remotely
- ESP32 Arduino: Controlling a relay
2 Replies to “ESP32 Socket Server: Controlling a relay remotely”
|
https://techtutorialsx.com/2018/05/25/esp32-socket-server-controlling-a-relay-remotely/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
The hl7 component is used for working with the HL7 MLLP protocol and HL7 v2 messages using the HAPI library. This component supports the following:
Agnostic data format using either plain String objects or HAPI HL7 model objects.
Type Converter from/to HAPI and String
HL7 DataFormat using HAPI library
Even more ease-of-use as it's integrated well with the Camel-Mina and Camel-Mina2 (for Camel 2.11+) components.
Maven users will need to add the following dependency to their
pom.xml for
this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hl7</artifactId> <!-- use the same version as your Camel core version --> <version>x.x.x</version> </dependency>
HL7 is often used with the HL7 MLLP protocol that is a text based TCP socket based
protocol. This component ships with a Mina Codec that conforms to the MLLP protocol so you
can easily expose a HL7 listener that accepts HL7 requests over the TCP transport. To expose
a HL7 listener service we reuse the existing Camel Mina or Mina2 components where we just
use
HL7MLLPCodec as codec.
The HL7 MLLP codec has the following options:
In our Spring XML file, we configure an endpoint to listen for HL7 requests using TCP:
<endpoint id="hl7listener" uri="mina:tcp://localhost:8888?sync=true&codec=#hl7codec"/> <!-- for Camel 2.11: use uri="mina2:tcp..." -->
Notice that we use TCP on localhost on port 8888. We use
sync=true to
indicate that this listener is synchronous and therefore will return a HL7 response to the
caller. Then we setup Mina to use our HL7 codec with codec=#hl7codec. Notice that hl7codec
is just a Spring bean ID, so we could have named it mygreatcodecforhl7 or whatever. The
codec is also set up in the Spring XML file:
<bean id="hl7codec" class="org.apache.camel.component.hl7.HL7MLLPCodec"> <property name="charset" value="iso-8859-1"/> </bean>
And here we configure the charset encoding to use, and iso-8859-1 is commonly used.
The endpoint hl7listener can then be used in a route as a consumer, as this java DSL example illustrates:
from("hl7listener").to("patientLookupService");
This is a very simple route that will listen for HL7 and route it to a service named patientLookupService that is also a Spring bean ID we have configured in the Spring XML as:
<bean id="patientLookupService" class="com.mycompany.healtcare.service.PatientLookupService"/>
Another powerful feature of Camel is that we can have our business logic in POJO classes that is not tied to Camel as shown here:
import ca.uhn.hl7v2.HL7Exception; import ca.uhn.hl7v2.model.Message; import ca.uhn.hl7v2.model.v24.segment.QRD; public class PatientLookupService { public Message lookupPatient(Message input) throws HL7Exception { QRD qrd = (QRD)input.get("QRD"); String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue(); // find patient data based on the patient id and // create a HL7 model object with the response Message response = ... create and set response data return response; } }
Notice that this class uses imports from the HAPI library and not from Camel.
The HL7MLLP codec uses plain Strings as its data format. Camel uses its Type Converter to convert to/from strings to the HAPI HL7 model objects. However, you can use plain String objects if you prefer, for instance if you wish to parse the data yourself.
The HL7v2 model uses Java objects from the HAPI library. Using this library, we can encode and decode from the EDI format (ER7) that is mostly used with HL7v2. With this model you can code with Java objects instead of the EDI based HL7 format that can be hard for humans to read and understand.
The sample below is a request to lookup a patient with the patient ID 0101701234.
MSH|^~\\&|MYSENDER|MYRECEIVER|MYAPPLICATION||200612211200
||QRY^A19|1234|P|2.4
QRD|200612211200|R|I|GetPatient|||1^RD|0101701234|DEM||
Using the HL7 model we can work with the data as a ca.uhn.hl7v2.model.Message object. To retrieve the patient ID in the message above, you can do this in Java code:
Message msg = exchange.getIn().getBody(Message.class); QRD qrd = (QRD)msg.get("QRD"); String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue();
Camel has built-in type converters, so when this operation is invoked:
Message msg = exchange.getIn().getBody(Message.class);
If you know the message type in advance, you can be more type-safe:
QRY_A19 msg = exchange.getIn().getBody(QRY_A19.class); String patientId = msg.getQRD().getWhoSubjectFilter(0).getIDNumber().getValue();
Camel will convert the received HL7 data from String to Message. This is powerful when combined with the HL7 listener, then you as the end-user don't have to work with byte[], String or any other simple object formats. You can just use the HAPI HL7v2 model objects.
To use HL7 in your Camel routes you'll need to add a Maven dependency on camel-hl7 listed above, which implements this data format. The HAPI library is split into a base library and several structures libraries, one for each HL7v2 message version.
By default camel-hl7 only references the HAPI base library. Applications are responsible for including structures libraries themselves. For example, if a application works with HL7v2 message versions 2.4 and 2.5 then the following dependencies must be added:
<dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-structures-v24</artifactId> <!-- use the same version as your hapi-base version --> <version>1.2</version> </dependency>
<dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-structures-v25</artifactId> <!-- use the same version as your hapi-base version --> <version>1.2</version> </dependency>
Alternatively, an OSGi bundle containing the base library, all structure libraries and required dependencies (on the bundle classpath) can be downloaded from the central Maven repository:
<dependency> <groupId>ca.uhn.hapi</groupId> <artifactId>hapi-osgi-base</artifactId> <version>1.2</version> </dependency>
Note that the version number must match the version of the hapi-base library that is transitively referenced by this component.
See the Camel Website for examples of this component in use.:");
Often it is preferable to parse a HL7v2 message and validate it against a HAPI ValidationContext in a separate step afterwards.())
|
https://help.talend.com/reader/SUzvVjxkFWs4p6BXVXwyHQ/p3y0lUe96EQJSBT1TkBA_A
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Organizing Layers Using Hexagonal Architecture, DDD, and Spring
Last modified: February 16, 2020
I just announced the new Learn Spring course, focused on the fundamentals of Spring 5 and Spring Boot 2:>> CHECK OUT THE COURSE
1. Overview
In this tutorial, we'll implement a Spring application using DDD. Additionally, we'll organize layers with the help of Hexagonal Architecture.
With this approach, we can easily exchange the different layers of the application.
2. Hexagonal Architecture
Hexagonal architecture is a model of designing software applications around domain logic to isolate it from external factors.
The domain logic is specified in a business core, which we'll call the inside part, the rest being outside parts. Access to domain logic from the outside is available through ports and adapters.
3. Principles
Firstly, we should define principles to divide our code. As explained briefly already, hexagonal architecture defines the inside and the outside part.
What we'll do instead is divide our application into three layers; application (outside), domain (inside), and infrastructure (outside):
Through the application layer, the user or any other program interacts with the application. This area should contain things like user interfaces, RESTful controllers, and JSON serialization libraries. It includes anything that exposes entry to our application and orchestrates the execution of domain logic.
In the domain layer, we keep the code that touches and implements business logic. This is the core of our application. Additionally, this layer should be isolated from both the application part and the infrastructure part. On top of that, it should also contain interfaces that define the API to communicate with external parts, like the database, which the domain interacts with.
Lastly, the infrastructure layer is the part that contains anything that the application needs to work such as database configuration or Spring configuration. Besides, it also implements infrastructure-dependent interfaces from the domain layer.
4. Domain Layer
Let's begin by implementing our core layer, which is the domain layer.
Firstly, we should create the Order class:
public class Order { private UUID id; private OrderStatus status; private List<OrderItem> orderItems; private BigDecimal price; public Order(UUID id, Product product) { this.id = id; this.orderItems = new ArrayList<>(Arrays.astList(new OrderItem(product))); this.status = OrderStatus.CREATED; this.price = product.getPrice(); } public void complete() { validateState(); this.status = OrderStatus.COMPLETED; } public void addOrder(Product product) { validateState(); validateProduct(product); orderItems.add(new OrderItem(product)); price = price.add(product.getPrice()); } public void removeOrder(UUID id) { validateState(); final OrderItem orderItem = getOrderItem(id); orderItems.remove(orderItem); price = price.subtract(orderItem.getPrice()); } // getters }
This is our aggregate root. Anything related to our business logic will go through this class. Additionally, Order is responsible for keeping itself in the correct state:
- The order can only be created with the given ID and based on one Product – the constructor itself also inits the order with CREATED status
- Once the order is completed, changing OrderItems is impossible
- It's impossible to change the Order from outside the domain object, like with a setter
Furthermore, the Order class is also responsible for creating its OrderItem.
Let's create the OrderItem class then:
public class OrderItem { private UUID productId; private BigDecimal price; public OrderItem(Product product) { this.productId = product.getId(); this.price = product.getPrice(); } // getters }
As we can see, OrderItem is created based on a Product. It keeps the reference to it and stores the current price of the Product.
Next, we'll create a repository interface (a port in Hexagonal Architecture). The implementation of the interface will be in the infrastructure layer:
public interface OrderRepository { Optional<Order> findById(UUID id); void save(Order order); }
Lastly, we should make sure that the Order will be always saved after each action. To do that, we'll define a Domain Service, which usually contains logic that can't be a part of our root:
public class DomainOrderService implements OrderService { private final OrderRepository orderRepository; public DomainOrderService(OrderRepository orderRepository) { this.orderRepository = orderRepository; } @Override public UUID createOrder(Product product) { Order order = new Order(UUID.randomUUID(), product); orderRepository.save(order); return order.getId(); } @Override public void addProduct(UUID id, Product product) { Order order = getOrder(id); order.addOrder(product); orderRepository.save(order); } @Override public void completeOrder(UUID id) { Order order = getOrder(id); order.complete(); orderRepository.save(order); } @Override public void deleteProduct(UUID id, UUID productId) { Order order = getOrder(id); order.removeOrder(productId); orderRepository.save(order); } private Order getOrder(UUID id) { return orderRepository .findById(id) .orElseThrow(RuntimeException::new); } }
In a hexagonal architecture, this service is an adapter that implements the port. Additionally, we'll not register it as a Spring bean because, from a domain perspective, this is in the inside part, and Spring configuration is on the outside. We'll manually wire it with Spring in the infrastructure layer a bit later.
Because the domain layer is completely decoupled from application and infrastructure layers, we can also test it independently:
class DomainOrderServiceUnitTest { private OrderRepository orderRepository; private DomainOrderService tested; @BeforeEach void setUp() { orderRepository = mock(OrderRepository.class); tested = new DomainOrderService(orderRepository); } @Test void shouldCreateOrder_thenSaveIt() { final Product product = new Product(UUID.randomUUID(), BigDecimal.TEN, "productName"); final UUID id = tested.createOrder(product); verify(orderRepository).save(any(Order.class)); assertNotNull(id); } }
5. Application Layer
In this section, we'll implement the application layer. We'll allow the user to communicate with our application via a RESTful API.
Therefore, let's create the OrderController:
@RestController @RequestMapping("/orders") public class OrderController { private OrderService orderService; @Autowired public OrderController(OrderService orderService) { this.orderService = orderService; } @PostMapping CreateOrderResponse createOrder(@RequestBody CreateOrderRequest request) { UUID id = orderService.createOrder(request.getProduct()); return new CreateOrderResponse(id); } @PostMapping(value = "/{id}/products") void addProduct(@PathVariable UUID id, @RequestBody AddProductRequest request) { orderService.addProduct(id, request.getProduct()); } @DeleteMapping(value = "/{id}/products") void deleteProduct(@PathVariable UUID id, @RequestParam UUID productId) { orderService.deleteProduct(id, productId); } @PostMapping("/{id}/complete") void completeOrder(@PathVariable UUID id) { orderService.completeOrder(id); } }
This simple Spring Rest controller is responsible for orchestrating the execution of domain logic.
This controller adapts the outside RESTful interface to our domain. It does it by calling the appropriate methods from OrderService (port).
6. Infrastructure Layer
The infrastructure layer contains the logic needed to run the application.
Therefore, we'll start by creating the configuration classes. Firstly, let's implement a class that will register our OrderService as a Spring bean:
@Configuration public class BeanConfiguration { @Bean OrderService orderService(OrderRepository orderRepository) { return new DomainOrderService(orderRepository); } }
Next, let's create the configuration responsible for enabling the Spring Data repositories we'll use:
@EnableMongoRepositories(basePackageClasses = SpringDataMongoOrderRepository.class) public class MongoDBConfiguration { }
We have used the basePackageClasses property because those repositories can only be in the infrastructure layer. Hence, there's no reason for Spring to scan the whole application. Furthermore, this class can contain everything related to establishing a connection between MongoDB and our application.
Lastly, we'll implement the OrderRepository from the domain layer. We'll use our SpringDataMongoOrderRepository in our implementation:
@Component public class MongoDbOrderRepository implements OrderRepository { private SpringDataMongoOrderRepository orderRepository; @Autowired public MongoDbOrderRepository(SpringDataMongoOrderRepository orderRepository) { this.orderRepository = orderRepository; } @Override public Optional<Order> findById(UUID id) { return orderRepository.findById(id); } @Override public void save(Order order) { orderRepository.save(order); } }
This implementation stores our Order in MongoDB. In a hexagonal architecture, this implementation is also an adapter.
7. Benefits
The first advantage of this approach is that we separate work for each layer. We can focus on one layer without affecting others.
Furthermore, they're naturally easier to understand because each of them focuses on its logic.
Another big advantage is that we've isolated the domain logic from everything else. The domain part only contains business logic and can be easily moved to a different environment.
In fact, let's change the infrastructure layer to use Cassandra as a database:
@Component public class CassandraDbOrderRepository implements OrderRepository { private final SpringDataCassandraOrderRepository orderRepository; @Autowired public CassandraDbOrderRepository(SpringDataCassandraOrderRepository orderRepository) { this.orderRepository = orderRepository; } @Override public Optional<Order> findById(UUID id) { Optional<OrderEntity> orderEntity = orderRepository.findById(id); if (orderEntity.isPresent()) { return Optional.of(orderEntity.get() .toOrder()); } else { return Optional.empty(); } } @Override public void save(Order order) { orderRepository.save(new OrderEntity(order)); } }
Unlike MongoDB, we now use an OrderEntity to persist the domain in the database.
If we add technology-specific annotations to our Order domain object, then we violate the decoupling between infrastructure and domain layers.
The repository adapts the domain to our persistence needs.
Let's go a step further and transform our RESTful application into a command-line application:
@Component public class CliOrderController { private static final Logger LOG = LoggerFactory.getLogger(CliOrderController.class); private final OrderService orderService; @Autowired public CliOrderController(OrderService orderService) { this.orderService = orderService; } public void createCompleteOrder() { LOG.info("<<Create complete order>>"); UUID orderId = createOrder(); orderService.completeOrder(orderId); } public void createIncompleteOrder() { LOG.info("<<Create incomplete order>>"); UUID orderId = createOrder(); } private UUID createOrder() { LOG.info("Placing a new order with two products"); Product mobilePhone = new Product(UUID.randomUUID(), BigDecimal.valueOf(200), "mobile"); Product razor = new Product(UUID.randomUUID(), BigDecimal.valueOf(50), "razor"); LOG.info("Creating order with mobile phone"); UUID orderId = orderService.createOrder(mobilePhone); LOG.info("Adding a razor to the order"); orderService.addProduct(orderId, razor); return orderId; } }
Unlike before, we now have hardwired a set of predefined actions that interact with our domain. We could use this to populate our application with mocked data for example.
Even though we completely changed the purpose of the application, we haven't touched the domain layer.
8. Conclusion
In this article, we've learned how to separate the logic related to our application into specific layers.
First, we defined three main layers: application, domain, and infrastructure. After that, we described how to fill them and explained the advantages.
Then, we came up with the implementation for each layer:
Finally, we swapped the application and infrastructure layers without impacting the domain.
As always, the code for these examples is available over on GitHub.
I’ve got a some ideas from good article. and I have one question about JPA.
in infrastructure, layer using JPA repository. JPA has to know what the properties metadata.
but I cannot see domain class has no ‘@Entity’ or ‘@Column’ annotations.
This code could be only mongo(like NoSQL DB) available?
Hello, No, you can use any persistence technology you want. In fact, this is the purpose of this architecture: to separate business logic and implementation details. You can use JPA and entities, but they can’t cross the infrastructure boundary. The calls between the infrastructure and domain layers should use objects defined in the domain layer. These objects must not be entities, only simple data structures. In the infrastructure layer, you can convert these objects to anything you want (or need). This conversion isn’t necessary if the POJO we defined in the domain layer is enough, like in this case. If… Read more »
|
https://www.baeldung.com/hexagonal-architecture-ddd-spring
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
#include <lib_takesystem.h>
An Override Group manages the values of multiple objects in a Take.
Gets the next Override Group in the list. Convenience version of GeListNode::GetNext() returning a BaseOverrideGroup*.
Gets the previous Override Group in the list. Convenience version of GeListNode::GetPred() returning a BaseOverrideGroup*.
Retrieves all the objects in the group.
Adds node to the Override Group. If node is already part of another group it will be automatically removed first.
Removes node from the Override Group.
Adds a new tag of the given type to the Override Group if it is not already there.
Removes the tag of the given type from the Override Group.
Sets the editor visibility mode for the Override Group.
Sets the render visibility mode for the Override Group.
Searches for a tag of the given type attached to the Override Group.
Returns the Take that owns the Override Group.
Checks if an object is included in the Override Group.
|
https://developers.maxon.net/docs/Cinema4DCPPSDK/html/class_base_override_group.html
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
#include <c4d_filterdata.h>
A data class for creating bitmap loader plugins.
Use RegisterBitmapLoaderPlugin() to register a bitmap loader plugin.
Called to identify a file type as one that can be loaded using the bitmap loader.
If possible, the file should not be identified through its suffix, but through the probe data.
Called to load an image file into a bitmap.
Called to get the plugin ID of the corresponding BitmapSaverData, if there is one.
Called to get information on the loading of movies.
Called to accelerate the loading of animated bitmaps.
Loaders that overload LoadAnimated do not need to implement code twice, Load should in that case just look like this:
Called to extract the sound of animated bitmaps.
Called by the Picture Viewer to determine whether a movie has sound or not.
|
https://developers.maxon.net/docs/Cinema4DCPPSDK/html/class_bitmap_loader_data.html
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
JavaScript/Print version
Contents[edit]
- Basics
- Placing the Code
- The
scriptelement
- Bookmarklets
- Lexical Structure
- Reserved Words
- Variables and Types
- Operators
- Control Structures
- Functions and Objects
- Event Handling
- Program Flow
- Regular Expressions
Introduction[edit].
First Program[edit]
Here> alert("Hello World!"); </script> </head> <body> <p>The content of the web page.</p> </body> </html>
This basic hello World program can then be used as a starting point for any new programs that you need to create.
Exercises[edit]
Exercise 1-1[edit][edit].
The SCRIPT Tag[edit]
The
script element[edit].
Inline JavaScript[edit]
Using inline JavaScript allows you to easily work with HTML and JavaScript within the same page. This is commonly used for temporarily testing out some ideas, and in situations where the script code is specific to that one page.
<script> // JavaScript code here </script>
Inline HTML comment markers[edit].
Inline XHTML JavaScript[edit]
In XHTML, the method is somewhat different:
<script> // <![CDATA[ // [Todo] JavaScript code here! // ]]> </script>
Note that the <![CDATA[ tag is commented out. The // prevents the browser from mistakenly interpreting the <![CDATA[ as a JavaScript statement. (That would be a syntax error).
Linking to external scripts[edit] named "script.js" and is located in a directory called "js", your
src would be "js/script.js".
Location of script elements[edit]] suggested by the Yahoo! Developer Network that specify a different placement for script tags: to put scripts at the bottom, just before the </body> tag. This speeds up downloading, and also allows for direct manipulation of the DOM while the page is loading. It is also a good practice to separate HTML documents from CSS code for easier management.
<!DOCTYPE html> <html> <head> <title>Web page title</title> </head> <body> <!-- HTML code here --> <script src="script.js"></script> </body> </html>
Controlling external script evaluation and parser blocking[edit]
By default, JavaScript execution is "parser blocking". When the browser encounters a script in the document, it must pause Document Object Model (DOM) construction, hand over control to the JavaScript runtime, and let the script execute before proceeding with DOM construction.[3]
As an alternative to placing scripts at the bottom of the document body, loading and execution of external scripts may be controlled using async or defer attributes. Asynchronous external scripts are loaded and executed in parallel with document parsing. The script will be executed as soon as it is available.[4]
<!DOCTYPE html> <html> <head> <title>Web page title</title> <script async</script> </head> <body> <!-- HTML code here --> </body> </html>
Deferred external scripts are loaded in parallel with document parsing, but script execution is deferred until after the document is fully parsed.[5]
<!DOCTYPE html> <html> <head> <title>Web page title</title> <script defer</script> </head> <body> <!-- HTML code here --> </body> </html>
Reference[edit]
- ↑ w:JavaScript#History and naming
- ↑ Yahoo: best practices for speeding up your web site
- ↑ Google: Adding Interactivity with JavaScript
- ↑ Mozilla: The Script element
- ↑ Mozilla: The Script element
Bookmarklets[edit]
Bookmarklets are one line scripts stored in the URL field of a bookmark. Bookmarklets have been around for a long time so they will work in older browsers., as.
Lexical Structure[edit] typical amount of called, to the right of the equals sign:
1A2B3C is an invalid identifier, as it starts with a number.
Naming variables[edit]
References[edit]
- ↑ Standard ECMA-262 ECMAScript Language Specification, Chapter 7.9 - Automatic Semicolon Insertion
Reserved Words[edit]
This page contains a list of reserved words in JavaScript, which cannot be used as names of variables, functions or other objects.
Variables and Types[edit]
JavaScript is a loosely typed language. This means that you can use the same variable for different types of information, but you may also have to check what type a variable is yourself, if the differences matter. For example, if you wanted to add two numbers, but one variable turned out to be a string, the result wouldn't necessarily be what you expected.
Variable declaration[edit][edit]
Primitive types are types provided by the system, in this case by JavaScript. Primitive type for JavaScript are Booleans, numbers and text. In addition to the primitive types, users may define their own classes.
The primitive types are treated by JavaScript as value types and when passed to a function, they are passed as values. Some types, such as string, allow method calls.
Boolean type[edit]
Boolean variables can only have two possible values, true or false.
var mayday = false; var birthday = true;
Numeric types[edit]
You can use an integer and double types on your variables, but they are treated as a numeric type.
var sal = 20; var pal = 12.1;
In the ECMA JavaScript specification, your number literals can go from 0 to -+1.79769e+308. And because 5e-324 is the smallest infinitesimal you can get, anything smaller is rounded to 0.
String types[edit]
The String and char types are all strings, so you can build any string literal that you wished for.
var myName = "Some Name"; var myChar = 'f';
Complex types[edit]
A complex type is an object, be it either standard or custom made. Its home is the heap and is always passed by reference.
Array type[edit]];
There is no limit to the number of items that can be stored in an array.
Object types[edit]
An object within JavaScript is created using the new operator:
var myObject = new Object();
Objects can also be created with the object notation, which uses curly braces:
var myObject = {};
JavaScript objects can implement inheritance and support overriding, and you can use polymorphism. There are no scope modifiers, with all properties and methods having public access. More information on creating objects can be found in Object Oriented Programming.
You can access browser built-in objects and objects provided through browser JavaScript extensions.
Scope[edit] scope[edit] scope[edit][edit]
Further reading[edit]
- "Values, variables, and literals". MDN. 2013-05-28.. Retrieved 2013-06-20.
Numbers[edit]
JavaScript implements numbers as floating point values, that is, they're attaining decimal values as well as whole number values..
Further reading[edit]
Strings[edit]
A string is a type of variable that stores a string (chain of characters).
Basic use[edit]
To make a new string, you can make a variable and give it a value of new String().
var foo = new String();
But, most developers skip that part and use a string literal:
var foo = "my string";
After you have made your string, you can edit it as you like:
foo = "bar"; // foo = "bar" foo = "barblah"; // foo = "barblah" foo += "bar"; // foo = "barblahbar"
A string literal is normally delimited by the ' or " character, and can normally contain almost any character. Common convention differs on whether to use single quotes or double quotes for strings. Some developers are for single quotes (Crockford, Amaram, Sakalos, Michaux), while others are for double quotes (NextApp, Murray, Dojo). Whichever method you choose, try to be consistent in how you apply it.
Due to the delimiters, it's not possible to directly place either the single or double quote within the string when it's used to start or end the string. In order to work around that limitation, you can either switch to the other type of delimiter for that case, or place a backslash before the quote to ensure that it appears within the string:
foo = 'The cat says, "Meow!"'; foo = "The cat says, \"Meow!\""; foo = "It's \"cold\" today."; foo = 'It\'s "cold" today.';
Properties and methods of the String() object[edit]
As with all objects, Strings have some methods and properties.
concat(text)[edit]
The concat() function joins two strings.
var foo = "Hello"; var bar = foo.concat(" World!") alert(bar); // Hello World!
length[edit]
Returns the length as an integer.
var foo = "Hello!"; alert(foo.length); // 6
indexOf[edit]
Returns the first occurrence of a string inside of itself, starting with 0. If the search string cannot be found, -1 is returned. The indexOf() method is case sensitive.
var foo = "Hello, World! How do you do?"; alert(foo.indexOf(' ')); // 6 var hello = "Hello world, welcome to the universe."; alert(hello.indexOf("welcome")); // 13
lastIndexOf[edit]
Returns the last occurrence of a string inside of itself, starting with index 0.. If the search string cannot be found, -1 is returned.
var foo = "Hello, World! How do you do?"; alert(foo.lastIndexOf(' ')); // 24
replace(text, newtext)[edit]
The replace() function returns a string with content replaced. Only the first occurrence is replaced.
var foo = "foo bar foo bar foo"; var newString = foo.replace("bar", "NEW!") alert(foo); // foo bar foo bar foo alert(newString); // foo NEW! foo bar foo
As you can see, the replace() function only returns the new content and does not modify the 'foo' object.
slice(start[, end])[edit]
Slice extracts characters from the start position.
"hello".slice(1); // "ello"
When the end is provided, they are extracted up to, but not including the end position.
"hello".slice(1, 3); // "el"
Slice allows you to extract text referenced from the end of the string by using negative indexing.
"hello".slice(-4, -2); // "el"
Unlike substring, the slice method never swaps the start and end positions. If the start is after the end, slice will attempt to extract the content as presented, but will most likely provide unexpected results.
"hello".slice(3, 1); // ""
substr(start[, number of characters])[edit]
substr extracts characters from the start position, essentially the same as slice.
"hello".substr(1); // "ello"
When the number of characters is provided, they are extracted by count.
"hello".substr(1, 3); // "ell"
substring(start[, end])[edit]
substring extracts characters from the start position.
"hello".substring(1); // "ello"
When the end is provided, they are extracted up to, but not including the end position.
"hello".substring(1, 3); // "el"
substring always works from left to right. If the start position is larger than the end position, substring will swap the values; although sometimes useful, this is not always what you want; different behavior is provided by slice.
"hello".substring(3, 1); // "el"
toLowerCase()[edit]
This function returns the current string in lower case.
var foo = "Hello!"; alert(foo.toLowerCase()); // hello!
toUpperCase()[edit]
This function returns the current string in upper case.
var foo = "Hello!"; alert(foo.toUpperCase()); // HELLO!
Escape Sequences[edit]
Escape sequences are very useful tools in editing your code in order to style your output of string objects, this improves user experience greatly.[1]
\bbackspace (U+0008 BACKSPACE)
\f: form feed (U+000C FORM FEED)
\n: line feed (U+000A LINE FEED)
\r: carriage return (U+000D CARRIAGE RETURN)
\t: horizontal tab (U+0009 CHARACTER TABULATION)
\v: vertical tab (U+000B LINE TABULATION)
\0: null character (U+0000 NULL) (only if the next character is not a decimal digit; else it’s an octal escape sequence)
\': single quote (U+0027 APOSTROPHE)
\": double quote (U+0022 QUOTATION MARK)
\\: backslash (U+005C REVERSE SOLIDUS)
Further reading[edit]
Dates[edit]][2]
- getHours(): Returns hours based on a 24 hour clock.[3]
- getMinutes():Returns minutes based on [0 - 59][4]
- getSeconds():Returns seconds based on [0 - 59][5]
- getTime(): Gets the time in milliseconds since January 1, 1970.
-[edit]
- JavaScript Date Object, developer.mozilla.org
Arrays[edit]
An array is a type of variable that stores a collection of variables. Arrays in JavaScript are zero-based - they start from zero. (instead of foo[1], foo[2], foo[3], JavaScript uses foo[0], foo[1], foo[2].)
Overview[edit][edit][edit]
Make an array with "zzz" as one of the elements, and then make an alert box using that element.
Nested arrays[edit]
You can also put an array within[edit]"]
Note that in this example the new arr3 array contains the contents of both the arr1 array and the arr2 array.
join() and split()[edit]
The Array Object's join() method returns a single string which contains all of the elements of an array — separated by a specified delimiter. If the delimiter is not specified, it is set to a comma. The String object's split() method returns an array in which the contents of the supplied string become the array elements — each element separated from the others based on a specified string Array pop() method removes and returns the last element of an array. The Array shift() method removes and returns the first element of an array. The length property of the array is changed by both the pop and shift"
Further reading[edit]
Operators[edit]
Arithmetic operators[edit]
JavaScript has the arithmetic operators +, -, *, /, and %. These operators function as the addition, subtraction, multiplication, division, and modulus operators, and operate very similarly to other languages. Multiplication and division operators will be calculated before addition and subtraction. Operations in parenthesis will be calculated first.
var a = 12 + 5; // 17 var b = 12 - 5; // 7 var c = 12*5; // 60 var d = 12/5; // 2.4 - division results in floating point numbers. var e = 12%5; // 2 - the remainder of 12/5 in integer math is 2. var f = 5 -2 * 4 // -3 - multiplication is calculated first. var g = (2+2) / 2 // 2 - Parenthesis are calculated first..[6]:[7]
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑ W3Schools: JavaScript Object Properties
- ↑ "typeof" (in English) (HTML). Mozilla Corporation. 2014-11-18. Archived from the original on 2014-11-18.. Retrieved 2015-03-05.
Control Structures[edit]
The control structures within JavaScript allow the program flow to change within a unit of code or function. These statements can determine whether or not given statements are executed - and provide the basis for the repeated execution of a block of code.
Most of the statements listed below are so-called conditional statements that can operate either on a statement or on a block of code enclosed with braces ({ and }). The structure provided by the use of conditional statements utilizes Booleans to determine whether or not a block gets executed. In this use of Booleans, any defined variable that is neither zero nor an empty string will be evaluated as true.
Conditional statements[edit]
if[edit][edit] while block.
The continue keyword finishes the current iteration of the while block or statement, and checks the condition to see, if it is true. If it is true, the loop commences again.
do … while[edit]. In other words, break exits the loop, and continue checks the condition before attempting to restart the loop.
for[edit] object elements accessed by this version is arbitrary. For instance, this structure can be used to loop through all the properties of an object instance. It should not be used when the object is of Array type
switch[edit].
A slightly different usage of the switch statement can be found at the following link:
Omitting the break can be used to test for more than one value at a time:
switch(i) { case 1: case 2: case 3: // … break; case 4: // … break; default: // … break; }
In this case the program will run the same code in case
i equals 1, 2 or 3.
with[edit]
The with statement is used to extend the scope chain for a block[1] and has the following syntax:
with (expression) { // statement }
Pros[edit]
The with statement can help to
- reduce file size by reducing the need to repeat a lengthy object reference, and
- relieve the interpreter of parsing repeated object references.
However, in many cases, this can be achieved by using a temporary variable to store a reference to the desired object.
Cons[edit][edit]
var area; var r = 10; with (Math) { a = PI*r*r; // == a = Math.PI*r*r x = r*cos(PI); // == a = r*Math.cos(Math.PI); y = r*sin(PI/2); // == a = r*Math.sin(Math.PI/2); }
Functions and Objects[edit]
Functions[edit]
A function is an action to take to complete a goal, objective, or task. Functions allow you to split a complex goal into simpler tasks, which make managing and maintaining scripts easier. Parameters or arguments can be used to provide data, which is passed to a function to effect the action to be taken. The Parameters or arguments are placed inside the parentheses, then the function is closed with a pair of curly braces. The block of code to be executed is placed inside the curly braces. parameter calling a function within an html is called with an argument of 6.
See Also[edit]
Event Handling[edit]
Event Handlers[edit]
An event that can be handled is something happening in a browser window, including a document loading, the user clicking a mouse button, the user pressing a key, and the browser screen changing size. When a function is assigned to handle an event type, that function is run when an event of the event type occurs.
An event handler can be assigned in the following ways:
- Via an element attribute directly in HTML: <body onload="alert('Hello World!');">
- Via JavaScript, by assigning the event type to an element attribute: document.onclick = clickHandler;
- Via JavaScript by a direct call to the addEventListener() method of an element.
A handler that is assigned from a script uses the syntax '[element].[event] = [function];', where [element] is a page element, [event] is the name of the selected event and [function] is the name of the function that is called
Regular Expressions[edit]
Overview[edit]
JavaScript implements regular expressions (regex for short) when searching for matches within a string. As with other scripting languages, this allows searching beyond a simple letter-by-letter match, and can even be used to parse strings in a certain format.
Unlike strings, regular expressions are delimited by the slash (/) character, and may have some options appended.
Regular expressions most commonly appear in conjunction with the string.match() and string.replace() methods.
At a glance, by example:
strArray = "Hello world!".match(/world/); // Singleton array; note the slashes strArray = "Hello!".match(/l/g); // Matched strings are returned in a string array "abc".match(/a(b)c/)[1] === "b" // Matched subgroup is the 2nd item (index 1) str1 = "Hey there".replace(/Hey/g, "Hello"); str2 = "N/A".replace(/\//g, ","); // Slash is escaped with \ str3 = "Hello".replace(/l/g, "m").replace(/H/g, "L").replace(/o/g, "a"); // Pile if (str3.match(/emma/)) { console.log("Yes"); } if (str3.match("emma")) { console.log("Yes"); } // Quotes work as well "abbc".replace(/(.)\1/g, "$1") === "abc" // Backreference (?=...), (?!...), (?<=...), and (?<!...) are not available.
Examples[edit]
- Matching
- string = "Hello world!".match(/world/);
- stringArray = "Hello world!".match(/l/g); // Matched strings are returned in a string array
- "abc".match(/a(b)c/)[1] => "b" // Matched subgroup is the second member (having the index "1") of the resulting array
- Replacement
- string = string.replace(/expression without quotation marks/g, "replacement");
- string = string.replace(/escape the slash in this\/way/g, "replacement");
- string = string.replace( ... ).replace ( ... ). replace( ... );
- Test
- if (string.match(/regexp without quotation marks/)) {
Modifiers[edit]
Single-letter modifiers: classicText = "To be or not to be?"; var changedClassicText = classicText.replace(/\W[a-zA-Z]+/g, capitalize); console.log(changedClassicText==="To Be Or Not To Be?"); Reference at W3schools.com
- JavaScript RexExp Tester at regular-expressions.info
- Regular Expressions in Javascript at mozilla.org
- JavaScript RegExp Object at mozilla.org
Optimization[edit]
JavaScript optimization[edit]
Optimization Techniques[edit]
- High Level Optimization
- Algorithmic Optimization (Mathematical Analysis)
- Simplification
- Low Level Optimization
- Loop Unrolling
- Strength Reduction
- Duff's Device
- Clean Loops
- External Tools & Libraries for speeding/optimizing/compressing JavaScript code
Common Mistakes and Misconceptions[edit]
String concatenation[edit]
Strings in JavaScript are immutable objects. This means that once you create a string object, to modify it, another string object must theoretically be created.
Now, suppose you want to perform a ROT-13 on all the characters in a long string. Supposing you have a rot13() function, the obvious way to do this might be:
var s1 = "the original string"; var s2 = ""; for (i = 0; i < s1.length; i++) { s2 += rot13(s1.charAt(i)); }
Especially in older browsers like Internet Explorer 6, this will be very slow. This is because, at each iteration, the entire string must be copied before the new letter is appended.
One way to make this script faster might be to create an array of characters, then join it:
var s1 = "the original string"; var a2 = new Array(s1.length); var s2 = ""; for (i = 0; i < s1.length; i++) { a2[i] = rot13(s1.charAt(i)); } s2 = a2.join('');
Internet Explorer 6 will run this code faster. However, since the original code is so obvious and easy to write, most modern browsers have improved the handling of such concatenations. On some browsers the original code may be faster than this code.
A second way to improve the speed of this code is to break up the string being written to. For instance, if this is normal text, a space might make a good separator:
var s1 = "the original string"; var c; var st = ""; var s2 = ""; for (i = 0; i < s1.length; i++) { c = rot13(s1.charAt(i)); st += c; if (c == " ") { s2 += st; st = ""; } } s2 += st;
This way the bulk of the new string is copied much less often, because individual characters are added to a smaller temporary string.
A third way to really improve the speed in a for loop, is to move the [array].length statement outside the condition statement. In face, every occurrence, the [array].length will be re-calculate For a two occurrences loop, the result will not be visible, but (for example) in a five thousand occurrence loop, you'll see the difference. It can be explained with a simple calculation :
// we assume that myArray.length is 5000 for (x = 0;x < myArray.length;x++){ // doing some stuff }
"x = 0" is evaluated only one time, so it's only one operation.
"x < myArray.length" is evaluated 5000 times, so it is 10,000 operations (myArray.length is an operation and compare myArray.length with x, is another operation).
"x++" is evaluated 5000 times, so it's 5000 operations.
There is a total of 15 001 operation.
// we assume that myArray.length is 5000 for (x = 0, l = myArray.length; x < l; x++){ // doing some stuff }
"x = 0" is evaluated only one time, so it's only one operation.
"l = myArray.length" is evaluated only one time, so it's only one operation.
"x < l" is evaluated 5000 times, so it is 5000 operations (l with x, is one operation).
"x++" is evaluated 5000 times, so it's 5000 operations.
There is a total of 10002 operation.
So, in order to optimize your for loop, you need to make code like this :
var s1 = "the original string"; var c; var st = ""; var s2 = ""; for (i = 0, l = s1.length; i < l; i++) { c = rot13(s1.charAt(i)); st += c; if (c == " ") { s2 += st; st = ""; } } s2 += st;
Debugging[edit]
JavaScript Debuggers[edit]
Firebug[edit]
- Firebug is a powerful extension for Firefox that has many development and debugging tools including JavaScript debugger and profiler.
Venkman JavaScript Debugger[edit]
- Venkman JavaScript Debugger (for Mozilla based browsers such as Netscape 7.x, Firefox/Phoenix/Firebird and Mozilla Suite 1.x)
- Introduction to Venkman
- Using Breakpoints in Venkman
Internet Explorer debugging[edit]
-[edit].[2]
JTF: JavaScript Unit Testing Farm[edit]
- JTF is a collaborative website that enables you to create test cases that will be tested by all browsers. It's the best way to do TDD and to be sure that your code will work well on all browsers.
jsUnit[edit]
built-in debugging tools[edit]
Some people prefer to send debugging messages to a "debugging console" rather than use the alert() function[2][3][4]. Following is a brief list of popular browsers and how to access their respective consoles/debugging tools.
- Firefox: Ctrl+Shift+K opens an error console.
- Opera (9.5+): Tools >> Advanced >> Developer Tools opens Dragonfly.
- Chrome: Ctrl+Shift+J opens chrome's "Developer Tools" window, focused on the "console" tab.
- Internet Explorer: F12 opens a firebug-like Web development tool that has various features including the ability to switch between the IE8 and IE7 rendering engines.
- Safari: Cmd+Alt+C opens the WebKit inspector for Safari.
Common Mistakes[edit]
-:
alert('He's eating food');should be
-[edit]
Debugging in JavaScript doesn't differ very much from debugging in most other programming languages. See the article at Computer Programming Principles/Maintaining/Debugging.
Following Variables as a Script is Running[edit]
The most basic way to inspect variables while running is a simple alert() call. However some development environments allow you to step through your code, inspecting variables as you go. These kind of environments may allow you to change variables while the program is paused.
Browser Bugs[edit]
Sometimes the browser is buggy, not your script. This means you must find a workaround.
browser-dependent code[edit]]; }
References[edit]
- ↑ Sheppy, Shaver et al. (2014-11-18). "with" (in English) (HTML). Mozilla. Archived from the original on 2014-11-18.. Retrieved 2015-03-18.
- ↑ "Safari - The best way to see the sites." (in English) (HTML). Apple.. Retrieved 2015-03-09.
Further reading[edit]
- "JavaScript Debugging" by Ben Bucksch
DHTML[edit]
DHTML (Dynamic HTML) is a combination of JavaScript, CSS and HTML.
alert messages[edit]
<script type="text/javascript"> alert('Hello World!'); </script>
This will give a simple alert message.
<script type="text/javascript"> prompt('What is your name?'); </script>
This will give a simple prompt message.
<script type="text/javascript"> confirm('Are you sure?'); </script>
This will give a simple confirmation message.
Javascript Button and Alert Message Example:[edit]
Sometimes it is best to dig straight in with the coding. Here is an example of a small piece of code:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html lang="en"> <head> <title>"THE BUTTON" - Javascript</title> <script type="text/javascript"> x = 'You have not pressed "THE BUTTON"' function bomb() { alert('O-GOD NOOOOO, WE ARE ALL DOOMED!!'); alert('10'); alert('9'); alert('8'); alert('7'); alert('6'); alert('5'); alert('4'); alert('3'); alert('2'); alert('1'); alert('!BOOM!'); alert('Have a nice day. :-)'); x = 'You pressed "THE BUTTON" and I told you not to!'; } </script> <style type="text/css"> body { background-color:#00aac5; color:#000 } </style> </head> <body> <div> <input type="button" value="THE BUTTON - Don't Click It" onclick="bomb()">>
What does this code do? When it loads it tells what value the variable 'x' should have. The next code snippet is a function that has been named "bomb". The body of this function fires some alert messages and changes the value of 'x'.
The next part is mainly HTML with a little javascript attached to the INPUT tags. The "onclick" property tells its parent what has to be done when clicked. The bomb function is assigned to the first button, the second button just shows an alert message with the value of x.
Javascript if() - else Example[edit]
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html lang="en"> <head> <title>The Welcome Message - Javascript</title> <script type="text/javascript"> function wlcmmsg() { name = prompt('What is your name?', ''); correct = confirm('Are you sure your name is ' + name + ' ?'); if (correct == true) { alert('Welcome ' + name); } else { wlcmmsg(); } } </script> <style type="text/css"> body { background-color:#00aac5; color:#000 } </style> </head> <body onload="wlcmmsg()" onunload="alert('Goodbye ' + name)"> <p> This script is dual-licensed under both, <a href="">GFDL</a> and <a href="GNU General Public License">GPL</a>. See <a href="">Wikibooks</a> </p> </body> </html>
Two Scripts[edit]
Now, back to the first example. We have modified the script adding a different welcome message. This version requests the user to enter a name. They are also asked if they want to visit the site. Some CSS has also been added to the button.
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html lang="en"> <head> <title>"THE BUTTON" - Javascript</title> <script type="text/javascript"> // global variable x x = 'You have not pressed "THE BUTTON"'; function bomb() { alert('O-GOD NOOOOO, WE ARE ALL DOOMED!!'); alert('3'); alert('2'); alert('1'); alert('!BOOM!'); alert('Have a nice day. :-)'); x = 'You pressed "THE BUTTON" and I told you not too!'; } </script> <style type="text/css"> body { background-color:#00aac5; color:#000 } </style> </head> <body onload="welcome()"> <script type="text/javascript"> function welcome() { var name = prompt('What is your name?', ''); if (name == "" || name == "null") { alert('You have not entered a name'); welcome(); return false; } var visit = confirm('Do you want to visit this website?') if (visit == true) { alert('Welcome ' + name); } else { window.location=history.go(-1); } } </script> <div> <input type="button" value="THE BUTTON - Don't Click It" onclick="bomb()" STYLE="color: #ffdd00; background-color: #ff0000">>
Simple Calculator[edit]
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html lang="en"> <head> <title>Calculator</title> <script type="text/javascript"> function multi() { var a = document.Calculator.no1.value; var b = document.Calculator.no2.value; var p = (a*b); document.Calculator.product.value = p; } function divi() { var d = document.Calculator.dividend.value; var e = document.Calculator.divisor.value; var q = (d/e); document.Calculator.quotient.value = q; } function circarea() { var r = document.Calculator.radius.value;756 48233786783165; var a = pi*(r*r); document.Calculator.area.value = a; var c = 2*pi*r; document.Calculator.circumference.value = c; } </script> <style type="text/css"> body { background-color:#00aac5; color:#000 } label { float:left; width:7em } </style> </head> <body> <h1>Calculator</h1> <form name="Calculator" action=""> <fieldset> <legend>Multiply</legend> <input type="text" name="no1"> × <input type="text" name="no2"> <input type="button" value="=" onclick="multi()"> <input type="text" name="product"> </fieldset> <fieldset> <legend>Divide</legend> <input type="text" name="dividend"> ÷ <input type="text" name="divisor"> <input type="button" value="=" onclick="divi()"> <input type="text" name="quotient"> </fieldset> <fieldset> <legend>Area and Circumfrence of Circle</legend> <p>(Uses pi to 240 d.p)</p> <div> <label for="radius">Type in radius</label> <input type="text" name="radius" id="radius" value=""> </div> <div> <input type="button" value="=" onclick="circarea()"> </div> <div> <label for="area">Area</label> <input type="text" name="area" id="area" value=""> </div> <div> <label for="circumference">Circumference</label> <input type="text" name="circumference" id="circumference" value=""> </div> </fieldset> </form> <p>Licensed under the <a href="">GNU GPL</a>.</p> </body> </html>
Finding Elements[edit]
The most common method of detecting page elements in the DOM is by the document.getElementById(id) method.
Simple Use[edit]
Let's say, on a page, we have:
<div id="myDiv">content</div>
A simple way of finding this element in JavaScript would be:
var myDiv = document.getElementById("myDiv"); // Would find the DIV element by its ID, which in this case is 'myDiv'.
Use of getElementsByTagName[edit]
Another way to find elements on a web page is by the getElementsByTagName(name) method. It returns an array of all name elements in the node.
Let's say, on a page, we have:
<div id="myDiv"> <p>Paragraph 1</p> <p>Paragraph 2</p> <h1>An HTML header</h1> <p>Paragraph 3</p> </div>
Using the getElementsByTagName method we can get an array of all <p> elements inside the div:
var myDiv = document.getElementById("myDiv"); // get the div var myParagraphs = myDiv.getElementsByTagName('P'); //get all paragraphs inside the div // for example you can get the second paragraph (array indexing starts from 0) var mySecondPar = myParagraphs[1]
Adding Elements[edit]
Basic Usage[edit]
Using the Document Object Module we can create basic HTML elements. Let's create a div.
var myDiv = document.createElement("div");
What if we want the div to have an ID, or a class?
var myDiv = document.createElement("div"); myDiv.id = "myDiv"; myDiv.class = "main";
And we want it added into the page? Let's use the DOM again…
var myDiv = document.createElement("div"); myDiv.id = "myDiv"; myDiv.class = "main"; document.documentElement.appendChild(myDiv);
Further Use[edit]
So let's have a simple HTML page…
<html> <head> </head> <body bgcolor="white" text="blue"> <h1> A simple Javascript created button </h1> <div id="button"></div> </body> </html>
Where the div which has the id of button, let's add a button.
var myButton = document.createElement("input"); myButton.type = "button"; myButton.value = "my button"; placeHolder = document.getElementById("button"); placeHolder.appendChild(myButton);
All together the HTML code looks like:
<html> <head> </head> <body bgcolor="white" text="blue"> <h1> A simple Javascript created button </h1> <div id="button"></div> </body> <script> myButton = document.createElement("input"); myButton.type = "button"; myButton.value = "my button"; placeHolder = document.getElementById("button"); placeHolder.appendChild(myButton); </script> </html>
The page will now have a button on it which has been created via JavaScript.
Changing Elements[edit]
In JavaScript you can change elements by using the following syntax:
element.attribute="new value"
Here, the srcattribute), use:
myButton.type = "text"; //changes the input type from 'button' to 'text'.
Another way to change or create an attribute is to use a method like element.setAttribute("attribute", "value") or element.createAttribute("attribute", "value"). Use setAttribute to change"
Removing Elements[edit]);
References[edit]
Code Structuring[edit]
Links[edit]).
Useful Software Tools[edit]
A list of useful tools for JavaScript programmers.
Editors / IDEs[edit]
- Adobe Brackets: Another browser-based editor by Adobe
- Eclipse: The Eclipse IDE includes an editor and debugger for JavaScript
- Notepad++: A Great tool for editing any kind of code, includes syntax highlighting for many programming languages.
- Programmers' Notepad: A general tool for programming many languages.
- Scripted: An open source browser-based editor by Spring Source
- Sublime Text: One of the most used editors for HTML/CSS/JavaScript editing
- Web Storm or IntelliJ IDEA: both IDEs include and editor and debugger for JavaScript, IDEA also includes a Java development platform
Engines and other tools[edit]
- JSLint: static code analysis for JavaScript
- jq - " 'jq' is like sed for JSON data "
- List of ECMAScript engines
- List of Really Useful Free Tools For JavaScript Developers
|
https://en.wikibooks.org/wiki/JavaScript/Print_version
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
#include <row_iterator.h>
Ends performance schema batch mode, if started.
It's always safe to call this.
Iterators that have children (composite iterators) must forward the EndPSIBatchModeIfStarted() call to every iterator they could conceivably have called StartPSIBatchMode() on. This ensures that after such a call to on the root iterator, all handlers are out of batch mode.
Reimplemented from.
Start performance schema batch mode, if supported (otherwise ignored).
PFS batch mode is a mitigation to reduce the overhead of performance schema, typically applied at the innermost table of the entire join. If you start it before scanning the table and then end it afterwards, the entire set of handler calls will be timed only once, as a group, and the costs will be distributed evenly out. This reduces timer overhead.
If you start PFS batch mode, you must also take care to end it at the end of the scan, one way or the other. Do note that this is true even if the query ends abruptly (LIMIT is reached, or an error happens). The easiest workaround for this is to simply call EndPSIBatchModeIfStarted() on the root iterator at the end of the scan. See the PFSBatchMode class for a useful helper.
The rules for starting batch and ending mode are:
The upshot of this is that when scanning a single table, batch mode will typically be activated for that table (since we call StartPSIBatchMode() on the root iterator, and it will trickle all the way down to the table iterator), but for a join, the call will be ignored and the join iterator will activate batch mode by itself as needed.
Reimplemented from RowIterator.
The default implementation of unlock-row method of RowIterator, used in all access methods except EQRefIterator.
Implements RowIterator.
Reimplemented in SortFileIterator< false >, SortFileIterator< true >, SortBufferIterator< false >, and SortBufferIterator< true >.
|
https://dev.mysql.com/doc/dev/mysql-server/latest/classTableRowIterator.html
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
In this workshop we will be developing a React Native mobile app using Expo. We will use an API (Application Programming Interface) as the input/resources/database for the mobile application. The app would consume the JSON resources provided by the API to create a presentable interface where the user can check the daily haze Air Pollution Index in their mobile devices.
We will be using Expo Snack online editor as the main tools to create this application.
We will need your iOS or Android phone installed with the Expo mobile app, so that the app can tested on your phone.
Understand the basic concept of React/React Native
Publish the app to a website or at least run it on your phone
At least some basic understanding of programming is needed, preferably in JavaScript as this is the main language used throughout this course
Doesn't know anything about programming? No worries! Armed with just some common sense and a bit of patience you can go through this workshop as well. If you need any help you can just raise your hand or message me at @joevo2
Let's fire up Expo Snack to get started.
We will be presented with
App.js in the text editor area, and on the right you will see preview of your app. Where you can toggle between iOS, Android and Web.
Referring to the code below, the top part consists all the imports, which consists all the library installed using NPM which you can see in your
package.json . The rest are imported from other local file such as the
AssetExample component.
In the
render() function is where we have HTML-like syntax called JSX where we can create the user interface for the app. Component such as
Text ,
View can be imported from
react native as shown in line 2. The full list of component can be referred at the official React Native documentation.
App.jsimport * as from 'react';import { Text, View, StyleSheet } from 'react-native';import Constants from 'expo-constants';// You can import from local filesimport AssetExample from './components/AssetExample';// or any pure javascript modules available in npmimport { Card } from 'react-native-paper';export default class App extends React.Component {render() {return (<View style={styles.container}><Text style={styles.paragraph}>Change code in the editor and watch it change on your phone!Save to get a shareable url.</Text><Card><AssetExample /></Card></View>);}}const styles = StyleSheet.create({container: {flex: 1,justifyContent: 'center',paddingTop: Constants.statusBarHeight,backgroundColor: '#ecf0f1',padding: 8,},paragraph: {margin: 24,fontSize: 18,fontWeight: 'bold',textAlign: 'center',},});
React
In layman term, we can use React to build interactive interface where we can manipulate the data on the screen, fetch data from database etc.
In this workshop we will be using React Native instead React. React Native works the same as React but instead of using
div and dealing with HTML, we would use React Native component like
View and dealing with mobile app libraries.
The added benefit of using React Native is that instead of using Swift and Kotlin/Java to develop iOS and Android app, we could use JavaScript instead where it will be compiled by React Native into the respective code.
React state allow us update a value efficiently in HTML DOM. As shown in the code below
App.jsexport default class App extends React.Component {state = {text: 'Testing',};onChangeText = incomingText => {this.setState({ text: incomingText });};render() {return (<View style={styles.container}><Text style={styles.paragraph}>{this.state.text}</Text><TextInputstyle={{ height: 40, borderColor: 'gray', borderWidth: 1 }}onChangeText={text => this.onChangeText(text)}/></View>)}}
Props allow to pass value or even function to component we created.
App.jsclass MyComponent extends React.Component {render() {return (<View><Text>{this.props.text}</Text></View>)}}export default class App extends React.Componentrender() {return (<View style={styles.container}><MyComponent text={new Date().toLocaleDateString()} /></View>);}}
You should receive a confirmation email with instruction on making a simple API call with your token
You should be presented with a link as shown where you can get a JSON response.
Token is use to uniquely identify you and your application where usage will be recorded for analytics, security and billing purpose.
JSON (JavaScript Object Notation) An open-standard file format that uses human-readable text to transmit data objects consisting of attribute–value pairs and array data types
Sample AQI JSON response{"status": "ok","data": {"aqi": 150,"city": {"geo": [3.139003,101.686855],"name": "Kuala Lumpur","url": ""},"time": {"s": "2019-09-19 17:00:00","tz": "+08:00","v": 1568912400}}}
To make an API call we would be using Fetch API which is built in and also commonly being used in modern browser.
App.jsexport default class App extends React.Component {state = {aqi: 0,location: '',};componentDidMount() {const location = 'kualalumpur'const token = 'yourOwnTaken'fetch(`{location}/?token=${token}`).then(res => res.json()).then(fetchedData => {this.setState({location: fetchedData.data.city.name,aqi: fetchedData.data.aqi,});});}render() {return (<View style={styles.container}><Text style={styles.paragraph}>{this.state.text}</Text><Text>{this.state.aqi}</Text><Text>{this.state.location}</Text></View>);}}
In React Native we will be using Flex to layout our UI. Here's the official documentation where you could learn how to position any item anywhere with ease.
What you learnt here is applicable on both React Native development as well as web development.
In this workshop we will be using a third party library called react-native-paper where it provide us with Material Designed style component where we could present our data in a modern and presentable fashion.
After having your own concept of UI, try to create the component that are required in Snack Expo.
Icons
Expo have a built in set of open source icon. Here's the directory for the full list of icon. Below we have an example on how to implement it.
import { Ionicons } from '@expo/vector-icons';export default class IconExample extends React.Component {render() {return (<Ionicons name="md-checkmark-circle" size={32});}}
Here's the official documentation from expo for more details
Kindly fill up this short survey so that we could further improve 😊
|
https://docs.joevo2.com/workshop/create-your-first-react-native-mobile-app-haze-api-app
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
API Evolution the Right Way
(Watch videos of me presenting this material at PyCon Canada or PyCon US.)
Imagine you are a creator deity, designing a body for a creature. In your benevolence, you wish for the creature to evolve over time: first, because it must respond to changes in its environment, and second, because your wisdom grows and you think of better designs for the beast. It shouldn’t remain in the same body forever!
The creature, however, might be relying on features of its present anatomy. You can’t add wings or change its scales without warning. It needs an orderly process to adapt its lifestyle to its new body. How can you, as a responsible designer in charge of this creature’s natural history, gently coax it toward ever greater improvements?
It’s the same for responsible library maintainers. We keep our promises to the people who depend on our code: we release bugfixes and useful new features. We sometimes delete features if that’s beneficial for the library’s future. We continue to innovate, but we don’t break the code of people who use our library. How can we fulfill all those goals at once?
Add Useful Features
Your library shouldn’t stay the same for eternity: you should add features that your make your library better for your users. For example, if you have a Reptile class and it would be useful to have wings for flying, go for it.
class Reptile: @property def teeth(self): return 'sharp fangs' # If wings are useful, add them! @property def wings(self): return 'majestic wings'
But beware, features come with risk. Consider the following feature in the Python standard library, and let us see what went wrong with it.
bool(datetime.time(9, 30)) == True bool(datetime.time(0, 0)) == False
This is peculiar: converting any time object to a boolean yields True, except for midnight. (Worse, the rules for timezone-aware times are even stranger.) I’ve been writing Python for more than a decade but I didn’t discover this rule until last week. What kind of bugs can this odd behavior cause in users’ code?
Consider a calendar application with a function that creates events. If an event has an end time, the function requires it to also have a start time:
def create_event(day, start_time=None, end_time=None): if end_time and not start_time: raise ValueError("Can't pass end_time without start_time") # The coven meets from midnight until 4am. create_event(datetime.date.today(), datetime.time(0, 0), datetime.time(4, 0))
Unfortunately for witches, an event starting at midnight fails this validation. A careful programmer who knows about the quirk at midnight can write this function correctly, of course:
def create_event(day, start_time=None, end_time=None): if end_time is not None and start_time is None: raise ValueError("Can't pass end_time without start_time")
But this subtlety is worrisome. If a library creator wanted to make an API that bites users, a “feature” like the boolean conversion of midnight works nicely.
The responsible creator’s goal, however, is to make your library easy to use correctly.
This feature was written by Tim Peters when he first made the datetime module in 2002. Even founding Pythonistas like Tim make mistakes. The quirk was removed, and all times are True now.
# Python 3.5 and later. bool(datetime.time(9, 30)) == True bool(datetime.time(0, 0)) == True
Programmers who didn’t know about the oddity of midnight are saved from obscure bugs, but it makes me nervous to think about any code that actually relies on the weird old behavior and didn’t notice the change. It would have been better if this bad feature were never implemented at all. This leads us to the first promise of any library maintainer:
First Covenant:
Avoid Bad Features
The most painful change to make is when you have to delete a feature. One way to avoid bad features is to add few features in general! Make no public method, class, function, or property without a good reason. Thus:
Second Covenant:
Minimize Features
Features are like children: conceived in a moment of passion, they must be supported for years. Don’t do anything silly just because you can. Don’t add feathers to a snake!
But of course, there are plenty of occasions when users need something from your library that it does not yet offer. How do you choose the right feature to give them? Here’s another cautionary tale.
A Cautionary Tale From asyncio
As you may know, when you call a coroutine function, it returns a coroutine object:
async def my_coroutine(): pass print(my_coroutine())
<coroutine object my_coroutine at 0x10bfcbac8>
Your code must “await” this object to actually run the coroutine. It’s easy to forget this, so the asyncio developers wanted a “debug mode” that catches this mistake. Whenever a coroutine is destroyed without being awaited, the debug mode prints a warning with a traceback to the line where it was created.
When Yury Selivanov implemented the debug mode, he added as its foundation a “coroutine wrapper” feature. The wrapper is a function that takes in a coroutine and returns anything at all. Yury used it to install the warning logic on each coroutine, but someone else could use it to turn coroutines into the string “hi!”:
import sys def my_wrapper(coro): return 'hi!' sys.set_coroutine_wrapper(my_wrapper) async def my_coroutine(): pass print(my_coroutine())
hi!
That is one hell of a customization. It changes the very meaning of “async”. Calling set_coroutine_wrapper once will globally and permanently change all coroutine functions. It is, as Nathaniel Smith wrote, “a problematic API” which is prone to misuse and had to be removed. The asyncio developers could have avoided the pain of deleting the feature if they’d better shaped it to its purpose. Responsible creators must keep this in mind:
Third Covenant:
Keep Features Narrow
Luckily, Yury had the good judgment to mark this feature provisional, so asyncio users knew not to rely on it. Nathaniel was free to replace set_coroutine_wrapper with a narrower feature that only customized the traceback depth:
import sys sys.set_coroutine_origin_tracking_depth(2) async def my_coroutine(): pass print(my_coroutine())
<coroutine object my_coroutine at 0x10bfcbac8> RuntimeWarning:'my_coroutine' was never awaited Coroutine created at (most recent call last) File "script.py", line 8, in <module> print(my_coroutine())
This is much better. There’s no more global setting that can change coroutines’ type, so asyncio users need not code as defensively. Deities should all be as farsighted as Yury:
Fourth Covenant:
Mark Experimental Features "Provisional"
If you have merely a hunch that your creature wants horns and a quadruple-forked tongue, introduce the features but mark them “provisional”.
You might discover that the horns are adventitious but the quadruple-forked tongue is useful after all. In the next release of your library you can delete the former and mark the latter official.
Deleting Features
No matter how wisely we guide our creature’s evolution, there may come a time when it’s best to delete an official feature. For example, you might have created a lizard, and now you choose to delete its legs. Perhaps you want to transform this awkward creature into a sleek and modern python.
There are two main reasons to delete features. First, you might discover a feature was a bad idea, through user feedback or your own growing wisdom. That was the case with the quirky behavior of midnight. Or, the feature might have been well-adapted to your library’s environment at first, but the ecology changes. Perhaps another deity invents mammals. Your creature wants to squeeze into their little burrows and eat the tasty mammal filling, so it has to lose its legs.
Similarly, the Python standard library deletes features in response to changes in the language itself. Consider asyncio’s Lock. It has been awaitable ever since “await” was added as a keyword:
lock = asyncio.Lock() async def critical_section(): await lock try: print('holding lock') finally: lock.release()
But now, we can do “async with lock”:
lock = asyncio.Lock() async def critical_section(): async with lock: print('holding lock')
The new style is much better! It’s short, and less prone to mistakes in a big function with other try-except blocks. Since “there should be one and preferably only one obvious way to do it” the old syntax is deprecated in Python 3.7 and it will be banned soon.
It’s inevitable that ecological change will have this effect on your code too, so learn to delete features gently. Before you do so, consider the cost or benefit of deleting it. Responsible maintainers are reluctant to make their users change a large amount of their code, or change their logic. (Remember how painful it was when Python 3 removed the “u” string prefix, before it was added back.) If the code changes are mechanical, however, like a simple search and replace, or if the feature is dangerous, it may be worth deleting.
Whether to Delete a Feature
In the case of our hungry lizard, we decide to delete its legs so it can slither into a mouse’s hole and eat it. How do we go about this? We could just delete the
walk method, changing code from this:
class Reptile: def walk(self): print('step step step')
To this:
class Reptile: def slither(self): print('slide slide slide')
That’s not a good idea, the creature is used to walking! Or, in terms of a library, your users have code that relies on the existing method. When they upgrade to the latest version of your library, their code will break.
# User's code. Oops! Reptile.walk()
Therefore responsible creators make this promise:
Fifth Covenant:
Delete Features Gently
There’s a few steps involved in deleting a feature gently. Starting with a lizard that walks with its legs, you first add the new method, “slither”. Next, deprecate the old method.
import warnings class Reptile: def walk(self): warnings.warn( "walk is deprecated, use slither", DeprecationWarning, stacklevel=2) print('step step step') def slither(self): print('slide slide slide')
The Python warnings module is quite powerful. By default it prints warnings to stderr, only once per code location, but you can silence warnings or turn them into exceptions, among other options.
As soon as you add this warning to your library, PyCharm and other IDEs render the deprecated method with a strikethrough. Users know right away that the method is due for deletion.
Reptile().walk()
What happens when they run their code with the upgraded library?
> python3 script.py DeprecationWarning: walk is deprecated, use slither script.py:14: Reptile().walk() step step step
By default, they see a warning on stderr, but the script succeeds and prints “step step step”. The warning’s traceback shows what line of the user’s code must be fixed. (That’s what the “stacklevel” argument does: it shows the call site that users need to change, not the line in your library where the warning is generated.) Notice that the error message is instructive, it describes what a library user must do to migrate to the new version.
Your users will want to test their code and prove they call no deprecated library methods. Warnings alone won’t make unittests fail, but exceptions will. Python has a command-line option to turn deprecation warnings into exceptions:
> python3 -Werror::DeprecationWarning script.py Traceback (most recent call last): File "script.py", line 14, in <module> Reptile().walk() File "script.py", line 8, in walk DeprecationWarning, stacklevel=2) DeprecationWarning: walk is deprecated, use slither
Now, “step step step” is not printed, because the script terminates with an error.
So once you’ve released a version of your library that warns about the deprecated “walk” method, you can delete it safely in the next release. Right?
Consider what your library’s users might have in their projects’ requirements:
# User's requirements.txt has a dependency on the reptile package. reptile
The next time they deploy their code, they’ll install the latest version of your library. If they haven’t yet handled all deprecations then their code will break, because it still depends on “walk”. You need to be gentler than this. There are three more promises you must keep to your users: to maintain a changelog, choose a version scheme, and write an upgrade guide.
Sixth Covenant:
Maintain a Changelog
Your library must have a change log; its main purpose is to announce when a feature that your users rely on is deprecated or deleted.
Changes in Version 1.1
New features
- New function Reptile.slither()
Deprecations
- Reptile.walk() is deprecated and will be removed in version 2.0, use slither()
Responsible creators use version numbers to express how a library has changed, so users can make informed decisions about upgrading. A “version scheme” is a language for communicating the pace of change.
Seventh Covenant:
Choose a Version Scheme
There are two schemes in widespread use, semantic versioning and time-based versioning. I recommend semantic versioning for nearly any library. The Python flavor thereof is defined in PEP 440, and tools like “pip” understand semantic version numbers.
If you choose semantic versioning for your library, you can delete its legs gently with version numbers like:
1.0: First “stable” release, with walk()
1.1: Add slither(), deprecate walk()
2.0: Delete walk()
Your users should depend on a range of your library’s versions like so:
# User's requirements.txt. reptile>=1,<2
This allows them to upgrade automatically within a major release, receiving bugfixes and potentially raising some deprecation warnings, but not upgrading to the next major release and risking a change that breaks their code.
If you follow time-based version your releases might be numbered thus:
2017.06.0: A release in June 2017
2018.11.0: Add slither(), deprecate walk()
2019.04.0: Delete walk()
And users can depend on your library like:
# User's requirements.txt for time-based version. reptile==2018.11.*
This is terrific, but how do your users know your versioning scheme and how to test their code for deprecations? You have to advise them how to upgrade.
Eighth Covenant:
Write an Upgrade Guide
Here’s how a responsible library creator might guide users:
Upgrading to 2.0
Migrate from Deprecated APIs
See the changelog for deprecated features.
Enable Deprecation Warnings
Upgrade to 1.1 and test your code with:
python -Werror::DeprecationWarning
Now it's safe to upgrade.
You must teach users how to handle deprecation warnings by showing them the command line options. Not all Python programmers know this—I certainly have to look up the syntax each time. And take note, you must release a version that prints warnings from each deprecated API, so users can test with that version before upgrading again. In this example, version 1.1 is the bridge release. It allows your users to rewrite their code incrementally, fixing each deprecation warning separately until they have entirely migrated to the latest API. They can test changes to their code, and changes in your library, independently from each other, and isolate the cause of bugs.
If you chose semantic versioning, this transitional period lasts until the next major release, from 1.x to 2.0, or from 2.x to 3.0, and so on. The gentle way to delete a creature’s legs is to give it at least one version in which to adjust its lifestyle. Don’t remove the legs all at once!
Version numbers, deprecation warnings, the changelog, and the upgrade guide work together to gently evolve your library without breaking the covenant with your users. The Twisted project’s Compatibility Policy explains this beautifully:
“The First One’s Always Free”
Any application which runs without warnings may be upgraded one minor version of Twisted.
In other words, any application which runs its tests without triggering any warnings from Twisted should be able to have its Twisted version upgraded at least once with no ill effects except the possible production of new warnings.
Now, we creator deities have gained the wisdom and power to add features by adding methods, and to delete them gently. We can also add features by adding parameters, but this brings a new level of difficulty. Are you ready?
Adding Parameters
Imagine that you just gave your snake-like creature a pair of wings. Now you must allow it the choice whether to move by slithering or flying. Currently its “move” function takes one parameter:
# Your library code. def move(direction): print(f'slither {direction}') # A user's application. move('north')
You want to add a “mode” parameter, but this breaks your users’ code if they upgrade, because they pass only one argument:
# Your library code. def move(direction, mode): assert mode in ('slither', 'fly') print(f'{mode} {direction}') # A user's application. Error! move('north')
A truly wise creator promises not to break users’ code this way.
Ninth Covenant:
Add Parameters Compatibly
To keep this covenant, add each new parameter with a default value that preserves the original behavior.
# Your library code. def move(direction, mode='slither'): assert mode in ('slither', 'fly') print(f'{mode} {direction}') # A user's application. move('north')
Over time, parameters are the natural history of your function’s evolution. They’re listed oldest first, each with a default value. Library users can pass keyword arguments to opt in to specific new behaviors, and accept the defaults for all others.
# Your library code. def move(direction, mode='slither', turbo=False, extra_sinuous=False, hail_lyft=False): # ... # A user's application. move('north', extra_sinuous=True)
There is a danger, however, that a user might write code like this:
# A user's application, poorly-written. move('north', 'slither', False, True)
What happens if, in the next major version of your library, you get rid of one of the parameters, like “turbo”?
# Your library code, next major version. "turbo" is deleted. def move(direction, mode='slither', extra_sinuous=False, hail_lyft=False): # ... # A user's application, poorly-written. move('north', 'slither', False, True)
The user’s code still compiles, and this is a bad thing. The code stopped moving extra-sinuously and started hailing a Lyft, which was not the intention. I trust that you can predict what I’ll say next: deleting a parameter requires several steps. First, of course, deprecate the “turbo” parameter. I like a technique like this one that detects whether any user’s code is relying on this parameter:
# Your library code. _turbo_default = object() def move(direction, mode='slither', turbo=_turbo_default, extra_sinuous=False, hail_lyft=False): if turbo is not _turbo_default: warnings.warn( "'turbo' is deprecated", DeprecationWarning, stacklevel=2) else: # The old default. turbo = False
But your users might not notice the warning. Warnings are not very loud: they can be suppressed, or lost in log files. Users might heedlessly upgrade to the next major version of your library, the version that deletes “turbo”. Their code will run without error and silently do the wrong thing! As the Zen of Python says, “Errors should never pass silently.” Indeed, reptiles hear poorly, so you must correct them very loudly when they make mistakes.
The best way to protect your users is with Python 3’s star syntax, which requires callers to pass keyword arguments.
# Your library code. # All arguments after "*" must be passed by keyword. def move(direction, *, mode='slither', turbo=False, extra_sinuous=False, hail_lyft=False): # ... # A user's application, poorly-written. # Error! Can't use positional args, keyword args required. move('north', 'slither', False, True)
With the star in place, this is the only syntax allowed:
# A user's application. move('north', extra_sinuous=True)
Now when you delete “turbo”, you can be certain any user code that relies on it will fail loudly. If your library also supports Python 2, there’s no shame in that, you can simulate the star syntax thus, borrowing from the example in PEP-3102:
# Your library code, Python 2 compatible. def move(direction, *ignore, mode='slither', turbo=False, extra_sinuous=False, hail_lyft=False): if ignore: raise TypeError('Unexpected kwargs: %r' % ignore) # ...
(Previously I’d cited Brett Slatkin’s technique, but this one is simpler and a more accurate simulation of the Python 3 behavior.)
Requiring keyword arguments is a wise choice, but it requires foresight. If you allow an argument to be passed positionally, you cannot convert it to keyword-only in a later release. So, add the star now. You can observe in the asyncio API that it uses the star pervasively in constructors, methods, and functions. Even though “Lock” only takes one optional parameter so far, the asyncio developers added the star right away. This is providential.
# In asyncio. class Lock: def __init__(self, *, loop=None): # ...
Now we’ve gained the wisdom to change methods and parameters while keeping our covenant with users. The time has come to try the most challenging kind of evolution: changing behavior without changing either methods or parameters.
Changing Behavior
Let’s say your creature is a rattlesnake, and you want to teach it a new behavior.
Sidewinding! The creature’s body will appear the same, but its behavior will change. How can we prepare it for this step of its evolution?
A responsible creator can learn from the following example in the Python standard library, when behavior changed without a new function or parameters. Once upon a time, the os.stat function was introduced to get file statistics, like the modification time. At first, times were always integers.
>>> os.stat('file.txt').st_mtime 1540817862
One day, the core developers decided to use floats for os.stat times, to give sub-second precision. But they worried that existing user code wasn’t ready for the change. They created a setting in Python 2.3, “stat_float_times”, that was false by default. A user could set it to True to opt in to floating-point timestamps.
>>> # Python 2.3. >>> os.stat_float_times(True) >>> os.stat('file.txt').st_mtime 1540817862.598021
Starting in Python 2.5, float times became the default, so any new code written for 2.5 and later could ignore the setting and expect floats. Of course, you could set it to False to keep the old behavior, or set it to True to ensure the new behavior in all Python versions, and prepare your code for the day when stat_float_times is deleted.
Ages passed. In Python 3.1 the setting was deprecated to prepare people for the distant future, and finally, after its decades-long journey, the setting was removed. Float times are now the only option. It’s a long road, but responsible deities are patient because we know this gradual process has a good chance of saving users from unexpected behavior changes.
Tenth Covenant:
Change Behavior Gradually
Here are the steps:
- Add a flag to opt in to the new behavior, default False, warn if it’s False
- Change default to True, deprecate flag entirely
- Remove the flag
If you follow semantic versioning, the versions might be like so:
You need two major releases to complete the maneuver. If you had gone straight from “Add flag, default False, warn if it’s False” to “Remove flag” without the intervening release, your users’ code would be unable to upgrade. User code written correctly for 1.1, which sets the flag to True and handles the new behavior, must be able to upgrade to the next release with no ill effect except new warnings, but if the flag were deleted in the next release that code would break. A responsible deity never violates the Twisted policy: “The First One’s Always Free.”
The Responsible Creator
Our ten covenants belong loosely in three categories:
Evolve Cautiously
- Avoid Bad Features
- Minimize Features
- Keep Features Narrow
- Mark Experimental Features “Provisional”
- Delete Features Gently
Record History Rigorously
- Maintain a Changelog
- Choose a Version Scheme
- Write an Upgrade Guide
Change Slowly and Loudly
- Add Parameters Compatibly
- Change Behavior Gradually
If you keep these covenants with your creature, you’ll be a responsible creator deity. Your creature’s body can evolve over time, forever improving and adapting to changes in its environment, but without sudden changes the creature isn’t prepared for. If you maintain a library, keep these promises to your users, and you can innovate your library without breaking the code of the people who rely on you.
Illustrations:
- The World’s Progress, The Delphian Society, 1913
- Essay Towards a Natural History of Serpents, Charles Owen, 1742
- On the batrachia and reptilia of Costa Rica: With notes on the herpetology and ichthyology of Nicaragua and Peru, Edward Drinker Cope, 1875
- Natural History, Richard Lydekker et. al., 1897
- Mes Prisons, Silvio Pellico, 1843
- Tierfotoagentur / m.blue-shadow
- From Los Angeles Public Library, 1930
|
https://emptysqua.re/blog/api-evolution-the-right-way/
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
I’m looking for a way to enable token-based authentication in Jersey. I am trying not to use any particular framework. Is that possible?
My plan is: A user signs up for my web service, my web service generates a token, sends it to the client, and the client will retain it. Then the client, for each request, will send the token instead of username and password.
I was thinking of using a custom filter for each request and
@PreAuthorize("hasRole('ROLE')")
but I just thought that this causes a lot of requests to the database to check if the token is valid.
Or not create filter and in each request put a param token? So that each API first checks the token and after executes something to retrieve resource.
How token-based authentication works
In token-based authentication, the client exchanges hard credentials (such as username and password) for a piece of data called token. For each request, instead of sending the hard credentials, the client will send the token to the server to perform authentication and then authorization.
In a few words, an authentication scheme based on tokens follow these steps:
- The client sends their credentials (username and password) to the server.
- The server authenticates the credentials and, if they are valid, generate a token for the user.
- The server stores the previously generated token in some storage along with the user identifier and an expiration date.
- The server sends the generated token to the client.
- The client sends the token to the server in each request.
- The server, in each request, extracts the token from the incoming request. With the token, the server looks up the user details to perform authentication.
- If the token is valid, the server accepts the request.
- If the token is invalid, the server refuses the request.
- Once the authentication has been performed, the server performs authorization.
- The server can provide an endpoint to refresh tokens.
Note: The step 3 is not required if the server has issued a signed token (such as JWT, which allows you to perform stateless authentication).
What you can do with JAX-RS 2.0 (Jersey, RESTEasy and Apache CXF)
This solution uses only the JAX-RS 2.0 API, avoiding any vendor specific solution. So, it should work with JAX-RS 2.0 implementations, such as Jersey, RESTEasy and Apache CXF.
It is worthwhile to mention that if you are using token-based authentication, resource method which receives and validates the credentials (username and password) and issue a token for the user:
@Path("/authentication") public class AuthenticationEndpoint { @POST @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_FORM_URLENCODED) public Response authenticateUser(@FormParam("username") String username, @FormParam("password") String password) { try { // Authenticate the user using the credentials provided authenticate(username, password); // Issue a token for the user String token = issueToken(username); // Return the token on the response return Response.ok(token).build(); } catch (Exception e) { return Response.status(Response.Status.FORBIDDEN).build(); } } private void authenticate(String username, String password) throws Exception { // Authenticate against a database, LDAP, file or whatever // Throw an Exception if the credentials are invalid } private String issueToken(String username) { // Issue a token (can be a random String persisted to a database or a JWT token) // The issued token must be associated to a user // Return the issued token } }
If any exceptions are thrown when validating the credentials, a response with the status
403 (Forbidden) will be returned.
If the credentials are successfully validated, a response with the status
200 (OK) will be returned and the issued token will be sent to the client in the response payload. The client must send the token to the server in every request.
When consuming
application/x-www-form-urlencoded, the client must to send the credentials in the following format in the request payload:
username=admin&password=123456
Instead of form params, it’s possible to wrap the username and the password into a class:
public class Credentials implements Serializable { private String username; private String password; // Getters and setters omitted }
And then consume it as JSON:
@POST @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public Response authenticateUser(Credentials credentials) { String username = credentials.getUsername(); String password = credentials.getPassword(); // Authenticate the user, issue a token and return a response }
Using this approach, the client must to send the credentials in the following format in the payload of the request:
{ "username": "admin", "password": "123456" }
Extracting the token from the request and validating it
The client should send the token in the standard HTTP
Authorization header of the request. For example:
Authorization: Bearer <token-goes-here>
The name of the standard HTTP header is unfortunate because it carries authentication information, not authorization. However, it’s the standard HTTP header for sending credentials to the server.
JAX-RS provides
@NameBinding, a meta-annotation used to create other annotations to bind filters and interceptors to resource classes and methods. Define a
@Secured annotation as following:
@NameBinding @Retention(RUNTIME) @Target({TYPE, METHOD}) public @interface Secured { }
The above defined name-binding annotation will be used to decorate a filter class, which implements
ContainerRequestFilter, allowing you to intercept the request before it be handled by a resource method. The
ContainerRequestContext can be used to access the HTTP request headers and then extract the token:
@Secured @Provider @Priority(Priorities.AUTHENTICATION) public class AuthenticationFilter implements ContainerRequestFilter { private static final String AUTHENTICATION_SCHEME = "Bearer"; @Override public void filter(ContainerRequestContext requestContext) throws IOException { // Get the Authorization header from the request String authorizationHeader = requestContext.getHeaderString(HttpHeaders.AUTHORIZATION); // Validate the Authorization header if (!isTokenBasedAuthentication(authorizationHeader)) { abortWithUnauthorized(requestContext); return; } // Extract the token from the Authorization header String token = authorizationHeader .substring(AUTHENTICATION_SCHEME.length()).trim(); try { // Validate the token validateToken(token); } catch (Exception e) { abortWithUnauthorized(requestContext); } } private boolean isTokenBasedAuthentication(String authorizationHeader) { // Check if the Authorization header is valid // It must not be null and must be prefixed with "Bearer" plus a whitespace // Authentication scheme comparison must be case-insensitive return authorizationHeader != null && authorizationHeader.toLowerCase() .startsWith(AUTHENTICATION_SCHEME.toLowerCase() + " "); } private void abortWithUnauthorized(ContainerRequestContext requestContext) { // Abort the filter chain with a 401 status code // The "WWW-Authenticate" header is sent along with the response requestContext.abortWith( Response.status(Response.Status.UNAUTHORIZED) .header(HttpHeaders.WWW_AUTHENTICATE, AUTHENTICATION_SCHEME) .build()); } private void validateToken(String token) throws Exception { // Check if it was issued by the server and if it's not expired // Throw an Exception if the token is invalid } }
If any problems happen during the token validation, a response with the status
401 (Unauthorized) will be returned. Otherwise the request will proceed to a resource method.
Securing your REST endpoints
To bind the authentication filter to resource methods or resource classes, annotate them with the
@Secured annotation created above. For the methods and/or classes that are annotated, the filter will be executed. It means that such endpoints will only be reached if the request is performed with a valid token.
If some methods or classes do not need authentication, simply do not annotate them:
@Path("/example") public class ExampleResource { @GET @Path("{id}") @Produces(MediaType.APPLICATION_JSON) public Response myUnsecuredMethod(@PathParam("id") Long id) { // This method is not annotated with @Secured // The authentication filter won't be executed before invoking this method ... } @DELETE @Secured @Path("{id}") @Produces(MediaType.APPLICATION_JSON) public Response mySecuredMethod(@PathParam("id") Long id) { // This method is annotated with @Secured // The authentication filter will be executed before invoking this method // The HTTP request must be performed with a valid token ... } }
In the example shown above, the filter will be executed only for the
mySecuredMethod(Long) method because it’s annotated with
@Secured.
Identifying the current user
It’s very likely that you will need to know the user who is performing the request agains your REST API. The following approaches can be used to achieve it:
Overriding the security context of the current request
Within your
ContainerRequestFilter.filter(ContainerRequestContext) method, a new
SecurityContext instance can be set for the current request. Then override the
SecurityContext.getUserPrincipal(), returning a
Principal instance:
final SecurityContext currentSecurityContext = requestContext.getSecurityContext(); requestContext.setSecurityContext(new SecurityContext() { @Override public Principal getUserPrincipal() { return new Principal() { @Override public String getName() { return username; } }; } @Override public boolean isUserInRole(String role) { return true; } @Override public boolean isSecure() { return currentSecurityContext.isSecure(); } @Override public String getAuthenticationScheme() { return "Bearer"; } });
Use the token to look up the user identifier (username), which will be the
Principal‘s name.
Inject the
SecurityContext in any JAX-RS resource class:
@Context SecurityContext securityContext;
The same can be done in a JAX-RS resource method:
@GET @Secured @Path("{id}") @Produces(MediaType.APPLICATION_JSON) public Response myMethod(@PathParam("id") Long id, @Context SecurityContext securityContext) { ... }
And then get the
Principal:
Principal principal = securityContext.getUserPrincipal(); String username = principal.getName();
Using CDI (Context and Dependency Injection)
If, for some reason, you don’t want to override the
SecurityContext, you can use CDI (Context and Dependency Injection), which provides useful features such as events and producers.
Create a CDI qualifier:
@Qualifier @Retention(RUNTIME) @Target({ METHOD, FIELD, PARAMETER }) public @interface AuthenticatedUser { }
In your
AuthenticationFilter created above, inject an
Event annotated with
@AuthenticatedUser:
@Inject @AuthenticatedUser Event<String> userAuthenticatedEvent;
If the authentication succeeds, fire the event passing the username as parameter (remember, the token is issued for a user and the token will be used to look up the user identifier):
userAuthenticatedEvent.fire(username);
It’s very likely that there’s a class that represents a user in your application. Let’s call this class
User.
Create a CDI bean to handle the authentication event, find a
User instance with the correspondent username and assign it to the
authenticatedUser producer field:
@RequestScoped public class AuthenticatedUserProducer { @Produces @RequestScoped @AuthenticatedUser private User authenticatedUser; public void handleAuthenticationEvent(@Observes @AuthenticatedUser String username) { this.authenticatedUser = findUser(username); } private User findUser(String username) { // Hit the the database or a service to find a user by its username and return it // Return the User instance } }
The
authenticatedUser field produces a
User instance that can be injected into container managed beans, such as JAX-RS services, CDI beans, servlets and EJBs. Use the following piece of code to inject a
User instance (in fact, it’s a CDI proxy):
@Inject @AuthenticatedUser User authenticatedUser;
Note that the CDI
@Produces annotation is different from the JAX-RS
@Produces annotation:
- CDI:
javax.enterprise.inject.Produces
- JAX-RS:
javax.ws.rs.Produces
Be sure you use the CDI
@Produces annotation in your
AuthenticatedUserProducer bean.
The key here is the bean annotated with
@RequestScoped, allowing you to share data between filters and your beans. If you don’t wan’t to use events, you can modify the filter to store the authenticated user in a request scoped bean and then read it from your JAX-RS resource classes.
Compared to the approach that overrides the
SecurityContext, the CDI approach allows you to get the authenticated user from beans other than JAX-RS resources and providers.
Supporting role-based authorization
Please refer to my other answer for details on how to support role-based authorization.
Issuing tokens
A token can be:
- Opaque: Reveals no details other than the value itself (like a random string)
- Self-contained: Contains details about the token itself (like JWT).
See details below:
Random string as token
A token can be issued by generating a random string and persisting it to a database along with the user identifier and an expiration date. A good example of how to generate a random string in Java can be seen here. You also could use:
Random random = new SecureRandom(); String token = new BigInteger(130, random).toString(32);
JWT (JSON Web Token)
JWT (JSON Web Token) is a standard method for representing claims securely between two parties and is defined by the RFC 7519.
It’s a self-contained token and it enables you to store details in claims. These claims are stored in the token payload which is a JSON encoded as Base64. Here are some claims registered in the RFC 7519 and what they mean (read the full RFC for further details):
iss: Principal that issued the token.
sub: Principal that is the subject of the JWT.
exp: Expiration date for the token.
nbf: Time on which the token will start to be accepted for processing.
iat: Time on which the token was issued.
jti: Unique identifier for the token.
Be aware that you must not store sensitive data, such as passwords, in the token.
The payload can be read by the client and the integrity of the token can be easily checked by verifying its signature on the server. The signature is what prevents the token from being tampered with.
You won’t need to persist JWT tokens if you don’t need to track them. Althought, by persisting the tokens, you will have the possibility of invalidating and revoking the access of them. To keep the track of JWT tokens, instead of persisting the whole token on the server, you could persist the token identifier (
jti claim) along with some other details such as the user you issued the token for, the expiration date, etc.
When persisting tokens, always consider removing the old ones in order to prevent your database from growing indefinitely.
Using JWT
There are a few Java libraries to issue and validate JWT tokens such as:
To find some other great resources to work with JWT, have a look at.
Handling token refreshment with JWT
Accept only valid (and non-expired) tokens for refreshment. It’s responsability of the client to refresh the tokens before the expiration date indicated in the
exp claim.
You should prevent the tokens from being refreshed indefinitely. See below a few approaches that you could consider.
You could keep the track of token refreshment by adding two claims to your token (the claim names are up to you):
refreshLimit: Indicates how many times the token can be refreshed.
refreshCount: Indicates how many times the token has been refreshed.
So only refresh the token if the following conditions are true:
- The token is not expired (
exp >= now).
- The number of times that the token has been refreshed is less than the number of times that the token can be refreshed (
refreshCount < refreshLimit).
And when refreshing the token:
- Update the expiration date (
exp = now + some-amount-of-time).
- Increment the number of times that the token has been refreshed (
refreshCount++).
Alternatively to keeping the track of the number of refreshments, you could have a claim that indicates the absolute expiration date (which works pretty similar to the
refreshLimit claim described above). Before the absolute expiration date, any number of refreshments is acceptable.
Another approach involves issuing a separate long-lived refresh token that is used to issue short-lived JWT tokens.
The best approach depends on your requirements.
Handling token revocation with JWT
If you want to revoke tokens, you must keep the track of them. You don’t need to store the whole token on server side, store only the token identifier (that must be unique) and some metadata if you need. For the token identifier you could use UUID.
The
jti claim should be used to store the token identifier on the token. When validating the token, ensure that it has not been revoked by checking the value of the
jti claim against the token identifiers you have on server side.
For security purposes, revoke all the tokens for a user when they change their password.
Additional information
- It doesn’t matter which type of authentication you decide to use. Always do it on the top of a HTTPS connection to prevent the man-in-the-middle attack.
- Take a look at this question from Information Security for more information about tokens.
- In this article you will find some useful information about token-based authentication.
This answer is all about authorization and it is a complement of my previous answer about authentication
Why another answer? I attempted to expand my previous answer by adding details on how to support JSR-250 annotations. However the original answer became the way too long and exceeded the maximum length of 30,000 characters. So I moved the whole authorization details to this answer, keeping the other answer focused on performing authentication and issuing tokens.
Supporting role-based authorization with the
@Secured annotation
Besides authentication flow shown in the other answer, role-based authorization can be supported in the REST endpoints.
Create an enumeration and define the roles according to your needs:
public enum Role { ROLE_1, ROLE_2, ROLE_3 }
Change the
@Secured name binding annotation created before to support roles:
@NameBinding @Retention(RUNTIME) @Target({TYPE, METHOD}) public @interface Secured { Role[] value() default {}; }
And then annotate the resource classes and methods with
@Secured to perform the authorization. The method annotations will override the class annotations:
@Path("/example") @Secured({Role.ROLE_1}) public class ExampleResource { @GET @Path("{id}") @Produces(MediaType.APPLICATION_JSON) public Response myMethod(@PathParam("id") Long id) { // This method is not annotated with @Secured // But it's declared within a class annotated with @Secured({Role.ROLE_1}) // So it only can be executed by the users who have the ROLE_1 role ... } @DELETE @Path("{id}") @Produces(MediaType.APPLICATION_JSON) @Secured({Role.ROLE_1, Role.ROLE_2}) public Response myOtherMethod(@PathParam("id") Long id) { // This method is annotated with @Secured({Role.ROLE_1, Role.ROLE_2}) // The method annotation overrides the class annotation // So it only can be executed by the users who have the ROLE_1 or ROLE_2 roles ... } }
Create a filter with the
AUTHORIZATION priority, which is executed after the
AUTHENTICATION priority filter defined previously.
The
ResourceInfo can be used to get the resource
Method and resource
Class that will handle the request and then extract the
@Secured annotations from them:
@Secured @Provider @Priority(Priorities.AUTHORIZATION) public class AuthorizationFilter implements ContainerRequestFilter { @Context private ResourceInfo resourceInfo; @Override public void filter(ContainerRequestContext requestContext) throws IOException { // Get the resource class which matches with the requested URL // Extract the roles declared by it Class<?> resourceClass = resourceInfo.getResourceClass(); List<Role> classRoles = extractRoles(resourceClass); // Get the resource method which matches with the requested URL // Extract the roles declared by it Method resourceMethod = resourceInfo.getResourceMethod(); List<Role> methodRoles = extractRoles(resourceMethod); try { // Check if the user is allowed to execute the method // The method annotations override the class annotations if (methodRoles.isEmpty()) { checkPermissions(classRoles); } else { checkPermissions(methodRoles); } } catch (Exception e) { requestContext.abortWith( Response.status(Response.Status.FORBIDDEN).build()); } } // Extract the roles from the annotated element private List<Role> extractRoles(AnnotatedElement annotatedElement) { if (annotatedElement == null) { return new ArrayList<Role>(); } else { Secured secured = annotatedElement.getAnnotation(Secured.class); if (secured == null) { return new ArrayList<Role>(); } else { Role[] allowedRoles = secured.value(); return Arrays.asList(allowedRoles); } } } private void checkPermissions(List<Role> allowedRoles) throws Exception { // Check if the user contains one of the allowed roles // Throw an Exception if the user has not permission to execute the method } }
If the user has no permission to execute the operation, the request is aborted with a
403 (Forbidden).
To know the user who is performing the request, see my previous answer. You can get it from the
SecurityContext (which should be already set in the
ContainerRequestContext) or inject it using CDI, depending on the approach you go for.
If a
@Secured annotation has no roles declared, you can assume all authenticated users can access that endpoint, disregarding the roles the users have.
Supporting role-based authorization with JSR-250 annotations
Alternatively to defining the roles in the
@Secured annotation as shown above, you could consider JSR-250 annotations such as
@RolesAllowed,
@PermitAll and
@DenyAll.
JAX-RS doesn’t support such annotations out-of-the-box, but it could be achieved with a filter. Here are a few considerations to keep in mind if you want to support all of them:
@DenyAllon the method takes precedence over
@RolesAllowedand
@PermitAllon the class.
@RolesAllowedon the method takes precedence over
@PermitAllon the class.
@PermitAllon the method takes precedence over
@RolesAllowedon the class.
@DenyAllcan’t be attached to classes.
@RolesAllowedon the class takes precedence over
@PermitAllon the class.
So an authorization filter that checks JSR-250 annotations could be like:
@Provider @Priority(Priorities.AUTHORIZATION) public class AuthorizationFilter implements ContainerRequestFilter { @Context private ResourceInfo resourceInfo; @Override public void filter(ContainerRequestContext requestContext) throws IOException { Method method = resourceInfo.getResourceMethod(); // @DenyAll on the method takes precedence over @RolesAllowed and @PermitAll if (method.isAnnotationPresent(DenyAll.class)) { refuseRequest(); } // @RolesAllowed on the method takes precedence over @PermitAll RolesAllowed rolesAllowed = method.getAnnotation(RolesAllowed.class); if (rolesAllowed != null) { performAuthorization(rolesAllowed.value(), requestContext); return; } // @PermitAll on the method takes precedence over @RolesAllowed on the class if (method.isAnnotationPresent(PermitAll.class)) { // Do nothing return; } // @DenyAll can't be attached to classes // @RolesAllowed on the class takes precedence over @PermitAll on the class rolesAllowed = resourceInfo.getResourceClass().getAnnotation(RolesAllowed.class); if (rolesAllowed != null) { performAuthorization(rolesAllowed.value(), requestContext); } // @PermitAll on the class if (resourceInfo.getResourceClass().isAnnotationPresent(PermitAll.class)) { // Do nothing return; } // Authentication is required for non-annotated methods if (!isAuthenticated(requestContext)) { refuseRequest(); } } /** * Perform authorization based on roles. * * @param rolesAllowed * @param requestContext */ private void performAuthorization(String[] rolesAllowed, ContainerRequestContext requestContext) { if (rolesAllowed.length > 0 && !isAuthenticated(requestContext)) { refuseRequest(); } for (final String role : rolesAllowed) { if (requestContext.getSecurityContext().isUserInRole(role)) { return; } } refuseRequest(); } /** * Check if the user is authenticated. * * @param requestContext * @return */ private boolean isAuthenticated(final ContainerRequestContext requestContext) { // Return true if the user is authenticated or false otherwise // An implementation could be like: // return requestContext.getSecurityContext().getUserPrincipal() != null; } /** * Refuse the request. */ private void refuseRequest() { throw new AccessDeniedException( "You don't have permissions to perform this action."); } }
Note: The above implementation is based on the Jersey
RolesAllowedDynamicFeature. If you use Jersey, you don’t need to write your own filter, just use the existing implementation.
Tags: authentication, rest, sed
|
https://exceptionshub.com/best-practice-for-rest-token-based-authentication-with-jax-rs-and-jersey.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Log pings
This flag causes all subsequent pings that are submitted to also be echoed to the product's log.
Once enabled, the only way to disable this feature is to restart or manually reset the application.
On how to access logs
The Glean SDKs log warnings and errors through platform-specific logging frameworks. See the platform-specific instructions for information on how to view the logs on the platform you are on.
Limits
- The accepted values are
trueor
false. Any other value will be ignored.
API
setLogPings
Enables or disables ping logging.
This API can safely be called before
Glean.initialize.
The tag will be applied upon initialization in this case.
import Glean Glean.shared.setLogPings(true)
use glean; glean.set_log_pings(true);
import Glean from "@mozilla/glean/<platform>"; Glean.setLogPings(true);
Environment variable
GLEAN_LOG_PINGS
It is also possible to enable ping logging through
the
GLEAN_LOG_PINGS_LOG_PINGS=true python my_application.py
$ GLEAN_LOG_PINGS=true cargo run
$ GLEAN_LOG_PINGS=true ./mach run
|
https://mozilla.github.io/glean/book/reference/debug/logPings.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
NAME
SYNOPSIS
DESCRIPTION
CTL NAMESPACE
CTL EXTERNAL CONFIGURATION
SEE ALSO
pmemobj_ctl_getU()/pmemobj_ctl_getW(), pmemobj_ctl_setU()/pmemobj_ctl_setW(), pmemobj_ctl_execU()/pmemobj_ctl_execW()
#include <libpmemobj.h> int pmemobj_ctl_getU(PMEMobjpool *pop, const char *name, void *arg); (EXPERIMENTAL) int pmemobj_ctl_getW(PMEMobjpool *pop, const wchar_t *name, void *arg); (EXPERIMENTAL) int pmemobj_ctl_setU(PMEMobjpool *pop, const char *name, void *arg); (EXPERIMENTAL) int pmemobj_ctl_setW(PMEMobjpool *pop, const wchar_t *name, void *arg); (EXPERIMENTAL) int pmemobj_ctl_execU(PMEMobjpool *pop, const char *name, void *arg); (EXPERIMENTAL) int pmemobj_ctl_execW(PMEMobjpool *pop, const wchar_t *name, void *arg); (EXPERIMENTAL)obj_ctl_getU()/pmemobj_ctl_getW(), pmemobj_ctl_setU()/pmemobj_ctl_setW() and pmemobj_ctl_execU()/pmemobj_ctl_execW() functions provide a uniform interface for querying and modifying the internal behavior of libpmemobjobj_createU()/pmemobj_createW() function.
prefault.at_open | rw | global | int | int | - | boolean
If set, every page of the pool will be touched and written to when the pool is opened, in order to trigger page allocation and minimize the performance impact of pagefaults. Affects only the pmemobj_openU()/pmemobj_openW() function.
sds.at_create | rw | global | int | int | - | boolean
If set, force-enables or force-disables SDS feature during pool creation. Affects only the pmemobj_createU()/pmemobj_createW().
tx.cache.size | rw | - | long long | long long | - | integer
Size in bytes of the transaction snapshot cache. In a larger cache the frequency of persistent allocations is lower, but with higher fixed cost.
This should be set to roughly the sum of sizes of the snapshotted regions in an average transaction in the pool.
This.
heap.alloc_class.[class_id].desc | rw | - |
struct pobj_alloc_class_desc |
struct pobj_alloc_class_desc | - | integer, integer, integer, string
Describes an allocation class. Allows one to create or view the internal data structures of the allocator.
Creating custom allocation classes can be beneficial for both raw allocation throughput, scalability and, most importantly, fragmentation. By carefully constructing allocation classes that match the application workload, one can entirely eliminate external and internal fragmentation. For example, it is possible to easily construct a slab-like allocation mechanism for any data structure.
The
[class_id] is an index field. Only values between 0-254 are valid.
If setting an allocation class, but the
class_id is already taken, the
function will return -1.
The values between 0-127 are reserved for the default allocation classes of the
library and can be used only for reading.
The recommended method for retrieving information about all allocation classes is to call this entry point for all class ids between 0 and 254 and discard those results for which the function returns an error.
This entry point takes a complex argument.
struct pobj_alloc_class_desc { size_t unit_size; size_t alignment; unsigned units_per_block; enum pobj_header_type header_type; unsigned class_id; };
The first field,
unit_size, is an 8-byte unsigned integer that defines the
allocation class size. While theoretically limited only by
PMEMOBJ_MAX_ALLOC_SIZE, for most workloads this value should be between
8 bytes and 2 megabytes.
The
alignment to match the internal size of the
block (256 kilobytes or a multiple thereof). For example, given a class with
a
unit_size of 512 bytes and a
units_per_block of 1000, a single block of
memory for that class will have 512 kilobytes.
This is relevant because the bigger the block size, the less frequently blocks
need to be fetched, resulting in lower contention on global heap state.
If the CTL call is being done at runtime, the
units_per_block variable of the
provided alloc class structure is modified to match the actual value..
none. Header type that incurs no metadata overhead beyond a single bitmap entry. Can be used for very small allocation classes or when objects must be adjacent to each other. This header type does not support type numbers (type number is always 0) or allocations that span more than one unit.
The
class_id field is an optional, runtime-only variable that allows the
user to retrieve the identifier of the class. This will be equivalent to the
provided
[class_id]. This field cannot be set from a config file.
The allocation classes are a runtime state of the library and must be created after every open. It is highly recommended to use the configuration file to store the classes.
This structure is declared in the
libpmemobj/ctl.h header file. Please refer
to this file for an in-depth explanation of the allocation classes and relevant
algorithms.
Allocation classes constructed in this way can be leveraged by explicitly specifying the class using POBJ_CLASS_ID(id) flag in pmemobj_tx_xalloc()/pmemobj_xalloc() functions.
Example of a valid alloc class query string:
heap.alloc_class.128.desc=500,0,1000,compact
This query, if executed, will create an allocation class with an id of 128 that has a unit size of 500 bytes, has at least 1000 units per block and uses a compact header.
For reading, function returns 0 if successful, if the allocation class does not exist it sets the errno to ENOENT and returns -1;
This entry point can fail if any of the parameters of the allocation class is invalid or if exactly the same class already exists.
heap.alloc_class.new.desc | -w | - | - |
struct pobj_alloc_class_desc | - | integer, integer, integer, string
Same as
heap.alloc_class.[class_id].desc, but instead of requiring the user
to provide the class_id, it automatically creates the allocation class with the
first available identifier.
This should be used when it’s impossible to guarantee unique allocation class naming in the application (e.g. when writing a library that uses libpmemobj).
The required class identifier will be stored in the
class_id field of the
struct pobj_alloc_class_desc..
heap.size.granularity | rw- | - | uint64_t | uint64_t | - | long long
Reads or modifies the granularity with which the heap grows when OOM. Valid only if the poolset has been defined with directories.
A granularity of 0 specifies that the pool will not grow automatically.
This.
libpmemobj(7), pmem_ctl(5) and
The contents of this web site and the associated GitHub repositories are BSD-licensed open source.
|
https://pmem.io/pmdk/manpages/windows/v1.10/libpmemobj/pmemobj_ctl_get.3/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Filed.
The example I will use is about car maintenance. A car with an empty fuel tank need to be refueled. The car exists behind a web service with three methods defined using the Web Service Definition Language, WSDL, below.
<definitions xmlns: <types> <xsd:schema> <xsd:import </xsd:schema> </types> <message name="addFuel"> <part name="parameters" element="tns:addFuel"/> </message> <message name="addFuelResponse"> <part name="parameters" element="tns:addFuelResponse"/> </message> <message name="getFuelLevel"> <part name="parameters" element="tns:getFuelLevel"/> </message> <message name="getFuelLevelResponse"> <part name="parameters" element="tns:getFuelLevelResponse"/> </message> <message name="emptyFuel"> <part name="parameters" element="tns:emptyFuel"/> </message> <message name="emptyFuelResponse"> <part name="parameters" element="tns:emptyFuelResponse"/> </message> <portType name="Car"> <operation name="addFuel"> <input wsam: <output wsam: </operation> <operation name="getFuelLevel"> <input wsam: <output wsam: </operation> <operation name="emptyFuel"> <input wsam: <output wsam: </operation> </portType> <binding name="CarPortBinding" type="tns:Car"> <soap:binding <operation name="addFuel"> <soap:operation <input> <soap:body </input> <output> <soap:body </output> </operation> <operation name="getFuelLevel"> <soap:operation <input> <soap:body </input> <output> <soap:body </output> </operation> <operation name="emptyFuel"> <soap:operation <input> <soap:body </input> <output> <soap:body </output> </operation> </binding> <service name="CarService"> <port name="CarPort" binding="tns:CarPortBinding"> <soap:address </port> </service> </definitions>
We can see that there are three methods available here
This a really boring example, you can add fuel, check the level and empty the tank. But it is sufficient complicated so you can get a feeling that the example actually does something. We will not use the WSDL, it is presented just so you can see the definition and be glad that you don't have to penetrate everything to understand the example.
Before you start building the example, I need to show the file structure this example lives in.
example --- product --- src -- main -- java -- se -- sigma -- example --- Car.java | | | | | -- WebService.java | | | | | -- pom.xml | | -- test --- src -- test -- java -- se -- sigma -- example --- FuelCarTest.java | | | | | | | -- FuelCarSteps.java | | | | | -- resources -- se -- sigma -- example -- CarMaintenance.feature | | | -- pom.xml | | -- pom.xml
Given this file structure, you should be able to re-create this example.
We have seen the WSDL for the web service that will be used in the example. The actual implementation is done using two classes, Car and WebService. Car is the domain logic and WebService is the provider that will make Car available.
The Car implementation:
File: product/src/main/java/se/sigma/example/Car.java package se.sigma.example; import javax.jws.WebMethod; import javax.jws.WebService; import java.util.Date; @WebService public class Car { private Integer fuelLevel; public Car() { fuelLevel = 0; } @WebMethod public void addFuel(int addedAmount) { String message = "adding " + addedAmount; usageLog(message); fuelLevel = fuelLevel + addedAmount; } @WebMethod public Integer getFuelLevel() { String message = "returning fuel level " + fuelLevel; usageLog(message); return fuelLevel; } @WebMethod public void emptyFuel() { String message = "Emptying fuel tank"; usageLog(message); fuelLevel = 0; } private void usageLog(String message) { Date now = new Date(); System.out.println(now + " " + message); } }
This is nothing more than a pojo, plain old java object, with some annotations.
The simplest possible thing that could work for providing this as a web service might be to use the javax.xml.ws.Endpoint class as the publishing tool. It will take the annotated class make it available as a web service. The implementation I use looks like this:
File: product/src/main/java/se/sigma/example/WebService.java package se.sigma.example; import javax.xml.ws.Endpoint; public class WebService { public static void main(String[] args) { Endpoint.publish("", new Car()); } }
The service will be published behind port 8090 and the context car. You should be able to access it from
The final part that is needed to tie this example together is a Maven pom. The one I used here looks like:
File: product/pom.xml <?xml version="1.0" encoding="UTF-8"?> <project> <modelVersion>4.0.0</modelVersion> <parent> <groupId>se.sigma.cucumber</groupId> <artifactId>example</artifactId> <version>1.0-SNAPSHOT</version> </parent> <artifactId>product</artifactId> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> </properties> <dependencies> <dependency> <groupId>com.sun.xml.ws</groupId> <artifactId>jaxws-rt</artifactId> <version>2.2.5</version> </dependency> </dependencies> </project>
This simple implementation created the WSDL above. The next step is to create a SoapUI project that can be used to explore and test the service.
The rest of this example assumes that the main defined above in the WebService is running. Start it from your IDE or whatever tool you use to edit Java. I started it from the tool of my choice, IntelliJ IDEA.
The star in this example is SoapUI. SoapUI is an open source testing tool that will allow you to explore a Web Service just by examining the WSDL. An introduction with some images can be found at I will limit my self to explain the steps brief in text.
Now we have a working SoapUI project. We could stay here. But that would mean that somebody would have to run the test manually. We would rather have a build system or similar to run the test often. Lets use the SoapUI project and connect to three different tools. These tools are:
Maven is a build tool that many people has opinions about, either they hate it or they love it. I will show how the Maven SoapUI plugin can be configured to run the project we just set up.
We need to define a plugin repository, SoapUI isn't available on Maven Central. It should be
<pluginRepositories> <pluginRepository> <id>eviwarePluginRepository</id> <url></url> </pluginRepository> </pluginRepositories>
The plugin also has to be configured
<plugin> <groupId>eviware</groupId> <artifactId>maven-soapui-plugin</artifactId> <version>4.0.1</version> <configuration> <projectFile>
The most important things here are of course the parameters that we may want to vary and the path to the SoapUI script.
Note the syntax for defining the parameters, you have to have the name followed by an equal sign and than the value. There may not be any spaces.
The complete Maven pom will be available below.
Another option for testing web services through SoapUI is to connect to it from JUnit. This would eliminate the usage of the Maven Plugin. An example is the JUnit implementation below:
File: test/src/test/java/se/sigma/example/junit/FuelCarTest.java package se.sigma.example.junit; import com.eviware.soapui.tools.SoapUITestCaseRunner; import org.junit.Test; public class FuelCarTest { @Test public void verifyTheInputValueIsReturned() throws Exception {"); String[] properties = new String[2]; properties[0] = "addedFuel=42"; properties[1] = "expectedFuel=42"; runner.setProjectProperties(properties); runner.run(); } }
The parameters are set using the same syntax as in the Maven plugin. They are defined in the array properties and passed to the script. The path to the SoapUI script is defined as an absolute path, a relative path resulted in problems for the SoapUITestCaseRunner to locate the script.
Both the Maven plugin and the JUnit implementation has problems with it descriptions. It is not easy to look at the Maven plugin and decide what this test actually does. Similar, it is not very easy to look at the JUnit implementation and see what actually is going on here. This may be corrected with the next tool, Cucumber.
Cucumber is a Behaviour Driven Development, BDD, tool that allows you to define what you expect with the syntax
This example is than translated to executable code through some step definitions. The steps are not meant for somebody unfamiliar with code to read. They should read a feature that defines what we expect, other has to implement and read the steps that actually uses the system under test.
A feature that defines what we want to verify may look like:
File: test/src/test/resources/se.sigma.example.cucumber/CarMaintenance.feature Feature: Daily car maintenance Cars need maintenance Scenario: Fuelling Given a car with an empty gas tank When you fill it with 50 litres of fuel Then the tank contains 50 litres
This feature cannot live by itself, it need support. The most important thing is to define the steps that actually connects to the Web Service. One implementation may look like:
File: test/src/test/java/se/sigma/example/cucumber/FuelCarSteps.java package se.sigma.example.cucumber; import com.eviware.soapui.tools.SoapUITestCaseRunner; import cucumber.annotation.en.Given; import cucumber.annotation.en.Then; import cucumber.annotation.en.When; public class FuelCarSteps { private String[] properties = new String[2]; @Given("^a car with an empty gas tank$") public void a_car_with_an_empty_gas_tank() { // Nothing to do here, it will be taken care of in the SoapUI script } @When("^you fill it with (.*) litres of fuel$") public void you_fill_it_with_litres_of_fuel(String addedFuel) { properties[0] = "addedFuel=" + addedFuel; } @Then("^the tank contains (.*) litres$") public void the_tank_contains_litres(String expectedFuel) throws Exception { properties[1] = "expectedFuel=" + expectedFuel;"); runner.setProjectProperties(properties); runner.run(); } }
The Given and When steps doesn't really do anything else than read some parameters and store them so they are available in the final Then step. It is of course the Then step that actually performs anything.
The steps need to be glued together with the steps. It can be done using a JUnit runner. It is implemented as:
File: test/src/test/java/se/sigma/example/cucumber/FuelCarTest.java package se.sigma.example.cucumber; import cucumber.junit.Cucumber; import org.junit.runner.RunWith; @RunWith(Cucumber.class) public class FuelCarTest { }
Note that the steps may not be defined in the test class. Steps are defined globally and defining them in the test class would tie them hard to this particular feature.
A Maven pom that supports all three tools at the same time may look like this:
File: test/pom.xml <?xml version="1.0" encoding="UTF-8"?> <project> <modelVersion>4.0.0</modelVersion> <parent> <groupId>se.sigma.cucumber</groupId> <artifactId>example</artifactId> <version>1.0-SNAPSHOT</version> </parent> <artifactId>test</artifactId> <repositories> <repository> <id>soapUI</id> <url></url> </repository> </repositories> .1</version> <configuration> <projectFile>./example> </plugins> </build> <dependencies> <dependency> <groupId>se.sigma.cucumber</groupId> <artifactId>product</artifactId> <version>1.0-SNAPSHOT</version> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.10</version> </dependency> <dependency> <groupId>eviware</groupId> <artifactId>maven-soapui-plugin</artifactId> <version>4.0.1< verify a web service. We need some tools and we need to connect them properly. I choosed to use Cucumber to increase the readability in this example. If you prefer using the Maven plugin or JUnit, please do so.
This post has been reviewed by some people who I wish to thank for their help
Thank you very much for your feedback!
|
https://www.thinkcode.se/blog/2011/11/16/testing-a-web-service-with-soapui-junit-maven-and-cucumber
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
SpaCy is a free, open-source library for advanced Natural Language Processing (NLP) in Python.
If you’re operating with plenty of text, you’ll eventually want to know more about it. For example, what’s it about? What do the phrases suggest in context? Who is doing what to whom? Which texts are just like every other?
Certainly, spaCy can resolve all the problems stated above.
Linguistic Features in SpaCy
SpaCy goes about as an all-inclusive resource for different tasks used in NLP projects. For instance, Tokenization, Lemmatisation, Part-of-speech(POS) labeling, Name substance acknowledgment, Dependency parsing, Sentence Segmentation, Word-to-vector changes, and other cleaning and standardization text methods.
Installation of SpaCy
!pip install -U spacy
!pip install -U spacy-lookups-data
!python -m spacy download en_core_web_sm
Once we’ve downloaded and installed a model, we will load it via spacy.load(). spaCy has different types of pre-trained models. In addition, the default model for the English language is en_core_web_sm.
Moreover, the NLP object is a language instance of the spaCy model. And, this will return a Language object containing all components and data needed to process text.
import spacy nlp = spacy.load('en_core_web_sm')
Tokenization in SpaCy
Tokenization is the task of splitting a text into meaningful segments called tokens. The input to the tokenizer is a Unicode text and the output is a Doc object.
In addition, a Doc is a sequence of Token objects. Each Doc consists of individual tokens, and we can iterate over them.
doc = nlp('We are learning SpaCy library today') for token in doc: print(token.text)
Part-of-speech tagging
Part of speech tagging is the process of assigning a
POS tag to each token depending on its usage in the sentence.
doc = nlp('We are learning SpaCy library today')
for token in doc:
print(f'{token.text:{15}} {token.lemma_:{15}} {token.pos_:{10}} {token.is_stop}')
Dependency Parsing
Dependency Parsing is the process of extracting the dependency parse of a sentence to represent its grammatical structure. It defines the dependency relationship between headworks and their dependents.
The head of a sentence has no dependency and is called the root of the sentence. The verb is usually the head of the sentence. And, headwork is related to all other words.
doc = nlp('We are learning SpaCy library today')
for chunk in doc.noun_chunks:
print(f'{chunk.text:{30}} {chunk.root.text:{15}} {chunk.root.dep_}')
Lemmatization
Work-related = nlp('We are learning SpaCy library today') for token in doc: print(token.text, token.lemma_)
Sentence Boundary Detection
Sentence Segmentation is the process of locating the start and end of sentences in a given text. This allows you to divide a text into linguistically meaningful units. SpaCy uses the dependency parse to determine sentence boundaries. And to extract sentences in spaCy sents property is used.
doc = nlp('First Sentence. Second Sentence. Third Sentence.') print(list(doc.sents))
Named Entity Recognition
Named Entity Recognition (NER) is the process of locating named entities in unstructured text. After that classifying them into pre-defined categories. Such as person names, organizations, locations, monetary values, percentages, time expressions, and so on.
In order to improve the keyword search, we populate tags for a set of documents. Named entities are available as the ents property of a Doc.
doc = nlp('We are learning SpaCy library today')
for ent in doc.ents:
print(ent.text, ent.label_)
Similarity
The similarity is determined by comparing word vectors or “word embeddings”, multi-dimensional meaning representations of a word.
As you can see in the example,.
tokens = nlp("dog cat banana afskfsd") for token in tokens: print(token.text, token.has_vector, token.vector_norm, token.is_oov)
Conclusion
In conclusion, spaCy is a modern, reliable NLP framework that quickly became the standard for doing NLP with Python. Its main advantages are speed, accuracy, extensibility.
We have gained insights into linguistic Annotations like Tokenization, Lemmatisation, Part-of-speech(POS) tagging, Entity recognition, Dependency parsing, Sentence segmentation, and Similarity.
|
https://blog.knoldus.com/is-spacy-python-nlp-any-good-seven-ways-you-can-be-certain/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
in the middle of the preliminary inquiry, Wilson disappeared. At
the time of Gopie and Sargeant’s trial, Wilson remained at large.
[31] Both Gopie and Sargeant were charged with one count
of conspiracy to import a controlled substance and one count of
importing a controlled substance. The first charge was later
changed to conspiracy to import a narcotic.
[32] Fraser pleaded guilty to importing cocaine and received
a conditional sentence of two years less a day. Gopie, Sargeant
and Gittens initially were to be tried together but, at the beginning of the trial, with the Crown’s consent, the trial judge
ordered that Gittens be tried separately.
[33] Fraser was the prosecution’s main witness at Gopie and
Sargeant’s trial.
[34] The central issues at trial were whether the evidence
proved that Gopie and Sargeant knew about a conspiracy to
import a narcotic and whether they were co-conspirators in the
importation scheme.
[35] Gopie did not testify. Sargeant testified and denied
involvement in the conspiracy.
[36] The jury convicted Gopie of conspiracy to import a narcotic and acquitted Sargeant of both counts.
The Issues
[37] On his conviction appeal, Gopie submits that(1) the trial judge erred in leaving the conspiracy count withthe jury or, alternatively, that the verdict on the conspiracycount was unreasonable;
(2) the jury charge was inadequate in several respects; and
(3) the application judge erred in dismissing the Application.
1. Was the verdict unreasonable?
[38] At the close of the Crown’s case, Gopie moved for a
directed verdict. The trial judge dismissed the motion. On
appeal, Gopie submits that the trial judge erred in leaving the
conspiracy count with the jury. Alternatively, he submits that
the verdict is unreasonable.
[39] Gopie contends that the Crown’s case, taken at its highest, established no more than his presence for a limited number
of events as part of Wilson and Fraser’s plan to import drugs
unfolded. He argues that the evidence could not reasonably support the inference that he had agreed to import a narcotic with
Wilson or anyone else. Therefore, the motion for a directed verdict was wrongly dismissed.
|
https://digital.ontarioreports.ca/ontarioreports/20180629/?pg=109
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.